mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It is a matter that Microsoft does not care that its changes to the OS will cause this equipment to malfunction. Microsoft does what Microsoft wants to do.

That's right on. The cause of a lot of problems with Windows, and across versions of Windows, is that Microsoft maintains very relaxed standards (by "standards" I mean actual specifications for their APIs and the OS's exact behavior), and versions of Windows can vary a lot within those relaxed margins. To make matters worse, Microsoft often decides to redo its APIs, or parts of them, effectively deprecating the older ones, which it then drags around half-heartedly to keep some backward compatibility, which often leaves a lot to be desired.

If you write very specialized software, you simply cannot write it "correctly" according to Windows' APIs, because their specifications are too vague and leave too much uncertainty. The only option is to assume the most likely behavior, test it to confirm your assumptions on specific versions of Windows, and cross your fingers. If people use your software on a slightly different version of Windows, it probably will work alright most of the time, but it could invalidate your assumptions, and break your software. This is the bane of all Windows software developers and maintainers.

When DOS was replaced by Windows, almost all of the old real-time hardware was rendered obsolete.

DOS was an acceptable substitute for a hard real-time system, but only by virtue of its …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There is also a verified repository that was created by the auditing team. There, you can find the last unchanged version of TrueCrypt, i.e., version 7.1a, with verified hashes. The version on the SourceForge site has been altered by the TrueCrypt developers to disable the ability to create new containers (partitions, folders, etc.), so it's just a version that allows you to access your existing containers temporarily while you migrate to something else. If you want to keep using TrueCrypt, you need to use the 7.1a version.

There is also a somewhat official fork called VeraCrypt.

But like others have said, you should probably wait for the final report of the audit (which is apparently still ongoing) before deeming TrueCrypt unsafe. I think that much of the community is just on hold until that moment, so they know what to fix in an eventual fork or restart of TrueCrypt development. It would be hasty to do anything before there is a final report (or other news) from the auditors.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are a lot of tutorials out there on this topic. Here are a few:

http://www.codeproject.com/Articles/598695/Cplusplus-threads-locks-and-condition-variables
https://solarianprogrammer.com/2011/12/16/cpp-11-thread-tutorial/
http://www.bogotobogo.com/cplusplus/multithreaded4_cplusplus11.php

Also, the standard C++11 threading library is almost entirely based on the Boost.Thread library (the Boost libraries are sort of the antechamber for creating and testing to-be-standard libraries). So, you can also follow tutorials and documentation on using Boost.Thread, because it has a longer history of being used, and thus, there is more material to read about it.

Also, getting C++11 threads to work on Windows could be a bit tricky. You will need either Visual Studio 2013 (anything older will not support it, or at least, not well), or a bleeding-edge version of GCC from MinGW-w64 (notice that the "new" features include "Winpthread: new library, pthreads implementation for Windows") or from TDM-GCC (notice that the change-log says "C++11 concurrency features (including std::thread) are now available.").

If you can't get C++11 threads to work, you should use Boost.Thread, which is, as I said, virtually identical to the standard C++11 thread library (but works on older compilers too, and on Windows).

Finally, you should also know that most of the difficulties related to writing multi-threaded applications are language-independent. So, you might want to do some research on that topic in general, not specifically for C++. Writing multi-threaded applications is an art, and it can be very challenging (and very frustrating at times), and all that has nothing to do with …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you look at the documentation for setHtml function, you will see that it has a second parameter (defaulted in your case) to specify the base URL (or file path) from which to get any external objects (stylesheets or images) referenced in the html. Here is a quote of the reference:

"External objects such as stylesheets or images referenced in the HTML document are located relative to baseUrl."

As rubberman often says: RTFM! (Read The Manual!)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you don't know ahead of time what size you need for the arrays, then you need to use a dynamic array type instead of std::array (which is only for fixed-size arrays). The class you can use is std::vector (see docs). This would make your lines look like this:

typedef map<string,set<MatchRecordPointer>> PlayerHistoryMap;
typedef vector<PlayerHistoryMap> ArrayOfPlayerHistoryMap;
typedef vector<ArrayOfPlayerHistoryMap> Array2DPlayerHistoryMap;
Array2DPlayerHistoryMap PlayerMatchHistory;
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Would it hurt if I used dir over ls?

No. dir is actually identical to ls in Linux. The only difference is the extra character to type. And of course, it carries the shame of having to admit that you still cling to the shackles of MS-DOS, your previous oppressor (it's called the Stockholm syndrome).

I ought to hear that Ubuntu was user-friendly :(.

For one, as user-friendly as Ubuntu can be, the Linux terminal is the Linux terminal, regardless of what distribution you use, and it's only user-friendly once you are used to it (know the commands and all that). It'll get easier as you get better with it; that's all I can say. It's a bit of a learning curve, but not a long one, so don't worry (I would say a month or two and you'll be proficient with it).

And second, you are talking about installing an unusual distribution of the Apache server and related tools ("unusual", as in, it differs from the main-line supported versions in Ubuntu's official repositories), from an installer (the .run file) which is to be run as root in the terminal. You are not exactly being friendly with the system here, which is OK; you need to do what you need to do. Just don't expect the OS to be friendly back. ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The AVL tree is one of those things that I would qualify as a fundamental data structure and balancing algorithm, good to know and understand, but with very limited practical use in itself. The thing is, if you need a self-balancing binary search tree, you would probably select one of a number of better options, like the red-black tree that is used in most standard library implementations of a binary search tree (std::map / std::set).

There is, however, one practical application of the AVL tree that I can easily think of (because I wrote such code). If you want a cache-efficient and memory-efficient data structure for storing a binary search tree, you need to use a contiguous layout (e.g., breadth-first layout, cache-oblivious vEB layout, etc.). That means that balancing algorithms based on tree rotations, like that of the red-black tree, cannot be used, because there is no way to perform tree rotations efficiently in such a layout; if used, they would make things worse. In that case, the balancing algorithms of the AVL tree are pretty much the best option. If Hiroshe is right that an AVL tree is used in the Linux scheduler, then it's possibly for these reasons.

Another important role of the AVL tree is that more general search trees, like metric search trees or space partitioning trees, also require balancing algorithms that do not involve the tree rotation tricks, because they make no sense when the sorting is multi-dimensional (as opposed to uni-dimensional, as …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The de facto standard compiler suite for Linux is GCC (the GNU Compiler Collection), which you can install if it is not pre-installed on your distro. Under Ubuntu, you can simply install the "build-essential" package, which pulls in all that you need (compilers, make, autoconf, etc.). Other distros will have an equivalent package. If not, you can install the individual packages for the GCC compilers (gcc for C, g++ for C++, gfortran for Fortran, gcj for Java, etc.) and the "binutils" package, as well as other useful tools like "make", "autoconf", "cmake", and so on.

The other alternative compiler on Linux, which some distros have in their package repositories, is Clang (clang, clang++, etc.), which is a more modern (but less stable) replacement for GCC.

But up to here, this will only get you as far as being able to compile some code in the terminal / command-line.

To actually do some coding, you can install an IDE. There are plenty of decent IDEs for Linux and C++, like CodeBlocks, Geany, KDevelop, NetBeans, Qt Creator, etc., which are all offered in most distros' package repositories, so you can easily install them from apt-get or yum or your software center application.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We don't do people's homework here. If you need help in solving this problem, you have to start by showing us what you have done so far and where exactly you hit a snag. In other words, you need to show that you've made efforts to solve it before anyone will consider helping you.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

VC6 and VS2010 are not binary compatible (nor are most versions of Microsoft's compilers; the ABI changes with nearly every release, even for revisions of the same version). This means that the binary layout of classes (e.g., how objects look in memory) can be significantly different between those two versions.

If your third-party DLL was compiled with VC6, then you cannot use it with VS2010. You will get exactly the kind of issue you are describing (objects coming back from it that appear half correct but all messed up). You need to get a newer version of that DLL that is compiled for VS2010 (or compile the DLL yourself from the sources, if you have them). If that is not available, you will have to resort to a much more drastic approach: wrap all the functions of the DLL into a C API (not C++) that you export from your own DLL, which you compile with VC6, and then use that wrapper DLL in your VS2010 project. This works because C APIs are binary compatible across compilers, but it means that you cannot use any C++ features in those exported functions. That latter solution is pretty intense in terms of work and rewriting of code.

Microsoft compilers have never had a stable ABI (binary interface specification), meaning that you must always ensure that your third party libraries are either specifically made for the same compile version that you have, or that they only have …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You need to declare the sortbyScore function as a static member of your class.. i.e., like so:

class Game {
  // ...

    static bool sortbyScore(const PlayerScore &ps1, const PlayerScore &ps2);
};

bool Game::sortbyScore(const PlayerScore &ps1, const PlayerScore &ps2)
{
    return ps1.score > ps2.score;
}

// call sort with:
sort(scores.begin(), scores.end(), &Game::sortbyScore);

Otherwise, you could also make the sortbyScore function into a free function, like so:

bool sortbyScore(const PlayerScore &ps1, const PlayerScore &ps2)
{
    return ps1.score > ps2.score;
}

// call sort with:
sort(scores.begin(), scores.end(), &sortbyScore);

Either way, that should work.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

tell me which ubuntu version shall i go along with virtual machine?

The latest LTS version is probably the best place to start, that is, version 14.04.

can i use an old desktop pc running window xp and install a Linux Distro to get me started

Yes, that's a very common thing to do, and I often recommend that as a way to give a second life to an older PC.

will the hardware components of such pc be good enough to get me going with no hitch

Yes. Linux tends to have much lower requirements on the PC, and as far as drivers are concerned, older hardware is usually well-supported (it's cutting edge hardware that can cause some troubles).

then if am able to download any Linux Distro online using my window vista,will i be able to burn that onto a cd/dvd in windows os,such that i will be able to use the cd to run the distro on that desktop i intend using

Yes, you can certainly do that. There are several installation guides depending on what you want to do.

Then what is the basic hardware spec to run a Linux Distro.

Basic requirement for Linux is generally much lower than for Windows. However, for much older hardware, you might have trouble running the latest versions of the distros, so you might have to consider using an older version (with lower requirements) or using a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Using VirtualBox is definitely a great option for you, given the needs you have expressed. There are plenty of easy to follow tutorials out there about how to go about creating one of those. For example, this one seems easy enough to follow.

For beginners, the preferred Linux distribution is probably Ubuntu (or maybe Kubuntu). There are many distributions, and most are easy to use, like for example Linux Mint (more like Windows 7) or ElementaryOS (more like Mac OSX). The nice thing with using VirtualBox is that you can pretty easily test out different distributions until you find the one that suits you the best. For most distributions, the install process is about as easy as with Ubuntu, which is about as hard as installing any Windows application (just answer some basic questions and hit next a bunch of times), especially when installing into a VirtualBox. It's only when you set a dual-boot that things can get a little bit tricky (but not too hard either, you just have to be careful, follow instructions, and understand what you're doing).

The only slight issue with VirtualBox is that it can be quite a bit slower than if it was running "natively", which is why you would normally move on to a dual-boot setup if you use Linux more intensively (e.g., like I only use Windows on very rare occasions, for special purposes).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The standard does not require a specialization of the std::function for variable argument functions. There are two reasons for that. First, variable argument functions (or C-style variadic functions) are not recommended to be used, ever, because they are completely unsafe from a type-checking point of view and have only been kept around in C++ for backward compatibility with C (e.g., variadic functions were used to implement standard C functions like printf()). Second, variable argument functions present some difficulties when it comes to implementing type-erasure, which is the technique used in implementing std::function such that it can store whatever is given to it (function pointer, lambda, etc.) and call it with perfect forwarding of the arguments it is called with.

Now, the reason why you can form something like std::function<void(...)> is that this invokes a specialization of std::function that does not exist, and therefore, it becomes what is called an "incomplete type". An incomplete type simply means that the compiler knows that you are invoking a type, but doesn't really know what that type is (e.g., it doesn't exist or hasn't been defined yet). There are certain things you can do with incomplete types, and others that you cannot do with them. Basically, anything that requires any knowledge about the type cannot be done with an incomplete type, such as declaring a variable of that type, calling a member function of it, and things like that. Some things don't actually require any knowledge about the type and can therefore be done …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

ok.. but why that error message?

Because you cannot use auto for non-static data members. That's what the standard says, and that's what your compiler says, and so, that's as good as a fact of life that you have to accept. The rationale behind that rule is too much to explain. The auto keyword does not provide anything more than the convenience of not having to explicitly write out the type, and therefore, it's not necessary for anything, simply convenient where it is allowed to be used, but for non-static data members, it is not allowed, so you have to explicitly write the type instead (which you can always do).

seems that i can't use lambdas with auto

You certainly can use auto with lambdas. In fact, part of the rationale for introducing auto was to make the use of lambdas more convenient.

seems that my best thing still be the pre-compiler way:

Arrgh... What is it with you and archaic features and pre-processor tricks? It seems like you always come back to some MACRO to create odd constructs that I wouldn't even advise to do even outside of a MACRO, let alone in one. I tell you to stay away from MACROs, you run towards them. I tell you to stay away from variadic functions, you present a variadic function solution, and you double down with var-args macro in the mix... please stop hurting me like that.

but i wanted something like …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

No, you can't do that. auto is a reserved keyword that tells the compiler to infer the type from the expression that initializes the variable. In other words, it's a placeholder for a deduced type, it is not a type by itself.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You have to use a template alias. As so:

template <typename... Args>
using Event = std::function<void(Args...)>;

Event<int,int> test = [](int x, int y)
{
  int a=x+y;
  std::string b = std::to_string(a);
  MessageBox(NULL, b.c_str(), "hello", MB_OK);
};

The reason your initial typedef compiled is that it uses void(...), which declares a variadic function, not a variadic template. That makes it a valid declaration (I think), but a rather useless one: you would technically have to assign a variadic function to it, not "any function", and I'm not sure you can do that with lambdas. And if I try with a more correct syntax to initialize Event test, there is still an error about Event having an incomplete type, which indicates that a std::function over a variadic function signature is not valid (which I can understand). Variadic functions are essentially obsolete and completely avoidable in C++11, so stay away from that archaic feature.

You could also simply do this:

auto test = [](int x, int y) {
  int a = x+y;
  std::string b = std::to_string(a);
  MessageBox(NULL,b.c_str(),"hello",MB_OK);
};
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think you need to look at QSizePolicy (accessed from QWidget::sizePolicy()). Basically, the size policy is what allows you to specify how widgets should grow automatically when their parent widget (with a layout) resizes, and things like their minimum / preferred size (e.g., to force to parent widget to adopt sufficient size to display its contents).

I don't know exactly the programmatic steps to do this, because I mostly create my GUIs in Qt Designer, where you can set all those things in the menus and property editors, which is much easier.

For resizing the main window, I would presume you either just use the size property, or you rely on the size policy, as I just mentioned.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

In many ways, C++ is the mother of all modern languages, in most application domains. So, is it enough by itself? Not really, but it's a really good start or foundation. In any case, I don't think you can really become an expert C++ programmer without being exposed to a lot of other stuff in the meanwhile. I mean, at the very least, you are going to need some decent scripting skills (e.g., Bash). For the rest, as you do things, you are going to need other things, like knowing SQL to interact with some databases, or knowing some PHP or Python or something for some higher-level tasks (even if it only serves the purpose of exposing or using your C++ code at a scripting level). And then there are plenty of other languages and skills you'll need to pick up in other specific domains.

Remember that programming languages are tools of the trade, it's the trade you have to concentrate on learning. If a company is looking for a programmer, they will be looking for one with particular skills in a specific domain (e.g., databases, web servers, numerical analysis, data analytics, etc..) and even if they say that the job is in doing "C++", it is generally implied that you are familiar with a lot of other things that are specific to the domain in question (e.g., for a "numerical analysis" job, it is implied that you are familiar with fortran, Matlab, and many numerical analysis algorithms, and so …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Just to clarify: in those listings, the version of the MSVC compiler used by Visual Studio 2012 is MSVC 11, and there is also the November 2012 CTP, which added quite a few of the more useful features (variadic templates, initializer lists, etc.). Visual Studio 2013 uses version 12.

Overall, the support for C++11 features in Visual Studio 2012 is pretty slim, I would say, almost insignificant. You really have to move to 2013 (Update 3) to get something that could be qualified as "full support", roughly.

In any case, I generally use Boost MACROs (header <boost/config.hpp>) to determine which features of C++11 can or cannot be used in the code. The Boost team works pretty hard to keep this list up-to-date. You could easily make a program that tests the features for different compilers to know which are available or not on a particular compiler (i.e., just print out the list of macros that Boost defines for a given compiler version). They generally determine which compiler supports which features by literally trying them out with a test suite that exercises each feature.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Probably the easiest answer is just to get CodeBlocks.

And also, do not use Turbo C++; it is way too outdated to be relevant or worth using. It is a very old IDE that uses a pre-standard C++ compiler. Do not use it. And if it was recommended to you by your teacher, tell him that he should update his 20-year-old (or older) course program, because C++ has changed a lot in the past 20 years (including 4 revisions of the standard).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This whole question of "What do you have to hide?" is a subtle case of the "loaded question" fallacy (like "Do you still beat your wife?"). This fallacy is about loading the question with a premise such that whichever answer is given, the premise is implicitly accepted or acknowledged in the process. In this case, the premise is that you have no right to privacy and the only reason you would seek privacy is for doing criminal or otherwise immoral things. If you say you have things to hide, you acknowledge that this is your reason to seek privacy. And if you say you don't have anything to hide, you acknowledge that you are willing to be an "open book", so to speak, with no privacy.

The reality is, everyone has things that they would prefer to keep private. And the vast majority of those things are not illegal or even immoral, just maybe a bit embarrassing, or personal, or simply "none of your business".

The real purpose of the invasion of privacy is thought policing. People tend to have an unrealistic idea of what "thoughts" are. People picture "thoughts" as what you think about when you're home alone, in bed before you sleep, or on the bus, or whatever. In reality, thoughts consist of much more than that: they are the things you are curious about and might look up on the internet, the things you read, the things you watch, the things …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I have nothing against ElementaryOS; it's just that its developers made a number of conscious choices that favor a cleaner, purified interface, tailored more towards the average "check emails and surf the web" type of user. By making those choices, they sacrificed other things, which makes it less of a practical working environment. So, it's not that it's bad, it's just that it is consciously aimed at mass-market users, the same kind of crowd that Apple aims to please. I like KDE for work, for example, but I would certainly not say that it is exciting to use, or that people looking for an OSX-style experience would like it; different strokes for different folks.

I guess elementary could ship these features out of the box but then the iso would be much larger and heavier.

Exactly. Providing more "work" apps would be a change of aim that would result in a significantly larger OS, which defeats the point of it. It's just like a construction worker needs a pick-up truck to carry all his tools and construction materials as well as getting around a construction site. But a pick-up truck is not appropriate for most urban people who will prefer a smaller economic car that is easy to park in parallel and gets them from A to B. But my point is, it's silly to take a small compact car and try to turn it into a pick-up truck or try to use it as such. …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What programming language are you using @Mike?

About 99% in C++. The rest is C (low-level) and Python (high-level). And of course, there are always times that call for some work in Matlab.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hopefully it's not a PhD in computer science or any kind of technical field... if so, she should do it herself.

Otherwise, this should be posted to business exchange forum, or to rent-a-coder.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If your compiler supports C++11 (the latest language standard), you can also initialize a vector with multiple values (called an initializer-list), like so:

std::vector<int> fu = {1,3,4,5,6};
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

what languages would be used to create something like adobe photoshop?

Photoshop is entirely written in C++, as far as I know. They've released a lot of open-source code in the past, always in C++, and their developers have been active in the C++ community.

Actually it depends on compiler

No, it does not. It depends on the system and external libraries that can be used to facilitate the process of constructing and programming GUI applications. And for any platform, any C++ compiler can be used to compile code that uses one of these system or external libraries.

if you are using a compiler DEV C++ then it does not supports GUI

First, DevC++ is not a compiler; it's an IDE (Integrated Development Environment), which is just an application that assists you in writing / compiling / debugging code. On one end of the spectrum, you can just write code in Notepad (or some other basic text editor) or some enhanced text editor like emacs, vim, sublime-text, Kate, gedit, etc. On the other end of the spectrum, you can write code with a comprehensive IDE like Visual Studio or C++Builder XE6. And in the middle, there are lightweight IDEs that just assist you in writing code (with highlighting, code completion, and documentation tooltips for classes and functions) and perhaps debugging it (step-through) without too many heavyweight features (like …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If I'm not mistaken, the 48-bit limit on RAM addresses on 64-bit platforms has to do with the virtual address space. Virtual addressing (for those who don't know) translates the relative logical addresses that processes use for their own memory (program data) into the physical addresses in RAM. This is implemented by the CPU architecture (firmware / hardware) and is specified as part of the instruction set, not by the operating system. So, unless you were to implement some complicated workaround scheme (which, as rubberman says, is a real PITA), that is the effective limit. The x86-64 (amd64) instruction set basically makes virtual addresses 48 bits long, thus limiting RAM addressing to 256TB total (and the Linux kernel reserves half of that for kernel-space code, leading to a 128TB limit). Unless the OS imposes further limitations, the supported amount of RAM depends mostly on the instruction set and the size of its virtual address space.

I found this list of limits for kernel 3.0.10:

https://www.suse.com/products/server/technical-information/#Kernel

Apparently the limit is 64TB, though even the people at SUSE have not been able to put together a system with that much RAM.

There are Linux supercomputers that have lifted that limit considerably - not a task for the noobie kernel hacker... :-)

Aren't supercomputers using very different instruction sets? Like Sparc, System/Z or PowerPC instruction sets. These tend to be much more scalable when it comes to the amount of RAM and CPU cores. I …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I see no reason why this wouldn't work. You picked the chipset that is recommended for the given processor, and it supports the given mmc clock rate as well as the 64Gb capacity (total). As for Linux, the kernel has supported terabytes of RAM for a decade at least, and currently supports about 128TB of RAM and 4096 cores (and running more than a million processes at once), if I'm not mistaken (remember, Linux is still primarily a server-grade operating system, and thus can support really beefed-up systems).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Je suis divisé
On m'a multiplié
J'ai jamais rien soustrait
Et je laisse la somme aux autres

  • Claude Péloquin (Québécois Poet)

(English)
I'm a man divided
They multiplied me
I never subtracted anything
And I leave the sum of it all to others

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If Dani already has an answer to each one of these questions, that would really freak me out, and I would have to start picturing her living with an imaginary friend, and fifty stray cats. ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think this definitely makes sense from both the point of view of quality of results and moving the web towards more security. Most "serious" websites out there are already secure using https, so, it is definitely, in part, a valid indicator of quality for a website. And, of course, all sites should be using secure communications because a huge number of cyber-attacks or other malpractice on the internet is based on sniffing the plain-text traffic around the web. Plain-text traffic should not be the norm.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What makes Bill a better name than Fred? His name is Fred. Stop trying to rename him!

Maybe Bill can be his middle name? ... Like "Fred Bill Daniwebber, the first of his name".

<M/> commented: i like it +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The more special a package is, the harder it is to do any kind of upgrade or replacement of parts. A normal desktop PC is easy to upgrade because the parts are standard, easy to reach, assemble and disassemble, and I've done it many times. A laptop is harder because parts are not as standard, they often have special adaptors or form-factors that are made for that particular model or series of laptops. Disassembly and reassembly of a laptop can be tricky because everything is so tightly packed and delicate (like those darn plastic clips that always break when you pull something out), and there isn't much room for too many additional or bigger components. I have never really upgraded a laptop, or even replaced a part in one, except for replacing the keyboard (which is pretty easy). As for an "all-in-one" computer like yours, I would imagine that it is even worse than with a laptop. I think that all-in-one computers usually have a special motherboard just for that model or series, and everything is meant to be the same forever, not really "upgradable".

You really only have two options: install a lightweight OS that this computer can run well (such as Lubuntu, Xubuntu, Peppermint OS, Puppy Linux, etc.); or, you can buy a new computer. You could also do both, I guess, buy a new computer and install a lightweight Linux distro on your old one.

The reality is, an Atom N270 is never going to run Windows …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

so thought that I would test it out, when I had a chance ... using a C++11 compiler.

Why? My code does not require any C++11 features. It is purely C++98 code.

This is what I came up with that actually did handle both kinds of errors the way expected ... and I thought you might like to see it.

Ah, yes, this has been a pet peeve of mine with the iostream library, i.e., the lack of commitment to using exceptions. Despite the many nice design patterns used in the iostream library, there are also some glaring problems. One such problem is that it was largely written at a time when people were still very reluctant to adopt exceptions as a primary error-handling mechanism. So, even though they made all the iostream classes into well-designed RAII classes that can be gracefully used alongside exception throwing and handling, they made them use error-codes (or error-states, which are the OOP equivalent of error-codes) as the default error-handling method and made exceptions second-class citizens in the iostream classes. I find the option to report iostream errors via exceptions so poor that you can hardly use it. If I have to, I just check the error-states and throw an exception if needed. Like this:

try
{
    std::ifstream fin( FNAME );
    if( !fin.is_open() )
        throw file_not_openable(FNAME);
    std::vector< double > data2;
    std::copy
    (
        std::istream_iterator< double >( fin ),
        std::istream_iterator< double >(),
        std::back_inserter( data2 …
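To round that snippet out, here is a minimal, self-contained sketch of the same idea. The file_not_openable exception type is just an illustrative name, defined here as a thin wrapper over std::runtime_error:

```cpp
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative exception type (not a standard class).
struct file_not_openable : std::runtime_error {
  explicit file_not_openable(const std::string& fname)
    : std::runtime_error("could not open file: " + fname) {}
};

// Read all doubles from a file, checking the error-state ourselves and
// throwing an exception, rather than relying on iostream's exception mode.
std::vector<double> readDoubles(const std::string& fname)
{
  std::ifstream fin(fname.c_str());
  if (!fin.is_open())
    throw file_not_openable(fname);
  return std::vector<double>(std::istream_iterator<double>(fin),
                             std::istream_iterator<double>());
}
```

The caller then gets a single, uniform error-handling path (a try-catch block) for both the "file missing" case and anything else that throws.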
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There is no way to generate a truly random number from a computer program. The random number generators that most programming language libraries provide are actually what is more precisely referred to as "pseudo-random number generators". They simply use some math functions that behave very wildly (in reality, these math functions are very tricky to define because you need them to produce "wild" numbers, but you also need those numbers to be uniformly distributed over a certain range). In any case, those numbers are deterministic and predictable, even if they don't look like it.

To produce truly random numbers, you generally need some hardware that taps into some external, naturally-occurring, random process. These are called hardware random number generators, and most modern computer chipsets have such a device so that they can support cryptography and other things that benefit from having truly random numbers (e.g., numbers not predictable by an attacker who is trying to break the security). Many programming languages also provide standard functions to access that random device, such as std::random_device in C++.
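As a quick sketch of that last point, this is how std::random_device is typically used in C++: to seed a pseudo-random engine rather than to generate every number itself, since the hardware device can be slow (rollDice is just a made-up example function):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Roll a six-sided die n times; the seed comes from the system's
// entropy source (hardware-backed where available).
std::vector<int> rollDice(std::size_t n)
{
  std::random_device rd;                         // non-deterministic seed
  std::mt19937 gen(rd());                        // fast pseudo-random engine
  std::uniform_int_distribution<int> dist(1, 6); // uniform over [1, 6]
  std::vector<int> rolls;
  rolls.reserve(n);
  for (std::size_t i = 0; i < n; ++i)
    rolls.push_back(dist(gen));
  return rolls;
}
```

Two different runs of this will (almost certainly) produce different sequences, because the seed itself is not deterministic.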

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

And Miley Cyrus can make a comeback to disney as a tongue-flailing, crazed, twerking, naked, alien... no makeup required.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Will this work?

Yes. However, notice that there is a difference between that program and your original one. In the original program, if there is not enough memory (bad-alloc exception) to store all the numbers, the program will not read any number at all. In the second version, the program will read numbers up to the maximum that it can read before it runs out of memory (if it ever runs out of memory). That could make a difference if you expect the amount of data to be large enough to exhaust the memory available. For example, if you are running this within a multi-threaded application, the second version could exhaust the memory and cause a bad-alloc problem in a concurrent thread, where you might not expect such an exception to occur (and thus, not have the try-catch blocks to handle it).

The while(!in.eof()) loop is within the try-block. Is this okay? Or should the try-block be within the while loop? Does it matter?

It is better to put the try block outside the while loop, as you did. The reason why it matters might not be obvious at first, but it's an interesting one. In general, with loops, you want to keep the inside of the loop as short as possible because this is code that gets repeated over and over again. If the code for a single iteration can fit within a cache page, then the entire loop can run over and over again …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

In that example, the vals array would be called a static array or fixed-size array. But the tests that are done in the get and put functions would be called "bounds-checking", making the array access a bounded array access (i.e., the array access is "bounded"). So this might explain where you got the term "bounded array" from.
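For illustration, a fixed-size array with bounds-checked ("bounded") access along those lines might look like this (the class and member names are just my guesses at the shape of that example):

```cpp
#include <stdexcept>

// Sketch of a fixed-size array whose access is "bounded" by index checks.
class BoundedArray
{
  static const int N = 10;
  int vals[N];   // static, fixed-size storage
public:
  int get(int i) const
  {
    if (i < 0 || i >= N)   // the bounds check
      throw std::out_of_range("index out of bounds");
    return vals[i];
  }
  void put(int i, int v)
  {
    if (i < 0 || i >= N)   // the bounds check
      throw std::out_of_range("index out of bounds");
    vals[i] = v;
  }
};
```

The storage is static and fixed; it's only the checked access that earns it the "bounded" label.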

Hiroshe commented: you beat me too it +8
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Are you using some external library? Anything other than the standard libraries? If so, verify that the libraries you are linking to have been built with the same version of MinGW (or close to it) as the one you are using. This "dw2" version of the DLL is an obsolete file that newer versions don't use (and don't install), but if an external library that you use was built to still use it, you'll get this kind of error.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

In view of the latest head-butting between Linus Torvalds (head of Linux) and Kay Sievers (head of systemd), this looks to be related. Apparently, systemd has had a long history of ignoring bugs and forcing others to work around its quirky or sub-standard software. The fact that it doesn't strictly filter logs seems to be in line with that reputation. I would assume that in any seriously maintained software project, such trivial vulnerabilities would be patched (up-stream and down) as soon as they are discovered.

I recently went through the exercise of compiling the Linux kernel from source for an ARM board. Systemd was one of those components that was constantly giving problems along the way, but in all fairness, it's pretty central too. This project should be handled with double the rigor of any other project, but instead, it seems to get about half the rigor of other projects.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The 880M card was added pretty recently (last May) to the officially supported list, as of Nvidia Linux driver version 331.67. This is pretty much the latest stable Nvidia driver (there are a couple of later Beta drivers). So, assuming you can install this driver, i.e., that you have either the know-how to install it manually or a distribution of Linux recent enough to offer it in its repositories of proprietary drivers, it should work fine. Needless to say, such a cutting-edge graphics card requires an equally recent driver, and such a recent driver will, in general, require a fairly recent kernel version, meaning that you should pick a cutting-edge distribution of Linux, such as Arch, or just a very recent version of Ubuntu or OpenSUSE or whatever... The Nvidia site doesn't give a minimum required kernel version, but I would aim for at least 3.10 or later, which most of the newest releases of Linux use (OpenSUSE is at 3.11, Ubuntu at 3.13, and with Arch you would probably get 3.15-16 now (rolling), etc.).

N.B.: The Nouveau driver's team maintains a list of cards supported by their drivers. Your card (GTX 880M) is not yet listed there. I would assume that it won't take too long for the 800-series to be supported, as they already support the complete GTX 7xxM series and everything below it. I would say probably a couple of months. But in any case, you can just use …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Thanks Moschops for that shout out. I was just going to point to that stack overflow answer.

Choosing a good STL container for a given situation is a pretty subtle task, unless it's a trivial problem. I would generally recommend writing your code in a generic way to be able to swap the container type without too much trouble, then use the one that seems the most appropriate (it's usually vector), and test different containers if this code seems to be a bottleneck in your software.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hey guys,

I just wanted to give a shout out to the McGill Robotics team. This is a team of undergraduate students from different engineering disciplines here at McGill. They are participating in the RoboSub competition right now! They managed to build and code a fully operational, autonomous submarine robot in less than a year, which is very impressive.

I'm a fan of that team in part because they are the reincarnation of the McGill LunarEx team, a robotics team that my friend and I founded back in 2008 and which kept going with new students until the lunar excavation challenge was abandoned in favor of this new and challenging competition. Several of the students on the team have also been working in my lab. I've also been involved, at arm's length, with some of their work. I also gave them a crash-course in C++ programming when they were getting started with C++ and ROS for their control software.

If you want to see what robotics is all about, check out the videos on their youtube channel, or check their website. You can also follow their progress live on twitter @McGillRobotics.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A pattern is a repeated decorative design, within the context of this discussion.

Ah, well, yeah, I guess that's the common man's definition. I guess I've been hanging out with the wrong crowd for too long. In my mind, the word "pattern" immediately invokes the more scientific terminology, as in "pattern recognition", "speech patterns", or "patterns of behaviour". So, to me, it's this definition I think of: "a combination of qualities, acts, tendencies, etc., forming a consistent or characteristic arrangement."

My point was that as long as there is a degree of freedom in choosing the specifics of this "characteristic arrangement", there can be some randomness to it. For example, a classic example in pattern recognition is a "table": the set of qualities or features that makes us recognize something as being a "table" is the pattern, and being able to tell a table apart from a chair is the recognition part. If there were no random and unpredictable variations in the designs of tables, then it wouldn't be a problem of pattern recognition. See what I mean?

What makes a pattern a pattern is just as much its set of rules as its degree of freedom (or creative liberty, if you like). Without that degree of randomness, a pattern is not a pattern, it's just a thing (i.e., one particular definite object). If a Texan talks to me, I could say that I recognize the accent (i.e., speech "patterns") and deduce that he is from …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Or, maybe even better ...

Well, if we go into STL territory, we should do it in proper STL style ;)

template <typename OutputIter>
OutputIter fillFromFile(const std::string& FNAME, OutputIter it)
{
  typedef typename std::iterator_traits<OutputIter>::value_type Value;
  typedef std::istream_iterator<Value> InIter;
  std::ifstream fin( FNAME.c_str() );
  return std::copy(InIter(fin), InIter(), it);
}

and call it with:

std::vector<int> v;
fillFromFile("data.txt", std::back_inserter(v));
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Judging by your other post here, it seems that you busted three expensive phones in a row. In my experience, people who really have "bad luck" with their electronic devices are in reality just too careless with them. You might want to consider that. BTW, who's covering the bills for all these phones? I can't imagine he's very happy with you.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

as if i was playing a heavy duty game for sevral hours straight

Did you, by any chance, discharge the battery completely while playing a heavy-duty game? If so, I would bet your battery is toast. Reaching the minimum charge on a battery while drawing a lot of current from it can easily cause permanent damage, after which it can't be fully charged again.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A pattern implies order and random implies a total lack of.

Well, a pattern implies some order, not complete order. If you take a picture of a grass field, there is a pattern there, e.g., a green color palette, vertical lines, a particular spatial frequency (number of grass blades per square meter), and so on, but the picture is definitely random.

Unless the rules of the pattern define exactly and completely how things should be, then any particular instance of a pattern is only that, one particular instance of it, i.e., a "sample". And a sample that is drawn from a random distribution is called a "random sample". With a simple substitution, we get "random pattern".

Further mind-blowing. Here's a random number: 5. How can you tell that this is a random number? If you want to know that it's indeed random, you need to know how I generated it, but if you know how the number was generated, can you still call it random?

you don't care which so you pick a pattern at random

In that case, I think the appropriate term is "arbitrary pattern". Arbitrary is the proper term for "I don't care which".

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Like Hiroshe, I had never heard of a CPU being clocked above 5 or 6GHz, and even then, that requires liquid cooling, or even liquid nitrogen cooling. It would seem that the world record is 8.8GHz.

Basically, the main reason why CPU speed has been limited in the past 10 years is the physical limitation of heat transfer. A silicon chip and the heat sink on top of it have a certain maximum heat transfer rate, i.e., there is a limit to the amount of heat that could possibly flow out of it, even under ideal conditions (ambient temperature at 0 Kelvin). And transistor technology has also pretty much reached its limit. The energy consumption of a CPU, and thus the heat coming out of it, is proportional to the number of transistor switches per second, i.e., the clock frequency. The higher the frequency, the more heat is generated, and there is no way to reduce that amount of heat with existing transistor technology (or any transistor technology; there would need to be a major technology switch to something else (not transistors)). And the only way to increase the amount of heat that can flow out of a CPU is by lowering the ambient temperature or having a better coolant than air (such as water, hence liquid cooling), but even then, you are still limited to the theoretical ideal limit, and I would assume that even that limit isn't much better than …