mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This looks like any typical package management system: rpm (Red Hat family), dpkg (Debian family), apk (Android), and so on... of course, I don't think Windows has anything equivalent to this... but then again, that's what Windows users are used to: a sub-par system.

There are several tools out there that can help you generate packages (of various flavors) from your git repository and keep the packages up to date with it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Warning: Software engineers love to provide vague definitions for every term, and then vehemently insist that they are very different and should not be confused with one another.

Basically, information hiding and encapsulation are, in a way, two sides of the same coin. The idea of information hiding is that your users should not know more about your code (classes, functions, etc.) than they need to in order to use it. The idea of encapsulation is that your code shouldn't be spilling its guts in public, i.e., it should be neatly packed and hermetically sealed. Clearly, that sounds like the same thing, and it often is in practice (there are many ways, especially in C++, to implement both, and they are often the same).

The critical difference is in the point of view, not so much in the actual code. They are two ideals. Information hiding is the ideal of minimizing (down to the essentials) the information about your implementation that the user is exposed to. Encapsulation is the ideal of minimizing the dependencies on external components (or other components of your library) down to only what is essential to its use. In other words, let's say you are writing component A and to write it you use component B, but the user does not need to know about that when using component A. Well, information hiding tells you that he shouldn't know that A uses B, and encapsulation tells you that he shouldn't be bothered by A's …
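To make the distinction concrete, here is a minimal sketch (the Logger and Cache names are hypothetical, just for illustration): the user of Logger is told nothing about Cache (information hiding), and Cache never leaks out through Logger's interface (encapsulation).

    #include <string>

    // Component B: an implementation detail that the user never sees.
    class Cache {
    public:
        void store(const std::string& s) { data = s; }
        const std::string& fetch() const { return data; }
    private:
        std::string data;
    };

    // Component A: only the essentials are exposed (information hiding),
    // and its dependency on Cache is sealed inside (encapsulation).
    class Logger {
    public:
        void log(const std::string& msg) { cache.store(msg); }
        std::string last() const { return cache.fetch(); }
    private:
        Cache cache; // the user of Logger is never bothered by this
    };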

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

And their Visual Stupido on Linux is, of course, some remote access cloud-based thing that no sane programmer would ever want to use, but that is so annoyingly easy to sell to the I-have-no-clue-what-my-employees-actually-do kind of managers. They probably sell this to managers by telling them that the remote access / cloud stuff allows their employees to work from anywhere on any platform.

For instance, during a 2-hour transfer at some airport on a business trip, the Micro$oft sales people will probably argue that their product allows your employees to be productive in those two hours, without mentioning that they'll need a high-bandwidth connection (the kind you rarely get at an airport), a secure connection (the kind you never get at an airport), an outlet to plug in the laptop (because their product will surely suck that battery dry in no time), a table to set it on (so as not to burn your lap), and a VPN connection of some kind (and watch out for shoulder-surfing); then they'll probably have to wait a while for the sync to complete, and then they can start working, but will constantly be slowed down by all the lag in the system.

In the meantime, rubberman can be sitting in the same airport, with a small laptop that he doesn't have to plug in or take off his lap (because it's not overheating), and code away on his handy standalone light-weight code editor on his own local version of the code. If he ever …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Linked lists have nothing to do with matrices, as far as I know. And even if they did, it would be beyond idiotic to use them in that context.

And as Maritimo pointed out, we are not in the business of doing people's homework for them. We provide help in the form of explanations and hints to people who have shown genuine effort towards solving the problem themselves.

As for matrix inversion and LU decomposition, I would suggest you just start with the wiki page on LU and the section on how to invert a matrix with it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Happy holidays! And happy new year!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you only intend to read from the file, then you should only give the ios::in flag, because the default behavior with the ios::out flag is to create the file if it does not already exist. Also, you should use ifstream instead of the more general fstream. It's always better to use the most specialized version that meets your needs; this way you always get the correct default behaviors.
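For example, a read-only open could look like this (the file name is just a placeholder):

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // ifstream opens with ios::in by default and never creates the file.
        std::ifstream file("data.txt");
        if (!file) {
            std::cerr << "Could not open data.txt\n";
            return 1;
        }
        std::string line;
        while (std::getline(file, line))
            std::cout << line << '\n';
    }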

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It's really impressive that the GNU / GCC team managed to do that. When I was checking the source code for deque (see a few posts back), I did see some hints that this might be possible, from the convoluted way in which they structured things, but I didn't want to say anything because there wasn't an explicitly stated guarantee. Thanks for finding this documentation page that formally states the guarantee. Hopefully the OP can use this somehow.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that D might turn out to be another "Esperanto" programming language.

D seems to have tried to satisfy everyone, and therefore, pleases no one. I remember how Andrei was pitching D to C++ programmers by painting the language as essentially the same as C++ but with cleaner syntax for meta-programming, with python-style modules, and with native concurrency and parallelism constructs, all while respecting the main tenets of C++ (scoping rules, value-semantics, templates, multi-paradigm programming, ease of making DSELs, etc.).

That all sounded great until they mentioned that D uses garbage collection, and not only that, but a "Java / .NET" style of garbage collector (a virtual machine supervising everything), instead of a "python / php" style (a reference counting scheme with a reference-cycle breaker). That's a big turn-off for any C++ programmer. So, they had to backtrack a bit and provide some terrible "C++/CLI" style of managed vs. unmanaged memory. What is especially astounding is when you read their official description of garbage collection, which sounds like it was written by a computer science freshman and is full of old myths and tales from Java echo chambers (like the myth that garbage collectors don't leak memory!! lol..). It kind of baffles me a bit because it has started to become clearer and clearer in the expert community that Java-style garbage collection is a failed experiment (along with its "checked exceptions"), and that consensus was building around the idea of Python-style collectors, …

Maritimo commented: Very well explained. Thanks. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What you must do is to create a function foo(int) that internally "saves" the int into a deque.

Right. That's what I described as the typical trick to make this work (creating a pure C interface (or carefully limited C++ interface) between the debug and release parts of the code). But like I said, this could constitute a lot of changes (with the potential for additional bugs) to the existing code, and is therefore not very practical in general.

But this is not the case here, the original problem is to create a queue for floats.

If you read the original post again, kungle is implying that he is working on a substantial project and that the example code about deques of floats is merely one small example of the kind of tricks he had to do, picked out of that larger code-base. Clearly, he is looking for a general solution that will work for a large code-base with many uses of the STL in various places.

So, isolating a few deques behind a C interface is not a big deal, but isolating large sections of an existing code-base and refactoring all the code that calls it is a whole different ball-game. I was addressing the latter, and you seem to be addressing the former.

Typical good style C++ code is riddled with uses of STL containers and algorithms. And if the idea is to go about wrapping each of these behind some C API, then that …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Sorry, I don't agree with mike.

It is brave to disagree with me on such technical matters. Unfortunately for you, this is not a matter of opinion, it is fact, and so, be prepared for the scolding that follows...

It is perfectly posible to create a library to work with std::deque<foo>& compiled with -O3 and work with other code compiled with -Og.

Possible, yes, but very problematic, which is what I said. It's playing with fire beyond what I could sanction, let alone advise people to do, not to mention that it's a lot of pain for little gain.

The typical trick to make this kind of thing possible is to limit the interface between the two libraries (or parts of the code) to be purely a set of C functions involving only primitive types, opaque pointers and some specially-designed objects (this is where it gets tricky, and most people choose to limit them to POD class types, to stay on the safe side). However, on your typical code-base, retro-fitting a C++ library to create this kind of a compilation firewall requires quite a bit of effort, and it's not worth it if your sole purpose is debugging (not to mention that this kind of library overhaul could easily create additional bugs!).
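Here is a minimal sketch of what such a compilation firewall can look like, for the deque-of-floats case (the names and the two-file split are hypothetical): only primitive types and an opaque pointer cross the boundary, so no STL layout is ever shared between the differently-compiled parts.

    // queue_api.h -- the only thing both "islands" of the code ever see:
    // primitive types and an opaque pointer, never an STL layout.
    extern "C" {
        struct float_queue; // opaque: its layout is never exposed here
        float_queue* fq_create();
        void fq_push(float_queue* q, float x);
        float fq_pop(float_queue* q);
        void fq_destroy(float_queue* q);
    }

    // queue_api.cpp -- compiled only on ONE side of the firewall, so the
    // std::deque layout is fixed by a single, consistent set of options.
    #include <deque>
    struct float_queue { std::deque<float> data; };
    extern "C" {
        float_queue* fq_create() { return new float_queue; }
        void fq_push(float_queue* q, float x) { q->data.push_back(x); }
        float fq_pop(float_queue* q) {
            float x = q->data.front();
            q->data.pop_front();
            return x;
        }
        void fq_destroy(float_queue* q) { delete q; }
    }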

Otherwise, if you don't do this kind of insulation between the optimized and the debug parts of your library, you can run into some pretty nasty problems. Assuming a situation where you have a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1) Is this statement correct: In memory, elements of an array are stored "against" each other (I think more precisely, contigious space is allocated for them?) and a pointer to an array points to the first element.

Yes. This statement is correct.

2) Are there any memory-specific issues that the below code will encounter?

No (but as you said, it's incomplete). There are a number of things that could be done better, but there is no memory-related "wrong-doing".

3) If it interests me, would tools such as Valgrind worth looking into now, or should I wait until I have a better grasp of C++?

It's never too early to try out Valgrind. If you have any doubts that some piece of code might be doing something funny memory-wise, like fiddling with pointers in some dubious way, then just run your program in valgrind (it helps if you make sure to compile with debugging symbols, which is done with the option -g for GCC / MinGW). If there is anything wrong, valgrind will most probably catch it. And after all, what's the worst that could happen? The worst is that valgrind gives you a report that you don't understand; if so, just retry later on, or come here and ask us what it means. It can only help you learn better.
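If you want something to practice on, here is a deliberately buggy little program (the file name is hypothetical) that valgrind will flag immediately:

    // leaky.cpp -- compile and run with:
    //   g++ -g leaky.cpp -o leaky
    //   valgrind ./leaky
    int main() {
        int* p = new int[10];
        p[10] = 42; // invalid write: one element past the end
        return 0;   // leak: 'p' is never delete[]'d
    }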

4) My "String List" is not complete, but are there any other glaring issues I might look into?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I also sujest you to compile some files with -O3 and others with the -Og option

That could be problematic, especially for the STL. Since most of the STL consists of header-only libraries (templates), the parts compiled with debugging and the parts compiled without will end up compiling a debug-version and a release-version of the STL components, respectively. These are not, in general, binary compatible. What this means is that if you have a function that takes a parameter of a type like std::deque<foo>&, and the function itself is compiled with one set of options while the code that calls the function is compiled with the other set of options, then the function will interpret the memory that this reference parameter points to differently from how the calling context created it. In other words, their memory layouts will be different, and in general, incompatible, leading to pretty nasty errors.

So, unless you are really careful, this is not a very practical thing to do. And at the end of the day, you won't really be able to improve the performance of the STL components in the debug build, because you still have to use the debug versions of them.

Does an workaround exists I'm not aware of, to efficently use the STL in an development environment?

Besides telling the compiler to try to optimize as much as it can without affecting the debug information, using the -Og option, as NathanOliver said, I don't think there is much else that …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm guessing that Pruno is more popular in American prisons; it sounds very delicious...

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I recently did a conversion from bin/cue to iso with bchunk and it worked just fine:

$ bchunk IMAGE.bin IMAGE.cue IMAGE.iso

I'm not sure if you can omit the .cue file (can the .bin work alone?), but if you have it, then this certainly works.

But it seems the software that I use for investigation does not accept .iso extention just .img extention.

If your software doesn't accept the iso extension, you just have to change your iso file's extension to .img. Like I said, the format is the same; it's just a different extension. Simply renaming the file with the correct extension should work:

$ mv IMAGE.iso IMAGE.img
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

iso and img are the same thing. They are two possible extensions for the same file format.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

According to the documentation, the setHtml() function sets the html data directly (as a QString containing the html code), and you can use toHtml() to retrieve that html code later on. If you got an html page from some URL and you dumped the html source code from that page into a QString that you passed to the setHtml() function, how could it ever be possible for the QTextBrowser object to have any idea of where that code came from?

What you need to use is the source property, which is used to get/set the URL of the page that you display in the QTextBrowser, as is explained in the documentation.
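A minimal sketch of the difference (the file name is a placeholder): setSource() records where the document came from, so source() can return it later, while setHtml() only stores the raw markup.

    #include <QApplication>
    #include <QTextBrowser>
    #include <QUrl>

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        QTextBrowser browser;
        // Load the page through the 'source' property:
        browser.setSource(QUrl::fromLocalFile("page.html"));
        QUrl origin = browser.source(); // the URL the page came from
        browser.show();
        return app.exec();
    }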

In summary, RTFM!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah, like rubberman said, to have good audio driver support, beyond the basics, you need pulseaudio and ALSA. Modern versions of Ubuntu or other debian-based distros install those by default, as far as I know, but this is definitely the first place you need to look to solve your problem. Obviously, you also have to make sure to set audacity to hook up to pulseaudio and ALSA (which, again, should be the default behavior). Here is a nice tutorial on this.

I think that you should make sure to install pavucontrol (pulseaudio volume control), because that should pull in any other things that you might be lacking for recording audio. Also, on Linux, installing VLC is a very good idea, because this be-all-end-all video player pulls in pretty much every conceivable video and audio codec and driver (or driver server, like pulseaudio). You might also want to install alsamixer or other alsa-related tools (just search in the software center for anything related to alsa, and install whatever seems relevant).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Then when I restarted, it was GRUB the default loader

So, that means that the boot repair utility installed Grub on the MBR. There is no easy way to go back now. You pretty much have to keep it like that until the end of time.

easyBCD still doesn't detect that grub loader.

The entries that you created with EasyBCD are now useless. Grub is now on the MBR, and your old (broken) grub that you had on the Linux partition is no longer needed. You should go into Windows and remove the EasyBCD entries (other than Windows, of course!), this way, selecting Windows under grub should lead you directly to Windows, without having the EasyBCD entries menu appear in between.

Just so that you understand what is going on, let me just draw out how it used to be, and how it is now.

Before, you had the following sequence:

Windows Boot Loader (on MBR of '/dev/sda')
 v
Windows Boot Manager (on 'C:\' drive)
  (with entries added by EasyBCD)
 v
Grub 2 (on Linux partition '/dev/sda1')
 v
Linux booted!

Now, the setup has changed to this:

For booting Windows:

Grub 2 (on MBR of '/dev/sda')
 v
Windows Boot Manager (on 'C:\' drive)
 v
Windows booted!

For booting Linux:

Grub 2 (on MBR of '/dev/sda')
 v
Linux booted!

Now that you have grub on the MBR, the path is direct to Linux. This is the "recommended" way to do things (not recommended by …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Try the boot-repair disk first.

Slavi commented: Sure =) +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

because I am trying it on Kali instead of Ubuntu

That should not make any difference. As far as things like Grub are concerned, any distribution that ultimately derives from Debian works the same and uses the same commands. Whether the instructions are from Ubuntu, Mint, or Kali, or Debian directly, the procedures should be exactly the same.

all i can see on my screen is Try (hd0,0) ext2

There is something weird about it. I don't get it. You keep putting up different identifiers for your drives. And apparently, (hd0,0) is a legacy grub number (because grub2 starts from 1 for the partition number). And why do you have an ext2 partition? The fact that you can boot just fine from an alternative bootloader (super grub) means that your installation, kernel image and initrd image are fine. You should try to get to your grub's menu (not from "super grub", but from the installed grub2) and boot manually by following the instructions here (remember, these instructions, like others, require that you be smart about what parameters you enter; don't just blindly enter the commands!!). If you are able to boot manually, by smartly following those instructions, then there is no reason, other than a bad configuration, why you shouldn't be able to boot automatically (from the menu entry). Just remember / note down the outputs (and names of things like the partition, images, etc.) you see during the manual booting procedure, because these …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I never used boot-repair. It seems that you don't need to download the disk image to use it; you should be able to simply install it from your Kali installation or from a LiveUSB:

https://help.ubuntu.com/community/Boot-Repair#A2nd_option_:_install_Boot-Repair_in_Ubuntu

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

then I tried also

$ sudo grub-install --root-directory=/mnt/ /dev/sda1

Be careful man! I remember telling you in a PM that you have to be careful about every step you take and not let there be things that you don't understand in the commands that you issue. What is your understanding of the option --root-directory=/mnt/?

This option comes from the instructions that I linked to that refer to installing grub from a LiveCD/USB. The instructions mount the Linux partition onto a folder that they call /mnt, and then, they direct the grub install command to that directory where the Linux partition is mounted, so that the grub installation is done for that Linux installation and not the current running instance (the LiveCD/USB).

If you have managed to boot into the Linux installation that you are trying to repair, then you must not put that option there.

When you are in the Linux installation you are trying to repair, the correct command to reinstall it on your Linux partition is:

$ sudo grub-install /dev/sda1

And if grub throws a fit about not wanting to do this "dangerous" operation (the grub developers really would prefer that you install grub as the primary bootloader, instead of chain-loading with EasyBCD, but I tend to disagree with them, because grub is too brittle, IMO, to be the main bootloader), then you have to "force" it to do it:

$ sudo grub-install /dev/sda1 --force

But I'll await the reply about the reconfiguration of …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

so i am wondering whether the Linux entry in easyBCD has to be grub legacy instead?

Definitely not. This is clearly grub2, and EasyBCD entries should be configured as such.

I think you should try the $ sudo grub-install /dev/sda1 (from your Kali installation). You can also try grub-mkconfig to recreate the grub cfg files.

It's odd that your entries are marked as belonging to "hd1". Did you unplug the hard-drive and plug it into different slots in-between doing some of these steps? This can cause serious problems for grub, because grub contains static identifiers for hard-drives that only work for the way the hard-drives were plugged in at the time when grub was configured. Swapping hard-drives around while configuring things will put grub out of sync with where the hard-drives are actually plugged in later.

If you have to change the hard-drive placement on the SATA slots, then things can get tricky as far as configuring grub. One possible solution is to create the grub configuration (with grub-mkconfig) and then, manually edit the "grub.cfg" file to change the hard-drive identifiers to what they will be once you put the hard-drive back where it belongs.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The (hd0..) stuff is grub's way of identifying hard-drives; see the equivalence table here for their correspondence to the Linux device identifiers (the /dev/sdXN stuff). (hd0) is equivalent to /dev/sda and (hd0,0) is equivalent to /dev/sda1.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, if you are able to somehow boot into your Kali installation, then you can simply run $ sudo update-grub2. If that doesn't work, just use the grub-install command.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I also use EasyBCD for my dual boots. From time to time, mostly after distribution upgrades (e.g., 13.10 -> 14.04), the link from the EasyBCD-configured bootloader to grub2 is broken by an update of grub2. Every time this has happened, the fix was simply to boot into Windows and use EasyBCD (an up-to-date version of it) to remove the broken Linux entry and re-create it again. I've never had problems with that.

If things are still broken after you have fresh entries from EasyBCD, then it might mean that your grub2 installation or configuration is corrupt. To fix that, you can basically follow these debian-family instructions here, except that you need to point the grub installation to the partition (not the hard drive) where you originally put grub2 (I assume it's on your Linux partition). So, you should do $ sudo grub-install --root-directory=/mnt/ /dev/sda1 (replacing "sda1" with whatever is the correct device identifier for your Linux partition), as opposed to what the instructions say about using "sda" or similar, which would have the effect of installing grub on the MBR of the hard drive, which is not what you want.

After you've reinstalled grub2, you might have to go back to Windows again to recreate the Linux entry with EasyBCD.

Slavi commented: Thank you Mikey! +5
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We now have a live chat integrated into the site. See here:

https://www.daniweb.com/chat

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If what I talked about in my last post is beyond your level, then you have a problem. You must learn to walk before you can run. You should not attempt any data fitting or pattern recognition work if you don't even understand the basics of frequency-domain analysis and Fourier transforms.

This is fundamental to any form of signal analysis or image processing, including pattern detection / recognition from data series.

For example, in image processing terms, a pattern like -1 1 -1 (or variations of that) is a kernel which could be used for detecting rapid changes in value (edge-detection, image sharpening, smoothing, etc.). This kernel matches a pattern of very rapid change in value, which, in frequency-domain terms, is at the Nyquist frequency. Similarly, the pattern 1 1 1 (or variations of that) is a kernel to detect the average (or underlying constant value) in a signal or image, because averaging the values removes any local changes. If you correlate a kernel with the signal, you get spikes of amplitude wherever the patterns match; that's called cross-correlating a signal.
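As a minimal sketch of that idea (the signal values here are made up for illustration), correlating a short series with the two kernels above produces large magnitudes exactly where each pattern occurs:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Slide the kernel along the signal and accumulate the products:
    // the output spikes wherever the signal locally matches the kernel.
    std::vector<double> correlate(const std::vector<double>& s,
                                  const std::vector<double>& k) {
        std::vector<double> out(s.size() - k.size() + 1, 0.0);
        for (std::size_t i = 0; i < out.size(); ++i)
            for (std::size_t j = 0; j < k.size(); ++j)
                out[i] += s[i + j] * k[j];
        return out;
    }

    int main() {
        std::vector<double> signal = {1, 1, 1, -1, 1, -1, 1, 1, 1};
        std::vector<double> edge = {-1, 1, -1}; // rapid-change (Nyquist) kernel
        std::vector<double> mean = {1, 1, 1};   // averaging (DC) kernel
        for (double v : correlate(signal, edge)) std::printf("%.0f ", v);
        std::printf("\n"); // largest magnitudes at the alternating section
        for (double v : correlate(signal, mean)) std::printf("%.0f ", v);
        std::printf("\n"); // largest where the signal is locally constant
    }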

As for using a Genetic Algorithm, which you seem persistent about, there is nothing magical or special about it. A GA is not going to solve any of your problems. People think that a GA is a kind of magic pill that solves everything. It's not. GAs take just as much care, if not more, in setting up …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You should filter them to a different directory of your mailbox.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is more of a parametrization problem than an overfitting problem, I would say. You said it yourself in your description of pattern 1: "large increase followed by a large decrease". This is a description of a frequency-domain pattern, because it describes the period of change in values. Similarly, pattern 0 is a DC pattern (zero frequency). Detecting patterns 0 and 1 is basically detecting the amplitude of the DC component and of the Nyquist frequency component, respectively.

Methods for data fitting or other machine learning approaches will never be able to solve a problem that resides in the parametrization of the features that you are trying to detect or examine. If you search for frequency-domain patterns by feeding spatial-domain data to a generic fitting algorithm, then you are not going to get anywhere. Perform an FFT, and you'll get exactly the patterns you are looking for, or more complex frequency-domain patterns if you want.
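To illustrate (with a made-up data series), the amplitudes of those two components can even be read off directly from the corresponding DFT bins, without a full FFT library: the DC bin is just the mean, and the Nyquist bin (for an even-length series) is the alternating-sign mean.

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        // Hypothetical series: a constant offset (pattern 0) plus an
        // up-down alternation (pattern 1).
        std::vector<double> x = {3, 1, 3, 1, 3, 1, 3, 1};
        double dc = 0.0, nyquist = 0.0;
        for (std::size_t n = 0; n < x.size(); ++n) {
            dc += x[n];                        // zero-frequency DFT bin
            nyquist += (n % 2 ? -x[n] : x[n]); // highest-frequency DFT bin
        }
        dc /= x.size();
        nyquist /= x.size();
        std::printf("DC (pattern 0) amplitude: %g\n", dc);                      // 2
        std::printf("Nyquist (pattern 1) amplitude: %g\n", std::fabs(nyquist)); // 1
    }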

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The display drivers for Intel graphics are integrated by default in Linux. Intel contributes their display driver code to the open-source community, and has been doing so for a long time. This means that you shouldn't have to install any kind of special display drivers for Intel graphics, i.e., they are built-in.

In any case, the Intel graphics drivers and related tools are all available through Ubuntu's repositories. If you search for Intel graphics or drivers through the software center, you will find everything you might need, but most of it should already be installed. Look for packages like:

xserver-xorg-video-intel
libva-intel-vaapi-driver
intel-gpu-tools
lsb-graphics
libva-x11

But most of these are installed by default, especially if you have Intel graphics. I have a laptop with Intel graphics and I never had to install anything more than what was installed by default.

To change your display settings, you have to go through the default Ubuntu system settings menus, because Intel graphics are natively supported by Linux systems (as opposed to other graphics card drivers like Nvidia or ATI, that come with separate configuration tools).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I live in california... so i get both...

That's a joke, right? When does it get cold in California? I mean, even in the Sierra mountains in the middle of winter, it barely goes below -10C (15F). I think you don't know what cold is. For us Canadians, Vancouver is considered "warm". Not to mention that I used to live in a place where they were lucky if the temperature was above freezing for more than a month in the summer.

I don't know what it's like to live in a warm place; I've never lived anywhere below the 45th parallel (but as far up as the 68th). All I know is that after a long hot summer, I'm glad to see the cold come back again. Like RJ said, you can always put more layers on, but if you're almost naked already and still sweating like a pig, there's nothing to do but be in misery, or never get too far from an AC'd place.

If I had to live in a hot place, like California, I would need benefits like a beach nearby; preferably my backyard would be a beach ;)

Not having to deal with some of the winter hassles, like plowing, slipping, closures, and so on, would be kinda nice too.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I've always found the Pro Git book, freely available online here, to be very helpful and definitely good for beginners.

The thing with Git is that you'll eventually get an epiphany and it'll all become clear. It's mainly about understanding how diffs, commits and branches work together.

One thing though is that most tutorials on Git will at least assume that you are already familiar with some other version control system, like cvs, svn, mercurial, bazaar, etc. So, that might be a bigger problem if you are not already familiar with any of those, i.e., if you lack the high-level conceptual understanding of how version control is used and the general day-to-day work-flows with it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Your functions are not formatted correctly; you have things like

{
void foo();

}

when it should be:

void foo()
{

}

Notice how there is no ; after the function signature (when you are implementing the function) and how the opening bracket { appears after the function prototype (void foo()).

Your functions should be as so:

void listFlights(flightInfo flights[], int count)
{
    /*...*/
}
void insertFlight(flightInfo flights[], flightInfo newFlight)
{
    /*...*/
}
void searchFlights(flightInfo flights[], int flightnumber)
{
    /*...*/
}

Notice also that you had an error in your definition of the "listFlights" prototype: you had cout instead of count, which would cause a conflict with the standard output std::cout that you use within that function, and might explain why it wasn't working (or was causing compilation errors). Also, the = 100 cannot be where it was. You could put that default value on the function declaration (near the start of the code) as so:

void listFlights(flightInfo flights[], int count = 100);
void newFlight();
void flightNumber();

But I would not recommend doing this, because there should not be magic numbers like that.

For your choices, you have char choice = 'a'; 'p'; 's'; 'q'; which is not really meaningful at all. All this does is initialize the choice character to 'a', followed by 3 meaningless statements (still valid code, but with no effect at all). The choice character is something that you get from the user (through cin), which means that its value will …
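As a sketch of where this is heading (the menu letters come from your initializer; the prompt text and loop are my own hypothetical filler), the choice should be read from cin inside a loop and then dispatched, for example with a switch:

    #include <iostream>

    int main() {
        char choice = ' ';
        while (choice != 'q') {
            std::cout << "(a)dd, (p)rint, (s)earch, (q)uit: ";
            std::cin >> choice;      // the value comes from the user at run-time
            switch (choice) {
                case 'a': /* add a flight */    break;
                case 'p': /* print flights */   break;
                case 's': /* search flights */  break;
                case 'q': break;     // the loop condition will end the loop
                default:  std::cout << "Invalid choice.\n";
            }
        }
    }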

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A 350W PSU seems under-powered for the hardware that your computer packs. An i5 CPU, 4GB of RAM, and a 9400T graphics card would require quite a bit more power than that; I would go with at least a 500W PSU.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Can I say that a library is an API for the programmers?

You can say that a library has an API.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Cool little snippet of code! It's nice to see some flexing of C++ muscles once in a while.

There are a few issues though.

First, you should also default the move constructor and move assignment operator. You could be wrapping an array of non-trivial types that are actually cheaper to move than to copy, and you shouldn't pessimize your code. When you only explicitly default the copy functions, the compiler-generated move functions are implicitly deleted. So, you either have to leave them all implicitly defaulted, or you need to explicitly default them all. In other words, either you have this:

    ugaw(const ugaw&) = default;
    ugaw(ugaw&&) = default;
    ugaw & operator =(const ugaw&) = default;
    ugaw & operator =(ugaw&&) = default;

Or you can have none of them. And because your type holds an array, you cannot write your own copy or move functions; they have to be defaulted, explicitly or not.

That's another issue with your code, on a practical level. Compilers currently have pretty unreliable behavior when it comes to this stuff (the implicit and explicit defaulting rules of C++11). I just solved a critical bug in Boost related to this, which stemmed from the fact that the compilers GCC 4.5, GCC 4.6, GCC 4.9, MSVC (<12), and SGI MIPSpro all have different, incorrect, and mutually incompatible behavior or support as far as those rules are concerned. The best way to get around it is to either provide user-defined functions for all of them, or leave them all …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I can confirm the behavior that cereal described as "Problem 2". I get the same weird jumpy cursor issue. I'm also on Chrome (Chromium, actually) in Kubuntu.

Problem 3

Another issue that I have noticed is that, as you all know, I tend to write long posts. The editor box starts small and expands with every additional line until some point when a scroll-bar appears on the side. At that point, the editor does not automatically scroll down as I write, meaning that every new line that I write ends up below the bottom edge of the editor box, until I manually scroll down some more to see what I'm writing.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The term API is a bit fuzzy today. Back in the early days of computing, you had an operating system and you had applications. The company making the operating system would ship it along with a document that specified all the functions (in C) that the operating system provided and that applications could be programmed to call to perform certain tasks (e.g., opening files, opening network connections, etc.). This set of functions became known as the "Application Programming Interface" (API) because it was all the things that application programmers needed (or were allowed) to know about the operating system to be able to interact with it.

Today, the number of layers of software has greatly increased and diversified, and any typical system has a whole ecosystem of libraries, languages, interpreters, utility programs, and so on. So, the term API has taken on a much more general meaning, which should really be just "interface", dropping the AP part, which was only meaningful in the simple OS vs. Apps paradigm.

So, the broad definition of an API is still pretty much the same as before. If you write code that you intend to package up as a library to be used by other libraries or applications, then you will naturally have to determine how the users of your library are supposed to use it. This includes figuring out what they should know and be able to do, and what should be hidden from them (a general concept …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Where is you "PBL.cpp" file?

You have to understand that ordering matters in the list of linked libraries. The "ASp" library should appear in the list of link libraries after the library in which the "PBL.cpp" file is compiled. That's because the PBL-containing library depends on the ASp library. In fact, your PBL-containing library should list the ASp library in its dependencies. In other words, let's say your PBL-containing library is called "PBL"; then you would have something like this in its relevant cmake file:

add_library(PBL STATIC PBL.cpp)

target_link_libraries(PBL ASp)

Which will tell cmake that ASp must be compiled before PBL, and that anything that links with PBL must also link with ASp afterwards, to resolve the symbols in PBL to their definitions found in ASp.

This is why I think your whole cmake configuration is terrible: you are not supposed to just list all the libraries in one variable like you do in the ME_LINK_LIBS variable. Instead, each library should list whatever other libraries it depends on, and your executables should just list the few top-level libraries that they use and need to link to; cmake is going to do the rest of the work of pulling all the other dependencies (in the correct order) into the linking command for your executables / shared-libs.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

will have new extensions for every single thing and will run all the things of other extensions!

That sounds like a microkernel design.

You are definitely going to need to learn programming before you can really do anything with this idea / project.

Once you have learned a decent amount of programming (especially in C and C++), you might want to be looking into existing open-source microkernel projects, because they are generally small (less than 10,000 lines of code) and therefore easier to study and modify, compared to, say, a massive beast like the Linux kernel. Good microkernel projects include Minix-3 and GNU Hurd.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I changed the password... be careful in the future with that kind of stuff. You can't depend on others to protect your personal information.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

it does not have the cpp explicity, as in Asp.cpp

That makes no difference. I assume that Asp.cpp is one of the source files listed in the definition of SOURCE_FILES for that directory's CMakeLists.txt.

because my cmake did not complain "no rules for building the 'ASp' target"

From the looks of it, your CMakeLists.txt files seem correct (assuming that the SOURCE_FILES variable definitions in them contain the correct cpp files listed in them). This should work. Something else must be causing the issue.

should I still reset the cmake configuration by following the steps below?

Yes. It never hurts to do this (except that it will recompile everything, which could take time, that's all). Whenever I feel a bit weird about how cmake is behaving, I do this kind of a reset. Another, lesser option is to do make clean and then make, which basically cleans any previous compilation and compiles everything again. That usually doesn't solve cmake problems, but it can sometimes help when cmake has gotten out of sync with the source file modifications.

So, you should first do a make-clean-make, and if that doesn't work, do a full reset with the procedure I gave you earlier.

If that doesn't work, you should post all the console messages you got when running cmake and when running "make ME".

If everything is in order, as it appears to be now, then the problem must be in the source code. A big part …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

One thing that is weird is that you keep talking about the files under /ws/larryburns/cmake/ME/Build/sc/, which are copies of the files under /ws/larryburns/cmake/ME/sc/ (at least, that's how it's normally done). You should not modify the files under the "Build" directory. Instead, modify the files under the ME/sc directory, because those are the ones that cmake looks at.

In the file /ws/larryburns/cmake/ME/sc/ASp/CMakeLists.txt, you should have the following:

add_library(ASp ASp.cpp)

with whatever other source files you have in that directory.

In the file /ws/larryburns/cmake/ME/sc/Blu/CMakeLists.txt, you should have the following:

add_library(Blu ms.cpp)

with whatever other source files you have in that directory.

In the file /ws/larryburns/cmake/ME/sc/ME/CMakeLists.txt, you should have the following:

ADD_EXECUTABLE (ME MACOSX_BUNDLE ${SOURCE_FILES})
TARGET_LINK_LIBRARIES(ME ${ME_LINK_LIBS})

Notice that the ${INCLUDE_FILES} should not be needed; you only need to compile source files, not headers. That's a classic beginner's mistake.

I don't see "add_library(ASp ASP_function_implementation.cpp ...)" anywhere. I think the /ws/larryburns/cmake/ME/Build/sc/ME/CMakeLists.txt compiled the executable using it?

Again, whatever is in the "Build" directory is not something you should touch, because these are files generated by cmake, and if you modify them without modifying the original files, in the "ME/sc" folder, you might simply be corrupting the existing cmake configuration with weird or no effects on the actual build process.

If you don't have a add_library(ASp ..) line in the ME/sc/ASp/CMakeLists.txt file, then cmake should complain that it has "no rules for building the 'ASp' target" when you try to compile the ME …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is a very weird looking cmake file. It must be very incomplete.

Why do you have such a long list of targets... are all these targets from your own source code?

I assume that your file /ws/larryburns/cmake/ME/Build/sc/ASp/CMakeLists.txt contains the following line (where "ASP_SOURCES" would be some cmake variable containing a list of source files, including ASp.cpp):

add_library(ASp STATIC ${ASP_SOURCES})

And that, for your application (the one with the main() function), you have a cmake file with lines like this:

add_executable(my_exec my_exec.cpp)
target_link_libraries(my_exec ${ME_LINK_LIBS})

That should give a successful compilation and linking of my_exec.cpp (or whatever you name it).

I have to say that your CMakeLists.txt files look very bizarre and you almost certainly are not doing things correctly. It's weird to have all the targets collected into long lists like that and everything linked together at the end. If things belong to a single library, they should be compiled as such, in general. Otherwise, they should be linked in a more fine-grained fashion. Object targets can also be used in this kind of context.

Also, on another note, to avoid having to add all those sub-directories with manual add_subdirectory calls, I use this simple custom macro as follows:

macro(add_all_valid_subdirs)
  file(GLOB children RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}" "${CMAKE_CURRENT_SOURCE_DIR}/*")
  foreach(child ${children})
    if(IS_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/${child})
      if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/${child}/CMakeLists.txt")
        add_subdirectory(${child})
      endif()
    endif()
  endforeach(child)
endmacro(add_all_valid_subdirs)

Which can simply be called as:

add_all_valid_subdirs()

and it will replace all your "add_subdirectory" calls by adding all the subdirectories that …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The easiest is probably to use chown to change the owner of the directory after you've extracted it. And you can also use chmod to remove some permissions for "group" and "others".

Let's say you have the file my_archive.tar.gz which contains a top-level directory called my_folder, and the user you want to make the owner of the folder is called "myself". Then, you can do this:

$ sudo su
# tar -xzf my_archive.tar.gz
# chown -R myself my_folder
# chmod -R go-w my_folder
# exit
$

This goes into superuser mode, extracts the archive into the current directory (use cd to navigate to where you want to extract it first), changes the ownership recursively (-R) to the user myself, and changes the access permissions recursively such that write access is removed from the "group" and "others", while the read and execute permissions (and all permissions for the owner) are preserved. Finally, it exits the superuser mode (you don't want to stay in that mode any longer than you need to).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I recently played video game, where you can launch your own space program, you simply build a rocket, form sequences appropiately and with enough skill, you can get it to orbit or even further.

Sounds like a cool little game. Any links to it that you could provide?

I really'd like to set things up in sky, calculate all the trajectories and vectors apply gravity to it and really like get it orbiting.

The main thing is that you're gonna need a lot of fuel, and some rocket engines too...

I know you have like these camera's that you attach to some kind of parachute with GPS, but these are only to get object into sky and then get it down.

That's because actually reaching a speed sufficient to stay in orbit (around 10 km/s) is a big step beyond simply being able to go above the atmosphere (and come back down). The rockets that reach very far up, but don't go into an orbital trajectory (i.e., rockets that fall back down again), are called "sounding rockets".

For example, I spent some time at Esrange, in Kiruna, Sweden. They do these kinds of sounding rocket experiments. Most notable is the MAXUS rocket, which can get up to 700km of altitude, well above the ISS (at about 400km), and comes back down. You can see a complete flight (from rocket stage detachment to landing) like that …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You cannot write to a string literal. When you do char* data = "Top Secret Message.";, the pointer data points to a section of read-only data. You cannot write to it, and that's why you get an access violation (or segmentation fault) in the decrypt function when you try to write to it. In reality, that statement should be const char* data = "Top Secret Message.";, and you should use a different (writable) buffer for storing the result of the decryption.
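A minimal sketch of the fix (the bit-toggling loop stands in for whatever your real decryption step is): keep the literal const, copy it into writable storage, and modify that.

    #include <cstring>

    int main() {
        const char* data = "Top Secret Message."; // read-only literal
        char buffer[64];                          // writable storage
        std::strncpy(buffer, data, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';
        for (char* p = buffer; *p; ++p)
            *p ^= 0x20; // modifying the copy in place is perfectly fine
        return 0;
    }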

I'm not sure about the other problem you mentioned. You must solve this access violation issue first.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

On that note: where I used to live, in Kiruna, Sweden, the sunrise today was at 9:37 and the sunset was at 13:14, for a total of 3h37min of daylight. The people of Kiruna will not see the Sun at all from December 10th to January 2nd.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Libraries on Linux are usually located in /usr/lib and /usr/local/lib, or some 32/64-bit variants of those. Usually, you should be able to simply pass the option -lSDL to the compiler, and it will find the library for you (at least, this works fine on my system). For example, it works if I take your code and run this:

$ g++ simple_sdl_test.cpp -lSDL -o simple_sdl_test

If not, you can also use a simple locate command to find what you are looking for, like $ locate libSDL, which should print out all files on your system that contain this name. For example, I get this on my system:

$ locate libSDL
/usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0
/usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0.11.4
/usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0
/usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0.2.0
/usr/lib/x86_64-linux-gnu/libSDL_image-1.2.so.0
/usr/lib/x86_64-linux-gnu/libSDL_image-1.2.so.0.8.4
$ 

But normally, with Linux projects, you would set up a cmake script that will automatically find the libraries and header files you need (by recursively looking for the best matches in the most likely folders; version numbers can be specified too). Something like this in a file called "CMakeLists.txt":

# Announce your required cmake version:
cmake_minimum_required(VERSION 2.8)
# Name your project:
project(SimpleSDLTest)

# Ask cmake to locate the SDL library:
find_package(SDL REQUIRED)

# Add SDL's header file directory to the includes:
include_directories(${SDL_INCLUDE_DIR})

# Create an executable target from your source(s):
add_executable(simple_sdl_test simple_sdl_test.cpp)

# Tell cmake to link your target with the SDL libs:
target_link_libraries(simple_sdl_test ${SDL_LIBRARY})

Where the simple_sdl_test.cpp would be your source file, as so:

#include <SDL.h>
#include <cstdio>
#include <cstdlib>

SDL_Surface* g_pMainSurface = NULL;
SDL_Event g_Event;

int …