rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Stop running MS operating systems, and install Linux. Your chance of getting viruses and/or other malware will drop about 99.995%...

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

The device is from Interepoch, and is from about 8 years ago, now "phased out". I doubt you will get Win7 support for it. Here is their web page for it: http://www.interepoch.com.tw/support/iwe100_u.asp

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

What about the Steam game engine?

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

You said the onboard diagnostic lights indicate a problem with memory. If that is the case, then changing the video hardware will not likely help. If you don't get a boot screen so you can go into the BIOS setup, then you need to send the system in for repair. Contact Dell for tech support. Their online chat with technicians will usually determine what you need to do, and whether or not you still qualify for warranty repairs.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Sure, why not? I do this frequently. I install a live Linux on one partition, and use the other for data that I want to keep.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

What are you looking for in a "remote support appliance"? What features and capabilities are important to you?

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Actually, one should not put PATH exports in .bashrc files; the .bash_profile is a more appropriate place. In any case, to answer your question: the dot means "current directory", $PATH expands to the current path, and $M2_HOME/bin expands to the bin directory under whatever directory the M2_HOME environment variable points to. So, when PATH is exported and you run a command from a command-line window, the system will first look in the current directory, then in the previous PATH, and finally in $M2_HOME/bin. Let's say that PATH was "/usr/bin:/usr/local/bin" and M2_HOME is "/home/m2_home". The new PATH would be ".:/usr/bin:/usr/local/bin:/home/m2_home/bin", so if I said to execute the command "foobar", the system would look for the executable file "foobar" in my current directory, then in /usr/bin, then in /usr/local/bin, and finally in /home/m2_home/bin before giving up.
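As a sketch, using the hypothetical paths from the example above:

```shell
# Hypothetical starting values, just for illustration:
M2_HOME=/home/m2_home
OLDPATH=/usr/bin:/usr/local/bin

# The effect of: export PATH=.:$PATH:$M2_HOME/bin
NEWPATH=.:$OLDPATH:$M2_HOME/bin

echo "$NEWPATH"   # -> .:/usr/bin:/usr/local/bin:/home/m2_home/bin
```

The shell searches the resulting directories left to right, which is exactly the lookup order described above.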

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Remember that SSDs mostly use MLC (Multi-Level Cell) chips which have a fairly limited number of write cycles per cell before they fail - typically about 10,000 cycles. The controllers of modern SSDs will wear-level the writes, so that when it is going to write to an already written sector, it will write to one less used and map that into the logical location where the old version was. This gives better longevity to the devices, but they will "wear out" after some time, depending upon how much writing is going on. These are factors to keep in mind.

All that said, for the most part, using an SSD will considerably speed up the performance of your system, especially when doing a lot of disc access. Read operations will blaze, and writes will be much better as well. As for size, bigger is better: if you do not go much above 50% utilization (most of your data will be read-mostly), the drive should last pretty well, since the extra free space lets the controller balance wear across the rest of the device.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Does it come up with the boot splash screen? Can you access the BIOS? If not, it's time to send it in for repair.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Yes, well I asked for the output (rather than a synopsis) in order to get some better idea of what is going on. Please post. Thanks.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Nullifying the entire buffer means that appending any textual data will result in a properly terminated string.
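In C, that looks like zeroing the buffer first and then appending; for example:

```c
#include <string.h>

/* Zero the whole buffer first; any text appended afterwards is then
   guaranteed to be NUL-terminated, as long as at least one zero byte
   is left at the end. */
static void init_and_append(char *buf, size_t bufsize, const char *text)
{
    memset(buf, 0, bufsize);           /* every byte is now '\0' */
    strncat(buf, text, bufsize - 1);   /* keeps the final '\0' intact */
}
```

Since every byte starts as zero, strncat (or repeated appends) can never leave the string unterminated unless you overrun the buffer itself.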

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Quoting that great programming wizard Rubberman:

Great ideas come from banging head on wall after all other ideas don't work.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Interesting. This is news to me AD. I'll have to check that out for myself. If it works generically on Linux 2.6 kernels, it would simplify a lot of cruft for embedded systems as well as bigger boxes. Thanks for the heads-up! :-)

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

What if your read heads are already at the high end of the device? Why seek back to the beginning to start reading? You need to factor that into the algorithm, which is what elevator seeking does. At the root, this is an elevator algorithm, but it needs to be modified by the QOS data that you get, so as to give priority to the higher QOS data, but not starve the other data.
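As a concrete illustration, a single elevator (SCAN) pass can be sketched as: sort the pending sectors, service everything at or above the current head position going up, then sweep back down through the rest. This ignores the QOS weighting, which you would layer on top. A minimal sketch:

```c
#include <stdlib.h>

static int cmp_asc(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Reorders 'req' in place into SCAN (elevator) service order,
   given the current head position 'head'. */
static void elevator_order(int *req, size_t n, int head)
{
    qsort(req, n, sizeof *req, cmp_asc);

    /* Find the first request at or above the head position. */
    size_t split = 0;
    while (split < n && req[split] < head)
        split++;

    int *out = malloc(n * sizeof *out);
    if (!out)
        return;

    /* Service the upward sweep first (ascending), then the
       remaining lower requests on the way back down (descending). */
    size_t k = 0;
    for (size_t i = split; i < n; i++)
        out[k++] = req[i];
    for (size_t i = split; i-- > 0; )
        out[k++] = req[i];

    for (size_t i = 0; i < n; i++)
        req[i] = out[i];
    free(out);
}
```

With the head at 53 and requests {98, 183, 37, 122, 14, 124, 65, 67}, this yields 65, 67, 98, 122, 124, 183, then 37, 14 on the return sweep.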

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

That's why it's called school work - it is supposed to be work! :-) Since I'm not getting your degree, why should I spend an inordinate amount of time doing the grunt work? The math is simple enough, and you need to think through the problem logically: you have a disc from which you need to seek and grab data when you are presented with a number of logical sectors that some application or other wants to access. Reordering the sectors to read in the most efficient manner, based upon where you are at the moment, is the first part of the problem. Then you need to modify that algorithm to apply quality-of-service metrics that should be available from the application along with the sectors it wants. Not simple, but not difficult either.

So, putting on my "Mister Professor" hat, what is your first step?

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Oh Ancient One, I use "dll" as a generic term for shared libraries. A lot of us *nixers use the term that way. Strictly speaking, you are correct, but so many people have come up from the Windows ranks that the term has stuck.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

So, the consensus is to buy now or wait for latest (better) hardware and pay a bit more. Sounds like dealer's choice to me... :-)

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Yes, the dll static variable scheme should be fine for Windows. Unfortunately, that won't work for other (Unix/Linux) operating systems, as a static variable declared like that is not shared between instances of the dll; only the initial value the variable is initialized with will be common to them. Changes to the variable will not be shared.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Compilers: Principles, Techniques, and Tools by Aho, Sethi, and Ullman

That's a good one. Another is Holub's "Compiler Design in C".

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Then there is this, using the page file for shared memory:

http://msdn.microsoft.com/en-us/library/aa366551(v=vs.85).aspx

So yes, it seems that Windoze requires some sort of physical file interface for shared memory. Linux also uses a file-based approach, but then everything in Linux/Unix, including physical memory, is mapped to files, at least symbolically, if not physically.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

the scanner should have a programming interface you can access.
Most likely it stores nothing at all, that's up to you! It just sends you data (when activated) using a specific data format that you then need to interpret in some way to figure out what's there.

Indeed. You need to look at the scanner SDK/API documentation. You might be able to configure it to return an image of the print, which you then need to interpret, or you might be able to configure it to return some sort of checksum that you can compare to a database entry for the user. How you do this is very dependent upon the hardware device. If you need to support multiple devices from different manufacturers, then you need some sort of plugin architecture for your application that can load the appropriate plugin code for each supported device, yet provides a common interface that your main program can use.
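The core of that plugin idea is a common struct of function pointers that every device driver fills in; a real implementation would load each driver with dlopen/LoadLibrary, but the interface looks the same either way. A minimal sketch, with a made-up "fake" device (all names here are invented for illustration, not from any real scanner SDK):

```c
/* The common interface every device plugin must provide. */
typedef struct {
    const char *name;
    int (*capture)(unsigned char *buf, int bufsize); /* returns byte count */
} scanner_plugin;

/* A stand-in driver: fills the buffer with dummy "scan data". */
static int fake_capture(unsigned char *buf, int bufsize)
{
    for (int i = 0; i < bufsize; i++)
        buf[i] = (unsigned char)i;
    return bufsize;
}

static scanner_plugin fake_device = { "fake", fake_capture };

/* The main program talks only to the common interface, never to a
   specific device's functions directly. */
static int scan_with(const scanner_plugin *p, unsigned char *buf, int n)
{
    return p->capture(buf, n);
}
```

Each manufacturer's driver would live in its own shared library exporting one scanner_plugin instance, and the application would pick the right one at runtime.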

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Time to visit your teaching assistant for some help I think. This is a variant of an elevator seek algorithm, so you might want to look at this:

http://en.wikipedia.org/wiki/Elevator_algorithm

and this:

http://en.wikipedia.org/wiki/Disk_scheduling

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Also, what processor(s) does each represent? A Nehalem CPU has a better memory bus architecture than a Penryn or similar CPU.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

I have 3 eSATA arrays with good cooling and a dual drive docking station where you plug the drive into a slot in a vertical orientation. The dock uses ambient air flow to cool the drives, and it seems to work just fine for extended periods of time. The one I use is from StarTech.com. It has both dual eSATA connections as well as a single USB 2.0 connection, so you can use either, though the eSATA throughput is much better than USB 2.0.

As for using the drive for running/testing Linux, I would recommend that initially you format the drive with NTFS and use it to store virtual machine images and test out your Linux/BSD systems in a virtual machine such as VirtualBox or VMware, rather than trying to use the drive for bare metal multi-boot capabilities - fewer problems until you are experienced enough to go that route, and even then you might not want to.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Wait awhile. There is a new crop of Mac Pros and Minis coming out soon from what I have read in the past few days.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

You also have to change your locale environment variables (LANG and friends) in your ~/.bash_profile and re-login. Also remember that input for Hangul requires either multiple key presses for one syllable block, or a chorded key input (multiple keys at a time, such as shift-alt-something). I set up systems for this on QNX and Unix many years ago (early 90's) and haven't futzed with it since, so my technical expertise in this area is not up-to-date.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Also, change the function signatures to use ANSI prototypes instead of the early K&R style. That is, instead of this:

int insert(s, tok)
char s[];
int tok;
{
.
.
.
}

int expr()
{
.
.
.
}

use this:

int insert(char s[], int tok)
{
.
.
.
}

int expr(void)
{
.
.
.
}
rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Qt Creator works well for developing Qt-based GUI applications. It will help deal with all the platform-dependent cruft quite well. You might want to try it out if that is what you are planning at some time.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

The other site is a SIG (special interest group) that is part of the ACM (Association for Computing Machinery). The ACM is something like the IEEE, but for computer systems specifically. SIGGRAPH specializes in graphics and image processing technologies. My sister-in-law, who teaches animation and graphics processing and is dean of the animation department at a university in Oregon, is a member of SIGGRAPH. I used to be a member of the ACM SIGSOFT (software SIG), but now spend most of my time with the IEEE, of which I am an affinity group director and chair.

In any case, if you are serious about a career in these fields, membership in the ACM in general, and SIGGRAPH in particular, should be high on your priority lists. They have a lot of resources to call on, including academic publications that target your specific areas of interest.

salah_saleh commented: this was really helpful +0
rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Mr. Koenig is someone to pay attention to. If you are spending too much time/cycles in memory allocation/initialization, then the usual solution is to cache the memory so it can be reused without reallocating. In truth, memory management is one of the gnarlier problems that complex and long running applications have to resolve. To determine what are the best approaches to take requires an in-depth analysis and understanding of your code, and its trade-offs with regard to system resource utilization.

So, let's say that analysis indicates that your program uses, and frees, 512 byte segments (it can be any size, or combinations of sizes), frequently, yet uses a maximum of 1000 of these at any time (give or take). Then you can build a specialty initiator, allocator, and releaser that your code can use. When the program starts, it automatically allocates and clears 1000 of these 512 byte segments with the initiator. When some code needs a 512 byte segment, it requests it from the allocator which will take a buffer off its free list, add it to the used list, and give it to the requester. When it is done with it, the program calls the releaser which takes it off the used list and adds it back to the free list. If you want, you can clear the buffer in the releaser at that time, but that may not be necessary. FWIW, this also reduces system memory fragmentation and overall utilization that occurs with the normal malloc/free …
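A minimal sketch of that initiator/allocator/releaser scheme, using the 512-byte, 1000-block numbers from above (the sizes are illustrative, and error handling is kept to a bare minimum):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE  512
#define BLOCK_COUNT 1000

typedef struct block {
    struct block *next;                /* free-list link */
    unsigned char data[BLOCK_SIZE];    /* payload handed to callers */
} block;

static block *free_list;
static block *pool;

/* Initiator: allocate and zero the whole pool once, at startup. */
static int pool_init(void)
{
    pool = calloc(BLOCK_COUNT, sizeof *pool);
    if (!pool)
        return -1;
    for (int i = 0; i < BLOCK_COUNT; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
    return 0;
}

/* Allocator: pop a block off the free list (NULL if exhausted). */
static void *pool_get(void)
{
    block *b = free_list;
    if (!b)
        return NULL;
    free_list = b->next;
    return b->data;
}

/* Releaser: push the block back onto the free list; clearing it
   here is optional, as noted above. */
static void pool_put(void *p)
{
    block *b = (block *)((unsigned char *)p - offsetof(block, data));
    memset(b->data, 0, BLOCK_SIZE);
    b->next = free_list;
    free_list = b;
}
```

Because get and put are just pointer pushes and pops, the cost per "allocation" is a few instructions, versus a full trip through malloc/free, and the heap never fragments from this traffic.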

Ancient Dragon commented: yes, standard approach to the problem. +17
rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Instances of a dll share code, not data. You need to create a shared memory segment in the first instance of the dll to start, and make it accessible to subsequent instances as they start up. You can also make that shared memory persistent so that it stays around until the system reboots.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

What is the actual structure of this variable? What is its definition?

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

First, profile the application to see where it is taking most of its time. Then, do an in-depth analysis of the code for the problematic sections. There are good tools for detecting memory errors and profiling, most of which seem to be owned by IBM these days. I have had very good results with Purify (memory checking, debugging, and analysis), Quantify (in-depth profiling), and PureCoverage (code coverage tool). These are link-time code-insertion tools developed by Purify, which was subsequently purchased by Rational, and Rational was subsequently purchased by IBM. IMO, for commercial tools, they are the best since you can instrument parts of the code for which you have no source code, such as system or other 3rd party libraries.

In any case, for large-scale systems, we were able to achieve an overall 60-80% performance improvement, and a very large stability improvement, using these tools.

Tellalca commented: definitive +4
rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Myself, I prefer a good syntax-highlighting editor such as nedit, Makefiles, a good C++ compiler (g++ works for me), and a good debugger (gdb to go with g++). I've used Eclipse and others, but I prefer to know exactly what is going on, so I stay with the basic tools that these IDEs rely upon for their real compile/debug activities. I do use an advanced UML and MDA (Model-Driven Architecture) tool (Sparx Enterprise Architect) for my design and modeling. It will take my models and turn them into code, which I then work on, and use EA's reverse engineering capabilities to turn code changes back into model elements.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Not a small subject. Check into the ACM SIGGRAPH special interest group. You can find it here: http://www.siggraph.org/

Then there is support from various graphics card vendors - nVidia is a leader in this area: http://developer.download.nvidia.com/SDK/9.5/Samples/vidimaging_samples.html

Anyway, this is an area of active R&D, so I wish you the best.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

What is the output of the command "sudo ifconfig"?

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

anybody ever heard about Opera it is a wonderful browser

Yes. It has a very good reputation although I haven't used it myself. These days I am running Chrome, currently at version 12.0+.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

See Knuth Vol. 3, Sorting and Searching.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Well, you deleted your Ubuntu partition when installing Windows, so naturally you only see one "drive" - the Windows partition identified as the "C" drive. So, what is the surprise/problem here?

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Time to contact your vendor for warranty support...

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Actually, it doesn't run "over the electricity". It just uses your house wiring as a carrier, like any network cable, except that it shares the cable with the electric power. So, just consider your house wiring a "built in" ethernet cable that goes everywhere in the house. One thing to consider is that neighbors may also have access to the signal, so when you set up the plugs, change the encryption key and the IP address(es) of the plugs (if that capability is available). You do that by connecting the plug to your network and communicating with it via your web browser. Instructions on how to do all of that should be in the documentation that comes with the plugs.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Actually, it is a 750W PSU. Here is a link to it: http://www.newegg.com/Product/Product.aspx?Item=N82E16817153036

The main things are the power ratings: +3.3V@30A, +5V@28A, +12V1@18A, +12V2@18A, +12V3@18A, +12V4@18A, -12V@0.8A, +5VSB@3A

Note that the -12V is rated 0.8A, and the +5VSB is rated 3A. It may be that 0.8A is as good as you will find for -12V. In any case, always get a bigger supply than you will probably need. This is definitely a case where more is really better.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Let me look at what I have in my workstation. The form factor you can use depends upon the enclosure you have. My workstation is a large, under-desk model that uses a full-sized motherboard (Intel S5000XVN) and 1000VA power supply.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Stuff that requires the negative voltage levels is usually pretty low power, so 1.5 to 2 Amps should be fine. I'm trying to remember what that stuff is, but I haven't run into the problem for quite a while. However, I have seen it occur, and it either results in fried components or PSU failure due to voltage drop and the associated current increase (power = V × A) when the PSU tries to supply the power drawn. I had to replace the PSU on one of my systems some years ago with one that had a better -5/-12V spec for this very reason.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Try one with better specs for the -5 and -12 V feeds. Yours are -5V@0.8A, -12V@0.8A, which isn't much.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

I need a program to plot routes for my salespeople. The program should take an arbitrary set of cities that should be visited on a given tour of sales, and plot the cheapest itinerary. I don't mind if they have to make a lot of connections, but I want to ensure that I'm sending them on the cheapest route that covers all of the cities, visiting each exactly once. (I don't want to send them back to Omaha if they've already been there). It is a requirement that it be demonstrably the cheapest available route - this is something our investors insist on.

Why don't you run that by your professor and see what he says?

I think that's called a variation on the Travelling Salesman Problem! You're just substituting "cheapest" for "shortest" route... :-)

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Maybe this will help: http://en.wikipedia.org/wiki/Causal_consistency

One point, unrelated to your causal linkage, is that your conditional statements of if (x = 0) and if (y = 0) should more appropriately be if (x == 0) and if (y == 0) so that one can distinguish between an assignment and a boolean comparison.

So, as for your question, the end value of z is indeterminate, since either process may assign x or y in any order, and at any time. That is, z could end up with a value of 0 or 1. Consider possible steps:

1. A assigns 1 to x
2. B assigns 1 to y
3. A tests and finds y != 0 - no z increment results
4. B tests and finds x != 0 - no z increment results

end: z == 0 (you can switch order of process operations and get same z result)

or

1. A assigns 1 to x
2. A tests y and finds y == 0 - z increment results
3. B assigns 1 to y
4. B tests x and finds x != 0 - no increment results

end: z == 1 (you can switch order of A and B operations and get same z result)

So, z is causally linked to the values of x and y via the order they are set and read.
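The two interleavings above can be replayed deterministically in code, which makes the point concrete. The step functions below are just a hand-rolled simulation of the two processes, not real concurrent code:

```c
static int x, y, z;

/* Process A's two steps. */
static void a_store(void) { x = 1; }
static void a_test(void)  { if (y == 0) z++; }

/* Process B's two steps. */
static void b_store(void) { y = 1; }
static void b_test(void)  { if (x == 0) z++; }

typedef void (*step_fn)(void);

/* Replays one interleaving, given as an ordered list of steps,
   and returns the final value of z. */
static int replay(step_fn *steps, int n)
{
    x = y = z = 0;
    for (int i = 0; i < n; i++)
        steps[i]();
    return z;
}
```

Running the first interleaving (both stores, then both tests) yields z == 0, while the second (A runs both its steps before B) yields z == 1, exactly as in the step lists above.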

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

I had always thought they were the same. But the wiki knows!

Yeah, well, this is one of those subjects that CS boffins can kill a party with by arguing interminably. It's all in how one defines certain terms, and frankly, the differences are minimal. IMO, the main difference is that in multi-programming, a process might not relinquish the CPU until it becomes I/O bound or goes into some sort of wait state. In true multi-tasking, a process can be interrupted at any time in order to allow another task to have CPU cycles, usually according to some scheduling algorithm enforced by the system kernel.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

You can set up the second AP as a separate AP, but configured as a bridge, not a router. That gives it a separate SSID, though you can use the same passphrase. Alternatively, you can set it up as a wireless bridge that uses the same SSID. Myself, I use the first method. We can connect to either device if we are in range of them both, or to one or the other if not. My router and AP are in the basement office near the front of the house, and the bridged AP is upstairs in my wife's office near the back of the house. So, if we are in the front part of the house, we use the AP in the basement, and if in the back (where the bedroom is), we use the bridge AP in her office. We use power-plug ethernet devices, similar to your "HomePlug" device, to connect the bridge and router together. Works a treat.

rubberman 1,355 Nearly a Posting Virtuoso Featured Poster

Rubberman

It's really how and what the poster is wanting the machine for.

I have purchased new machines and changed things.

If you look at the specs you can save a lot, and I have still returned WD, Seagate drives and so on, even from OEM machines.

I have even recently dealt with a friend's Dell laptop which is out of warranty from the store, but have agreed with DELL that, since it is less than 18 months old, certain things should not fail in it, and it's not down to misuse.

Like anything, you can pay the extra money, but if you just want a working machine, a ready-built machine is OK. I always look at what's inside, and it's not like they are using cheap parts.

Again, many years ago it was cheaper to build your own; now there is little mileage in it.

There isn't even much difference in a laptop these days, as you can get i5's for £499 or less, i7's for £699, and so on. The cost of an i5 or i7 desktop is much more than a laptop.

That's why a lot of people I know have just gone with laptops, unless they are serious gamers.

I don't disagree with any of that. I think the original poster wants to feel, at least, that they have some control over their system, making it truly theirs. I built my current workstation, saving several thousand USD that way, and I have purchased many others over the years from …