0

Why do they make computers harder to use and program than they need to be? Here are some examples:

- Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

- Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

- Why does Microsoft change the name of everything from what others called it before (a "directory" suddenly became a "folder", and a "procedure" became a "method").

- In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

- Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

- Why are they making HTML harder to use by taking away features and not replacing them?

- Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

12 Contributors, 29 Replies, 30 Views, 10-Year Discussion Span, Last Post by Dave Sinkula

0

>Why do they make computers harder to use and program than they need to be? Here are some examples:

>- Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

Because these things have been designed by geniuses, for geniuses. I guess it's a classic case of "if you are not with us, then you must be against us"!

>- Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

Same as the above.

>- Why does Microsoft change the name of everything from what others called it before (a "directory" suddenly became a "folder", and a "procedure" became a "method").


To facilitate stealing. Let me give a practical example. If you make a programming language called Java which runs byte-code on a virtual machine, I can't just steal it from you and sell it as-is. At the very least I have to change its name: so Java becomes C#, byte-code becomes IL, and the Java virtual machine becomes the CLR. The functioning doesn't have to differ, just as long as the name is different. Now you can't take me to court so easily.

>- In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

Who knows how that evil genius's mind works.

>- Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

I have already answered that in part in the first answer. I guess as more geniuses join our society, we have to increase the number of hard-to-use programming languages. How else are we going to separate the hardcore from the wannabes?

>- Why are they making HTML harder to use by taking away features and not replacing them?

Same as above

>- Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

Ah. Now that was a stroke of genius! Grudgingly, I have to admit so myself! When Microsoft first stole the GUI and sold it, they acquired a taste for money. As more and more victims started buying Windows, they realized that the fools could be exploited continuously! Here is what you do. You make an operating system now. Everyone buys it. Then you create a new one and drop support for the old one. This is how you force the fools to buy the new one. But since there are intelligent people such as yourself who might suspect the trick, you make the new one more complicated and pretend that that makes it better. A classic case of make it cheap, sell it cheap, sell it twice! (Well, in this case, more than twice!)

0

- Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

For the same reason Americans use the Imperial system of measurement while most Europeans use the metric system. Just because.

- Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

Don't know what you mean. C does not permit overloaded functions.

- Why does Microsoft change the name of everything from what others called it before (a "directory" suddenly became a "folder", and a "procedure" became a "method").

Microsoft had little, or nothing, to do with the naming of "procedure" and "method". The C and C++ languages do not have such a thing as a "procedure". Instead, they have functions and methods, and those two terms are often used interchangeably.

- In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

If you want to stick with the MS-DOS operating system then go right ahead. You can still run 16-bit applications in the newest version of MS-Windows and compile them with 16-bit compilers such as Turbo C.

- Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

They haven't been replaced by C. You can still use QBASIC if you want to. But you will probably not be able to do very much with it.

- Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

Not more useful? I suppose you don't think Vista is any more useful than MS-DOS version 1.0? :icon_eek:

0

Why do they make computers harder to use and program than they need to be? Here are some examples:

- Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

This is math: trigonometric functions use radians; it is the standard.

- In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

Computers became very complex, therefore languages became harder to learn and use -> that's why people stopped programming.

- Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

Same answer as above.

- Why are they making HTML harder to use by taking away features and not replacing them?

HTML is a simple language; if you consider it hard to learn, maybe programming is not for you.

- Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

No one makes you use memory-eating monster OSes like Microsoft products or heavyweight UNIX systems.

-1

> Why do they make computers harder to use and program than they need to be? Here are some examples:

Why do you assume that computers are harder to use and program, instead of assuming you were a lot smarter when you were younger?

> Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

Why do you expect them to behave differently than the behavior they have had in all the history of mathematics? Why do you expect it to default to a version of the function that's usually slower to implement?

> Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

What makes you think that having braces anywhere in its syntax makes a language "C-based"?

> In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

What makes you think programming was any harder to do?

> Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

Why do you act as if the invention of new programming languages means that older ones are replaced?

> Why are they making HTML harder to use by taking away features and not replacing them?

They haven't removed any features.

> Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

Yes they are.

0

- Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

I guess because radians simplify calculations and it's easy enough to turn the result in radians into degrees for human consumption. :)

- Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

I can only think of one: function(args). What's the other?

- Why does Microsoft change the name of everything from what others called it before (a "directory" suddenly became a "folder", and a "procedure" became a "method").

Maybe they were trying to make the names make more sense to the average computer user. The icon for a subdirectory in windows is a folder after all. :)

- In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

There's a huge learning curve from programming on a command line to programming GUIs.

- Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

Easy and hard are subjective. I think a lot of the newer languages are easier to use because they're really expressive. You can do a lot with a little bit of code.

- Why are they making HTML harder to use by taking away features and not replacing them?

The only features that are taken away are the ones that only a single dying browser supports and nobody uses. Like <blink>. ;)

- Why do the operating systems get more complicated and take up more memory?

Because users want more features and more flashy. :)

0

>Why do they make computers harder to use and program than they need to be? Here are some examples:

Are they? Never noticed. But I have more than half a braincell.

> - Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

Most people that count when it comes to spreadsheets and programming languages use radians.
All scientists and mathematicians for example. Others using those tools usually have little interest in mathematics, especially trigonometric functions (and those that do know how to do the conversion easily enough).

> - Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

Flexibility.

> - Why does Microsoft change the name of everything from what others called it before (a "directory" suddenly became a "folder", and a "procedure" became a "method").

The first is marketing: it was decided, after speaking with a lot of non-technical users, that the term more easily fits their concept of what they're doing (so it's to make computers easier to use). The second is not Microsoft's doing.

> - In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

In the 1980s hardly anyone was programming because hardly anyone had a computer.
The percentage of kids programming may have gone down, the total number is now greater than ever.
Ergo, your argument is once again bogus.

> - Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

Not hard to use at all... And they've not been replaced either. But you may have to look for them a bit harder, as they are no longer hyped; it has long since been discovered that they're less potent.

> - Why are they making HTML harder to use by taking away features and not replacing them?

They're not in fact, they're making it a lot more flexible and safe by making it harder to do stupid things and making it less easy to get away with blatant ignorance and mistakes that lead to ambiguous code (and then blaming the browser for not doing what you wanted it to).

> - Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

Precisely because they ARE more useful.

Of course idiot kids don't know the first thing about what they're ranting about and come up with silly ideas like yours, blaming their own incompetence and ignorance on a hostile world that doesn't give them everything they want on a golden platter even before they knew they wanted it.

0

> Why do they make computers harder to use and program than they need to be? Here are some examples:

Why do you assume that computers are harder to use and program, instead of assuming you were a lot smarter when you were younger?

Because most of the change came with the change to Windows.

> Why do most spreadsheets and programming languages do trigonometric functions in radians, when most people use degrees?

Why do you expect them to behave differently than the behavior they have had in all the history of mathematics? Why do you expect it to default to a version of the function that's usually slower to implement?

Because only mathematicians and Metric nuts use radians.

The trigonometric functions in math textbooks do not have an inherent preference for one measure over the other. They return the actual angle, not a particular measure of the angle.

> Why do most of the C-based languages (including Java, JavaScript, and Perl) have two different syntax forms to call functions with, depending on which function you want?

What makes you think that having braces anywhere in its syntax makes a language "C-based"?

Nothing. I have followed the development of most of the languages. (And the use of == for an equality comparison is more of a giveaway.)

> In the 1980s, kids all over were programming computers. With the change to Windows, this suddenly stopped, as Microsoft made programming a lot harder to do. Why?

What makes you think programming was any harder to do?

All of a sudden, schoolchildren stopped writing programs.

> Why have all of the easy-to-use programming languages been replaced with the hard-to-use C derivatives?

Why do you act as if the invention of new programming languages means that older ones are replaced?

Try to get them for Windows, and you will see what I mean.

Bill Gates designed Windows so you have to do most real programming in C variants.

> Why are they making HTML harder to use by taking away features and not replacing them?

They haven't removed any features.

They are in the process of removing quite a few features, with no practical and 100% workable replacements in sight:

- Centering images
- Quoting part of a numbered list as a reference from another work
- Hyperlinks to the middle of a page
- Creating multiple columns inside other objects

Do you really expect the transitional doctypes to continue to be supported?

> Why do the operating systems get more complicated and take up more memory? They aren't any more useful.

Yes they are.

I have not seen much added to the utility, other than the ability to use a mouse instead of a command line, and I have seen one feature removed.

I used to be able to rename a group of files using a template. This is no longer possible.

0

I guess because radians simplify calculations and it's easy enough to turn the result in radians into degrees for human consumption.

I guess it's less work for the people who make the languages.

I can only think of one: function(args). What's the other?

target.function(args)

Maybe they were trying to make the names make more sense to the average computer user. The icon for a subdirectory in windows is a folder after all.

Not originally. Originally the folder icon was a text file, and a box with lines in it was a directory. The change to Windows 3.0 made the change in names and icons.

There's a huge learning curve from programming on a command line to programming GUIs.

That's because Microsoft intentionally MADE it hard. They also require you to BUY an expensive kit to do any programming for programs intended to be sold. QuickBasic had all that was needed to make window objects. It just wasn't compatible with Microsoft's structure for Windows.

I think Microsoft saw itself losing the market to amateurs, and so decided to stop this "dangerous" activity to preserve its monopoly.

Easy and hard are subjective. I think a lot of the newer languages are easier to use because they're really expressive. You can do a lot with a little bit of code.

Wrong. They changed things to allow multitasking.

The old methods had static scoping of variables. The code always did what you told it to do. Now, you have to know about how the variables (and especially arrays and objects) are structured inside the computer in order to know whether or not a function will return the proper value. And they took away the ability to pause execution for a given time interval and to generate sounds or waveforms in the program.

This is not just an annoyance. It prevents certain kinds of scientific research from being done. MS-DOS computers are still in use for this research, much to the annoyance of computer support departments of research facilities.

The only features that are taken away are the ones that only a single dying browser supports and nobody uses. Like <blink>.

Not so. In a few years, you will not be able to do any of these:
- Center an image reliably
- Quote from a reference source only part of a numeric list
- Make a link to a point in the middle of a web page
- Create columns inside other objects
- Make existing pages written under the old standards work

Do you really expect the transitional doctypes to continue to be supported?

Because users want more features and more flashy.

Yes. BUSINESS users. Science has been sacrificed to please business.

0

Wrong. They changed things to allow multitasking.

Who do you mean by they? One reason operating systems support multitasking is because the CPU chips allow it, and as you know the CPU chip is not the operating system. Another reason is to allow programs to access lots more memory. Under MS-DOS the maximum memory possible was 640K, and the operating system and all device drivers consumed part of that. Programmers all over the world were screaming for help getting more memory. Well, Intel and Microsoft gave them their wish. If you don't like that then you are always free to remove MS-Windows and *nix from your computer and replace it with MS-DOS version 6.x; then you will be back to where we all were 15 years or so ago. Of course, you will not be able to play any of the current games or access the internet.

I used to be able to rename a group of files using a template. This is no longer possible.

Oh yes it is -- using Windows Explorer highlight a group of files then change the file extension of one of them -- they will all be changed to the same file extension.

And they took away the ability to pause execution for a given time interval and to generate sounds or waveforms in the program

What! Those are still available in Win32 API functions and *nix functions. The C and C++ languages never supported them as part of the language.

0

That's because Microsoft intentionally MADE it hard.

That statement is completely ridiculous. For one thing, GUI programming is more challenging than console programming on all operating systems, not just ones made by your "evil Microsoft". Secondly, Microsoft has no motive for discouraging amateur programming, and if you need any more assurance that developers are an important component of Windows, please watch Steve Ballmer's "Developers!" video.

They also require you to BUY an expensive kit to do any programming for programs intended to be sold.

Again you are wrong. The Visual Studio Express edition's license allows for commercial development, and it is more than adequate for most freelance programmers. In fact, for the longest time Microsoft was giving out and mailing free copies of Visual Studio 2005 Standard edition to everyone who watched their webcast series. I jumped at the opportunity and got two copies, completely free of charge.

QuickBasic had all that was needed to make window objects. It just wasn't compatible with Microsoft's structure for Windows.

For BASIC programming, Visual Basic is easy enough to use as a GUI development tool.

The old methods had static scoping of variables. The code always did what you told it to do. Now, you have to know about how the variables (and especially arrays and objects) are structured inside the computer in order to know whether or not a function will return the proper value.

C wasn't created by Microsoft. And you can use static variables if you want.

And they took away the ability to pause execution for a given time interval

Ever heard of sleep()?

Of course idiot kids don't know the first thing about what they're ranting about and come up with silly ideas like yours, blaming their own incompetence and ignorance on a hostile world that doesn't give them everything they want on a golden platter even before they knew they wanted it.

I think that statement sums up this thread pretty well.

1

Who do you mean by they? Well, Intel and Microsoft gave them their wish. If you don't like that then you are always free to remove MS-Windows and *nix from your computer and replace it with MS-DOS version 6.x; then you will be back to where we all were 15 years or so ago. Of course, you will not be able to play any of the current games or access the internet.

They gave BUSINESS its wish, at the expense of other users.

But we can't run the special scientific applications we need on Windows, so we have to use DOS.

Except that we can't find new computers that run DOS 6.2 anymore, and the old ones are dying.

Oh yes it is -- using Windows Explorer highlight a group of files then change the file extension of one of them -- they will all be changed to the same file extension.

But suppose I need to rename the series of files:

ted001.txt, ted002.txt, ted003.txt ... ted246.txt

to

bev001.txt, bev002.txt, bev003.txt ... bev246.txt

It doesn't work! I end up with:

bev001.txt, Copyofbev001.txt, Copy2ofbev001.txt ... Copy244ofbev001.txt

What! Those are still available in win32 api functions and *nix functions. C and C++ languages never ever supported them as part of the language.

But the languages they took away DID support them. They took those functions away because they don't work with Windows running all the time under them.

I wrote a DOS video game in GWBASIC to help children learn their math facts while they have fun. I was about to put it on the market, when WINDOWS happened. The game won't run right in the DOS shell (because those functions were taken away), and I have been searching for a way to make the game WITHOUT paying the MS "toll" on games developed using their development tools.

What the game needs that doesn't work anymore:

- The full screen, not a window.
- Specific pauses (not delayed activations) for time intervals.
- Keyboard scanning during the pauses, to detect an answer being input.
- Sounds generated instantly by the program to reflect what is going on on the screen (not canned sound clips).
- The ability to draw objects differently, based on what the user does (not canned images).
- ONE path of execution in a definite order, not multiple spawned processes

Everyone tells me I have to do this in assembly language, and take control away from Windows to do it. But if I do that, I also have to do my own I/O and memory refresh.

What was once easy to do has now become very complicated.

0

But we can't run the special scientific applications we need on Windows So we have to use DOS.

You can. Don't blame the world for moving on when you keep standing still and refuse to upgrade your software...

Do you also blame the world for making cars which are so fast and loud that they scare the horse that's pulling your carriage?
As that's what people did a hundred years ago...

0

Rofl Midi, this thread is hilarious...

But we can't run the special scientific applications we need on Windows So we have to use DOS.

Except that we can't find new computers that run DOS 6.2 anymore, and the old ones are dying.

Funny, seems to me a lot of people use *nix variants when they need specialized systems. Not that they necessarily need it for scientific needs, because as we all know, Cray makes some kickass gaming machines...

But the languages they took away DID support them. They took those functions away because they don't work with Windows running all the time under them.

I wrote a DOS video game in GWBASIC to help children learn their math facts while they have fun. I was about to put it on the market, when WINDOWS happened. The game won't run right in the DOS shell (because those functions were taken away), and I have been searching for a way to make the game WITHOUT paying the MS "toll" on games developed using their development tools.

Aside from DOS being 15+ years ago, you could have just ported your code. If you were shafted, I'm sure others were as well but they seem to have gotten by just fine...

As for the "toll" of having to pay to use someone's product, that's a load of crap. As has been mentioned, Microsoft has been fairly liberal about giving away copies of its development products, especially with the release of the Express editions. If you don't like those, then get another IDE.

What the game needs that doesn't work anymore:

- The full screen, not a window.
- Specific pauses (not delayed activations) for time intervals.
- Keyboard scanning during the pauses, to detect an answer being input.
- Sounds generated instantly by the program to reflect what is going on on the screen (not canned sound clips).
- The ability to draw objects differently, based on what the user does (not canned images).
- ONE path of execution in a definite order, not multiple spawned processes

- Games still do the first, so you should be good.
- see sleep() as mentioned
- or instead of sleep() you could poll for a keyboard event
- how about canned sound clips that play depending on what's happening on the screen?
- Doable.
- So don't spawn any.

Everyone tells me I have to do this in assembly language, and take control away from Windows to do it. But if I do that, I also have to do my own I/O and memory refresh.

What was once easy to do has now become very complicated.

Everyone you listen to is probably wrong. As for managing your I/O and memory, you certainly had to do that back in the glorious days you were referring to. Unless of course you were using a higher level language like BASIC which really isn't that different from most of the easy languages we have today.

0

What the game needs that doesn't work anymore:

- The full screen, not a window.
- Specific pauses (not delayed activations) for time intervals.
- Keyboard scanning during the pauses, to detect an answer being input.
- Sounds generated instantly by the program to reflect what is going on on the screen (not canned sound clips).
- The ability to draw objects differently, based on what the user does (not canned images).
- ONE path of execution in a definite order, not multiple spawned processes

I just found and started playing a new game, Dungeon Runners, on Vista Home edition. I like it a lot, and it does everything that you are complaining about in the quote above.

0

> I like it a lot and does everything that you are complaining about in the quote above.
Of course it supports all of it, AD; every commercial game out there has supported those for a long time.

0

> I like it a lot and does everything that you are complaining about in the quote above.
Of course it supports all of it, AD; every commercial game out there has supported those for a long time.

Yes, you know it, I know it, and apparently everyone else knows it too except the OP.

0

See what I meant about making it more complicated?

More complicated than what? MS-DOS? You can't really make that comparison because they are two entirely different animals. That's like trying to compare a horse and buggy with a Mercedes-Benz.

0

Funny, seems to me a lot of people use *nix variants when they need specialized systems. Not that they necessarily need it for scientific needs, because as we all know, Cray makes some kickass gaming machines...

But they WANT multiple events going on. I DON'T!

Aside from DOS being 15+ years ago, you could have just ported your code. If you were shafted, I'm sure others were as well but they seem to have gotten by just fine...

You don't know the half of it. The companies that made the special equipment we needed for the experiments stopped making the equipment, because nobody made an operating system it would work with. So, last year, they closed our labs.

As for the "toll" of having to pay to use someone's product, thats a load of crap. As has been mentioned, Microsoft has been fairly liberal about giving away copies of its development products, especially with the release of the Express editions. If you don't like those, then get another IDE.

The problem is not getting the development software. The toll comes when you go to market your program. Microsoft OWNS part of it, and wants copyright royalties.

- or instead of sleep() you could poll for a keyboard event

That's what I was doing.

- how about canned sound clips that play depending on what's happening on the screen?

That's what I DIDN'T want. I was calculating the sound pitch mathematically, based on what the user was doing.

- So don't spawn any.

That means I can't have any functions. It sends me back to the stone-age days of spaghetti-bowl programming.

Everyone you listen to is probably wrong. As for managing your I/O and memory, you certainly had to do that back in the glorious days you were referring to. Unless of course you were using a higher level language like BASIC which really isn't that different from most of the easy languages we have today.

I was using compiled BASIC. It is a lot different from the languages we have today, because there was a definite order of execution that you could implicitly control without having to play with handles and sleep.

Let's look at it this way.

When I started programming, the user's program was in control of all but the large multiuser computers. The operating system wasn't even running, unless the user's program wanted something. Then the operating system ran just long enough to satisfy the request, and then it shut off again. When the user's program ended, the operating system started to let the user choose the next program.

Now, like the MCP in the movie TRON, the operating system is in control. That's what BUSINESS wanted, so bosses could control what the employees do on the computers. But that has taken away the freedom to write programs which do things the operating system is not designed to allow.

The problem we could never get around in Windows or UNIX was the fact that the operating system always got its own timeslice. This caused serious problems:

- We needed to sample a set of sensors every 1/1000 second, and have changes ready to send out in 1/2000 second.
- If we let Windows do the I/O, we could sample the real world only once every 1/18 second, and we could not send a change out until another 1/18 second had passed.
- If we didn't let Windows do the I/O, but did our own port reading, we had a 1/50 second gap every 1/18 second when our program was not in control, because Windows got its timeslice.
- If we used DMA, we could have automated I/O equipment read at the 1/1000 second speed. But since our program was not in control at all times, we couldn't calculate the needed changes in time for the 1/2000 second response time needed.

We could do it in DOS, because the DOS timeslice lasted only 1/4000 second.
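The timing constraints described above can be sanity-checked with some back-of-envelope arithmetic. This sketch assumes the "1/18 second" figure refers to the classic 18.2 Hz PC timer tick (about 55 ms); the variable names are illustrative, not from any program in the thread.

```python
# Back-of-envelope check of the constraints above: 1 kHz sampling and a
# 0.5 ms response deadline versus a ~55 ms Windows timer tick and a
# 0.25 ms DOS timeslice.
sample_period = 1 / 1000        # required: one sample every 1 ms
response_deadline = 1 / 2000    # changes ready within 0.5 ms
windows_tick = 1 / 18.2         # classic 18.2 Hz timer tick (~55 ms)
dos_timeslice = 1 / 4000        # 0.25 ms, as described above

# Under a ~55 ms tick, dozens of 1 ms samples fall inside a single tick:
samples_missed_per_tick = windows_tick / sample_period
print(round(samples_missed_per_tick, 1))  # ≈ 54.9

# ...whereas the 0.25 ms DOS gap fits inside the 0.5 ms response budget:
print(dos_timeslice < response_deadline)  # True
```

In other words, losing control for one Windows tick costs roughly 55 consecutive samples, while the DOS timeslice is short enough to hide inside the response deadline.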

0

That means I can't have any functions. It sends me back to the stone-age days of spaghetti-bowl programming.

Not true. This once again shows your utter ignorance.
Method calls don't start new threads you know...
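The claim that ordinary function calls don't spawn threads is easy to verify directly. A minimal check (in Python here, but the same holds for C-family method calls):

```python
# A plain function call runs synchronously on the caller's own thread;
# no new thread is created by the call itself.
import threading

def f():
    # Report which thread is executing this function body.
    return threading.current_thread().name

# The function body runs on the very same thread that called it:
print(f() == threading.current_thread().name)  # True
```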

0

Not true. This once again shows your utter ignorance.
Method calls don't start new threads you know...

That website seemed to imply that they do.

Something did. Maybe it was the system I/O calls.

We wrote a C++ program to do a simple sequence for an automated experiment:

1. Turn on the lights, so the subject knows to begin.
2. Measure exactly 4 seconds of subject activity at 1/1000 second intervals, using position and force sensors. Put it all in a 2-D array.
3. Save the entire array to a disk file after the trial is over.

I wrote the code so that each of these things was done in the given order.

But that's not what we got.

- Our data were collected not once every 1/1000 second, but once every 1/18 second. So the experiment code ran for 3 minutes and 42 seconds if left to itself.
- Each time a "one millisecond" block of data was collected, the data were immediately saved to disk, without waiting for the entire array to be filled.
- After all of the disk I/O was finished, it turned on the lights to tell the subject to begin.

Somehow the compiler or operating system decided which I/O processes were "more important" from its own viewpoint, and changed the order of events to be "more efficient".
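The intended structure of the experiment above (collect everything in memory first, touch the disk only after the trial) can be sketched like this. The function and parameter names are illustrative stand-ins, not the original program's, and the sketch assumes the fix is simply to keep all slow, reorderable I/O out of the sampling loop:

```python
# Sketch of the intended sequence: fill the whole array in memory during
# the trial, and defer any disk I/O until after the last sample.
def run_trial(read_sensors, n_samples=4000):
    data = []
    for _ in range(n_samples):       # step 2: sample into the array
        data.append(read_sensors())  # no disk I/O inside this loop
    return data                      # step 3 (saving to disk) happens later

# Dummy sensor returning one position and one force reading:
samples = run_trial(lambda: [0.0, 0.0], n_samples=5)
print(len(samples))  # 5
```

Of course, as the post describes, this only works if the runtime and operating system actually honor the written order instead of interleaving the I/O on their own schedule.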

We never got to fully troubleshoot it. The administrator said we had wasted too much time on it (HIS project was delayed by our working on this project). Then he threw the new $5000 system in the dumpster, and told us to go back to the old DOS one.

0

So in DOS you were changing the default timer?

And if you needed an accurate timer in Windows you chose not to use one?

[edit]If hard real time is a constraint, have you considered devices which support this?

0

Computers are getting harder to use and program?

You obviously have not seen one of the early computers, where all the programming was done with binary switches!

0

But they WANT multiple events going on. I DON'T!

You don't know the half of it. The companies that made the special equipment we needed for the experiments stopped making the equipment, because nobody made an operating system it would work with. So, last year, they closed our labs.

The problem is not getting the development software. The toll comes when you go to market your program. Microsoft OWNS part of it, and wants copyright royalties.

That's what I was doing.


I understand the problems you mention because I too worked for a company that manufactured large-character printers on assembly lines that printed information/barcodes on the outside of product containers. The computer programs I wrote communicated with those printers and scanners in real time. It worked with MS-DOS 6.X because that was the closest thing we had to a real-time OS. But as you know, MS-DOS is pretty much dead today.

I have not used it myself, but there is a third-party add-on that makes MS-Windows a real-time OS.

>>That's what BUSINESS wanted
It wasn't all businesses that wanted it. Every home in America had a copy of Windows 95. Now nearly everyone at home uses XP or Vista. And that is NOT business, but users like you and me. People voted with their wallets, and they voted FOR operating systems that do many things concurrently.

0

So in DOS you were changing the default timer?

No. But I could read it at any time (unlike Windows). So I read it when the event to start the timing occurred, calculated the proper time to end the interval, and waited for it, and other events, by polling.
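The read-the-clock-and-poll approach described above can be sketched as follows. This uses Python's `time.perf_counter` as a stand-in for reading the DOS timer directly; the function names and the 10 ms interval are illustrative, not from the original program:

```python
# Illustrative polling loop: read a high-resolution clock, compute the
# end of the interval, then busy-wait, checking other events on every
# pass through the loop.
import time

def wait_until(deadline, check_event=lambda: False):
    """Busy-wait until `deadline` (in perf_counter seconds), polling an
    event predicate each pass; return True if the event fired first."""
    while time.perf_counter() < deadline:
        if check_event():
            return True
    return False

start = time.perf_counter()
fired = wait_until(start + 0.01)  # 10 ms interval, no event source
print(fired)  # False
```

The point of the technique is that the program, not the OS scheduler, decides exactly when the interval ends and what happens next; the cost is burning the CPU the whole time, which was acceptable on a single-tasking system like DOS.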

And if you needed an accurate timer in Windows you chose not to use one?

- I couldn't start the timer within the 500 microsecond deadline after the moment the event occurred, because I couldn't detect the event until Windows let me do the math (note that detecting the onset event required a calculus first-derivative calculation on the fly from values which were just read).

- If Windows was using its own timeslice when the interval ended, the stimulus selection was delayed until the Windows timeslice ended.

- All of the systems proffered for this kind of experiment used a "time stamp" to record the exact moment of the event from the system clock, and used software to sort it out after the trial. But I can't put a timestamp on the "hardware" of the organism under study, somehow telling it that the stimulus should have really occurred 10 milliseconds earlier.

- The first-derivative calculation was not the kind of event which triggers timer events by itself. The program had to have control at the moment it occurred.
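The on-the-fly first-derivative onset detection mentioned above amounts to differencing successive sensor readings and triggering when the rate of change crosses a threshold. A hedged sketch, with made-up threshold and sample-period values purely for illustration:

```python
# Finite-difference onset detection: trigger at the first sample where
# the rate of change of the sensor reading exceeds a threshold.
def onset_index(readings, dt=0.001, threshold=50.0):
    for i in range(1, len(readings)):
        deriv = (readings[i] - readings[i - 1]) / dt  # first derivative
        if deriv > threshold:
            return i      # event detected at this sample
    return None           # no onset in this trace

trace = [0.0, 0.0, 0.01, 0.2, 0.9]  # position ramps up at the end
print(onset_index(trace))  # 3
```

As the post explains, the hard part is not the arithmetic but doing it between samples: the program must be in control at the moment the derivative crosses the threshold, which is exactly what a preempting OS timeslice prevents.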

[edit]If hard real time is a constraint, have you considered devices which support this?

I built one. And I thought I had solved the problem.

But it required bit-level port programming, and calibrating the analog first-derivative circuitry's trigger level and time constant to match the subject organism took over an hour of repeated trials (the old DOS method needed two trials). Only one scientist knew enough about port programming to use it. The others demanded a software solution (they are the kind who think that increasing the processor speed would solve this kind of problem). But a user-friendly driver used up too much time.

0

I was thinking of something like this to grab the data at hard intervals and pump the data to the PC via USB. [edit]Maybe even do a little data processing.

0

We did try one of these:

Onset TT8

It was able to do half of the job (it had either a speed limit or a number of channels limit). But it again required bit-level programming. And the user interface required DOS to write the programs (they have since fixed this).
