Excellent! Thank you Gribouillis...

Thanks Gribouillis. So (sorry if this is an obvious question) would it be correct to say the main purpose of __name__ is to identify what module a particular function/class was defined in (as opposed to where it is called from)? Other than passing self or something similar as an argument to a function, is there any dynamic built-in variable a function can use to print out where it was called from, not knowing ahead of time how it might be used (other than toggling the debugger)?

By the way, does nametest.__name__ = .... reset the original variable or create a new one?

#module nametest

def showName():
    print("__name__ is: " + str(__name__))

If I import nametest into another module or into the shell interpreter and call nametest.showName(), I find that __name__ == "nametest"; in other words, __name__ gets the name of the module it is a built-in member of?

The only exception I know of is if I define showName directly in the shell without importing it from a module; then __name__ shows "__main__". Is there any other way to get the function showName() to print a value other than "nametest"?
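If I understand the inspect module correctly, a function can also walk the call stack to report where it was called from, without knowing ahead of time how it will be used (the function names here are my own):

```python
import inspect

def show_caller():
    # inspect.stack()[1] is the frame record of whoever called us
    caller = inspect.stack()[1]
    print("called from module:", caller.frame.f_globals.get("__name__"))
    print("called from function:", caller.function)

def wrapper():
    show_caller()

wrapper()   # reports wrapper() and the module wrapper lives in
```

Unlike __name__, which is fixed at definition time, the stack is inspected at call time, so the same function reports different callers on different calls.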

In Python:

#python
class myClass:
    istat = 111  # exists as soon as the class is defined (at import)
    def __init__(self):
        print("in ctor")
        self.iauto = 222 #does not exist until instance created
.....
mc1 = myClass()
mc2 = myClass()

In C++:

//C++
class myClass
{
public:
    static int istat;
    int iauto;
};
.....
//main
myClass mc1, mc2;

I am trying to refresh my C++ and learn Python. It is interesting to compare them. Would the following be correct (regarding static class members of each language?)

  • Python: class members are static by default, and their values can be read through either the class or instances of the class. A change propagates to all instances only if it is made through the class; if the member is assigned through an instance, that creates essentially a new member for that instance which masks the class member. Only static members are defined in the class body; to create an automatic variable you create it in the constructor, so it is defined only on instances and the class itself does not know about it. (The class is not actually redefined for each instance; each instance simply carries its own attribute dictionary.)

  • C++: class members are automatic by default. If declared static, the value can be changed either through the class (myClass::istat) or through an instance (mc1.istat) and the change will propagate to all instances in that space. Both static and auto members are declared in the class, and every instance sees the same set of members; there is no per-instance redefinition.
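The masking behavior can be demonstrated directly; the class and attribute names mirror the Python snippet above:

```python
class MyClass:
    istat = 111                # class attribute, shared by all instances

a = MyClass()
b = MyClass()

MyClass.istat = 999            # change through the class: visible everywhere
assert a.istat == 999 and b.istat == 999

a.istat = 5                    # assignment through an instance creates a NEW
                               # instance attribute that shadows the class one
assert a.istat == 5
assert b.istat == 999          # b still sees the class attribute
assert MyClass.istat == 999    # the class attribute itself is untouched

del a.istat                    # remove the shadow; the class attribute shows through
assert a.istat == 999
```

Lookup goes instance dictionary first, then class, which is why the instance assignment masks rather than overwrites the shared value.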

Thank you for all your help.

A fresh import reads from disk only if there is no module in sys.modules['myScript']. This module may exist even if the name myScript is not defined in the current global namespace.

Thank you! Exactly what I needed to know! So closing and re-opening the shell loses (or may lose) the definition but does not clear or delete from sys.modules. It seems a bit counter-intuitive but explains everything. Thank you "Gribouillis"

I am finding myself a little confused about how Python imports. Would the following be a correct statement:

In any Python interactive shell, if the shell (since it was opened) has never before imported "myScript.py", then if the user types "myScript" he will get an error like: NameError: name 'myScript' is not defined. At this point, entering the command "import myScript" will import myScript.py from the native OS file system, i.e. from a Windows folder or Unix directory, based on a search path given by "sys.path". Even if "myScript" is defined or has been imported in some other currently existing Python shell, the shell we are currently in will import from disk and not from the other shell.

The reason I ask is, the Blender 3D application I am working with comes with a built-in Python shell. Blender can run continuously while the user chooses to open, close, and re-open the built-in Python shell. If, after importing "myScript" into a shell, I close the shell, make some changes to myScript.py through an external editor, and re-open a new Python shell, the definition for "myScript" is no longer there (NameError: name 'myScript' is not defined). But when I import myScript, it imports the old version, so apparently Blender has saved the old version somewhere and is allowing the Python shell to reload it. This seems to violate the rule that if not currently defined in the namespace, import myScript should always import from the native OS file system, never from some ...
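The caching behavior described above can be observed directly with sys.modules and importlib (shown here with the stdlib json module rather than a hypothetical myScript):

```python
import importlib
import sys

import json                        # first import: loaded from disk, cached
assert "json" in sys.modules       # the cache belongs to the interpreter
                                   # process, not to any one shell/namespace

cached = sys.modules["json"]
import json as json2               # second import: served from the cache,
assert json2 is cached             # the disk is not touched again

reloaded = importlib.reload(json)  # force a re-read of the source from disk
assert reloaded is cached          # re-executed in place, same module object
```

This matches the Blender symptom: closing the shell destroys the shell's namespace (hence the NameError) but not the interpreter's sys.modules cache, so the next `import` finds the stale module unless it is explicitly reloaded.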

I am writing a script in Blender 3D, which (Blender) uses a built-in Python interpreter as its API. There's too much code to paste here, but essentially the script does a 3D mesh manipulation. Because I have not found any info yet on how to debug interactively, I run from the console and break the script into as many small functions as possible for debugging (vertex selection, vertex transform, etc.). Sometimes I'll modify 2 or 3 functions, then reload the module, which runs main and all sub-functions. Sometimes I reload the module but just want to test one or two sub-functions manually without calling main. I was trying to (at the Python command line) control how I import the module, i.e. "imp.reload(xxx) run_main" or "imp.reload(xxx) dont_run_main", then "xxx.f1()" and "xxx.f5()" etc...

import os

def a():
    print("A")

def b():
    print("B")

# I want to sometimes skip the main body
if os.path.isdir("C:\\Blender\\mainbody"):
    print("main body of module_A")

Is there a way to control whether or not the main body of a module gets executed by passing in arguments to the import command, such as

import module_A "mainbody"

Right now I am controlling it by checking for existence of a directory, I know I could also open some kind of config file which is essentially the same approach but how about just passing in an argument along with import xxx ????
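As far as I know the import statement takes no arguments, but a common workaround is to move the module's "main body" into a function so that importing (or reloading) never runs it; the caller decides when to. The layout below is my own sketch (f1/f5/main mirror the names in the snippet above), not a Blender feature:

```python
# module_A, restructured so import has no side effects

def f1():
    return "f1"

def f5():
    return "f5"

def main():
    # everything that used to run at module level goes here
    return "main body of module_A"

# After `import module_A` (or `imp.reload(module_A)`) only the definitions
# above execute; then at the interpreter prompt you choose:
#   module_A.main()   # run the main body on demand
#   module_A.f1()     # or exercise sub-functions individually
```

This replaces the directory-existence check entirely: "run main" and "don't run main" become two different calls after the same reload.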

Thanks! I find it a little confusing, but I think what you are saying is that since I didn't pass by ref, a local copy of arrayParam was created in calledFunc(), but only the starting address was copied, not the entire array. Any references to array elements in calledFunc therefore changed the original array, but when I tried to reset the address I was resetting a local copy of the address, so it was not returned. I hope I got that right?

Just starting out with C#, thought (incorrectly) arrays were automatically passed as ref. Thanks!

[CODE]
private void testFunc()
{
    int[] arrayParam = new int[88];
    arrayParam[2] = 67;
    calledFunc(arrayParam);
    Console.WriteLine("testFunc Done");
}

private void calledFunc(int[] intArray)
{
    Console.WriteLine(intArray[2]);
    // if a new array is allocated here, the caller's array is unchanged
    intArray = new int[66];
    intArray[2] = 33;
    intArray[5] = 55;
}
[/CODE]

If calledFunc simply changes array values, the changes are passed back to the calling method. But if calledFunc allocates a new array, this is not passed back to the caller. How do I get a called method to have the ability to either change existing array elements, or pass back a new array and its new values?
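The same distinction (mutating through the reference vs. rebinding the parameter) exists in Python and can be sketched there; for the C# side I believe the usual fixes are the ref/out keywords or simply returning the new array, but the sketch below only demonstrates the mutate-vs-rebind behavior itself:

```python
def mutate(arr):
    arr[2] = 33          # mutates the caller's list in place

def rebind(arr):
    arr = [0] * 66       # rebinds the LOCAL name only; caller unaffected
    arr[2] = 99

a = [0] * 10
mutate(a)
assert a[2] == 33        # in-place change is visible to the caller
rebind(a)
assert a[2] == 33        # the rebind inside rebind() was never seen here

def replace(arr):
    return [0] * 66      # the portable fix: return the new array

a = replace(a)           # caller rebinds its own name to the new array
assert len(a) == 66
```

In both languages the parameter holds a copy of the reference, so element writes are shared but assigning a whole new array to the parameter only changes the local copy.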

I have an OpenGL program using GLUT which works fine on my Windows XP desktop using Visual C++ Express. I copied it over to my Vista laptop 64 bit and I copied the glut3.dll into Windows/system32 but although it compiles, when I try to run it I get a glut32.dll not found error.

Is there an OpenGL download for Vista 64? I tried looking at the OpenGL website and it is very confusing, I think even Vista 64 still uses glut32.dll?

Has anybody else encountered the glut32.dll not found problem on Vista, or can anyone give me some OpenGL code that works for them on Vista 64?

Thanks!

With one you need to debug functions, with the other you need to function then debug.

I am reading "Database Systems" by Kifer, Bernstein, and Lewis. In chapter 19 on transaction models, it discusses how a flat transaction can adhere perfectly to ACID, but other transaction models are needed that sacrifice some of the ACID properties (such as isolation) for gains in performance, flexibility, etc...

The archetypal example given is a flat transaction for a multiple segment international flight reservation: London/NY/Chicago/DesMoines with oversea and domestic trip segments where if a domestic segment (NY/Chic/DM) needs to be rerouted (NY/St Louis/DM) a flat transaction has to abort completely and loses a successful subtransaction (London/NY), but with savepoints after each segment the transaction can roll back to NY and reroute from there.

I have 2 questions:

If adding savepoints is the only change made to the flat transaction, does it still belong to the flat transaction model category or is it called something else (it is not Distributed, Nested, or Chained, what is it called: flat with rollbacks?)

Which of the ACID properties, if any, are sacrificed by adding the savepoints? It seems to me even with savepoints/rollbacks it is still atomic, consistent, isolated, and durable. Is the only drawback the cost of creating the savepoints?
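The savepoint mechanics of the flight example can be sketched with SQLite's SAVEPOINT support via Python's sqlite3 module (the table name and leg values are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None          # take manual control of transactions
cur = con.cursor()
cur.execute("CREATE TABLE segments (leg TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO segments VALUES ('London/NY')")
cur.execute("SAVEPOINT after_overseas")          # partial-rollback point
cur.execute("INSERT INTO segments VALUES ('NY/Chicago')")
cur.execute("INSERT INTO segments VALUES ('Chicago/DesMoines')")

# the domestic leg must be rerouted: roll back to the savepoint,
# keeping the successful London/NY subtransaction
cur.execute("ROLLBACK TO after_overseas")
cur.execute("INSERT INTO segments VALUES ('NY/StLouis')")
cur.execute("INSERT INTO segments VALUES ('StLouis/DesMoines')")
cur.execute("COMMIT")

rows = [r[0] for r in cur.execute("SELECT leg FROM segments")]
print(rows)    # the Chicago legs are gone; London/NY survived
```

Note the whole thing is still one transaction: nothing becomes durable until the final COMMIT, which is consistent with the intuition above that savepoints add partial rollback without giving up atomicity or durability.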

Here is some old VBA code inside a Word 2003 document to open and then save a word document:

[code]
Documents.Open FileName:=docFileName, ConfirmConversions:=True, _
    ReadOnly:=False, AddToRecentFiles:=False, PasswordDocument:="", _
    PasswordTemplate:="", Revert:=False, WritePasswordDocument:="", _
    WritePasswordTemplate:="", Format:=wdOpenFormatAuto

   'do stuff to the opened file here...

    ActiveDocument.SaveAs _
         FileName:=docFileName, _
         FileFormat:=wdFormatDocument, _
         LockComments:=False, _
         Password:="", _
         AddToRecentFiles:=True, _
         WritePassword:="", _
         ReadOnlyRecommended:=False, _
         EmbedTrueTypeFonts:=False, _
         SaveNativePictureFormat:=False, _
         SaveFormsData:=False, _
        SaveAsAOCELetter:=False

[/code]

I ported the code to a Word 2007 document, but now if I open a Word 2007 doc, the save code still saves it in the old format even if I specify a .docx extension. What are the new values for FileFormat:= ..... now that MS is using the new Open XML format? Can I save as either the new or the old format? Are there any other pitfalls to file I/O with the new MS open format?

Thanks ahead, for any help

This is pertaining to debugging VBA macros for Word 2003. I am debugging a document with a LOT of buttons which launch macros. If I place a breakpoint in a subroutine and execute the macro by running it from the VBE debugger, execution stops at the breakpoint. But is there a way to have the VBE debugger launched as soon as I call a macro from a document, even if VBE is not open? E.g. if buttons a, b, c, ... z launch macros, I want to open the document, click on a, b, or c, ... z, and have the appropriate macro code for a, b, ... appear in the VB Editor window.

The only way I could think of is putting a divide-by-zero error at the beginning of each macro and selecting Debug when the error msgbox pops up. One problem is I'm not even sure which macros are being run by each button, because there are so many template files/macro names, and some of the macro names are the same in different templates. It would be so much easier if I could specify that any macro being run should automatically pop up in the VBE.

In VC++ 2008 Express, I open a new "Hello World" project (CLR Console Application). If I right-click the project in the solution explorer, and select "properties", I get a "... Property Pages" window which shows a line entry for: Configuration Properties / C,C++ / General / Additional Include Directories. It also has a line entry for: Configuration Properties / Resources / General / Additional Include Directories. (Both lines are empty).

If I go to the main menu, and select: Tools / Options / Projects and Solutions / VC++ Directories, and select "Show Directories for Include Files", I see another multiline entry. This one lists many directories, such as:
$(VCInstallDir)Include
....
$(FrameworkSDKDir)Include
....
etc...

Why so many places to specify include directories? Could I place my list of include directories in any and/or all of those places? And secondly, where in the IDE can I find out what the environment variables $(VCInstallDir) etc.... are set to? (Without going out to a command window and checking $PATH.)

Thanks ahead of time :-)

Rich

Well I made it through my first semester of an MSCS. Almost. This summer I want to build a game. I am trying to understand a little bit of the history of Windows graphics and am a bit confused. There's WinAPI. There's OpenGL. There's DirectX.

As I understand it, OpenGL was initially offered on NT platforms for high-end engineering (CAD) apps and required high-end (at the time) hardware. But isn't it a Unix-based graphics library? So then some guys at Microsoft (Eisler, Engstrom, etc...) decided lower-end PCs needed graphics libraries for games and introduced DirectX. Direct3D is the subset of DirectX for 3D routines specifically.

Does Microsoft still offer OpenGL? Is it open source, or must it be bought from SGI? If a game program calls a DirectX routine which can be done in hardware with the right graphics card, is DirectX responsible for implementing the call either through the available hardware (faster) or through its own software routine (slower)? When does DirectX know the graphics card capabilities: at run time, or when the game is compiled? Where does WinAPI fit into all this; does it sit on top of DirectX, or is it totally separate? What is a good source of reading for all of this?

Thanks ahead of time.

In XP, or from the DOS command line, is there a nice way to list the objects inside a .dll or .lib file?

[QUOTE=Salem;591139]Does your linear algebra text book describe a "less than" operation?

If so, then it would be perfectly possible to write some code to do the same thing.

Now whether std::vector implements that or not is for you to research, or maybe you can implement your own vector class.[/QUOTE]

I had not thought of looking at the C++ vector class; that was a good point. It would appear std::vector supports comparisons on a lexicographic basis, in which case {2,3,3} is less than {2,3,4}. That's fine, but what I was really wondering is not how to implement comparisons programmatically but whether "<" or ">" are defined (for row/column vectors) from a mathematical, linear algebra standpoint. I don't think so; I don't see anything in any books I have. They only talk about "=", "+", "-", and "*".

In linear algebra a 1 by m array of (let's say integers) can be considered a vector (row or column). Given 2 row vectors of the same size, can they be compared using <,> ? If so, what relation between individual elements in the arrays is required for "less than" to be true?

Is vector {2,3,3} < vector {2,3,4}...

...or does every element have to be less (i.e. {1,2,3} < {2,3,4} but not so for {2,3,3})
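The two candidate meanings of "<" from the question can be put side by side in a short Python sketch (elementwise_lt is my own helper name):

```python
a = (2, 3, 3)
b = (2, 3, 4)

# Lexicographic order -- what Python tuples/lists (and, apparently,
# C++ std::vector) implement: the first differing element decides.
assert a < b

# Componentwise (elementwise) order -- the partial order sometimes
# used in mathematics: EVERY component must be strictly less.
def elementwise_lt(u, v):
    return all(x < y for x, y in zip(u, v))

assert not elementwise_lt(a, b)          # 2 < 2 fails in the first slot
assert elementwise_lt((1, 2, 3), b)      # every component is smaller
```

Note the componentwise version is only a partial order: pairs like (1, 5) and (5, 1) compare false in both directions, which is one reason textbooks rarely define "<" for vectors at all.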

It's tough being 50 and back in college! If anyone has recently taken an Algorithms class, my question is this: Is the proof for Prim's algorithm any different than for Kruskal's? I understand both algorithms are greedy in the sense of picking the least-weight edge each iteration. Prim only adds the next least expensive edge that can connect to the existing, growing subtree, whereas Kruskal just picks the next least-cost edge period, as long as it creates no cycles, so that eventually all subtrees connect.
I am finding many proofs for Kruskal (e.g. Wikipedia), but have trouble finding proofs for Prim. The Kruskal proof, as far as I can see, in plain English:

  1. Assume a growing, subset tree that both Kruskal and MST have in common.
  2. Look at the first edge "e" = (x,y) added by Kruskal that does not yield an MST (vertex x is in common subset, vertex y is not).
  3. The MST must eventually reach y using an edge "f" not in Kruskal (MST + e forms a cycle containing edge f of the MST).
  4. Since f came after e, it must be that cost f >= cost e.
  5. Since MST + e forms a cycle through y, replace f with e in the MST and you still have an MST.
  6. Repeat process and eventually the MST morphs into Kruskal

Is the proof for Prim identical? Since the Prim algorithm is more constrained, it seems to me there should be ...
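For concreteness, the greedy choice both proofs defend ("the cheapest edge crossing the cut from the tree to the rest of the graph is safe") is exactly the step a minimal Prim implementation makes each iteration; this sketch and its example graph are my own:

```python
import heapq

def prim_mst_weight(graph, start):
    """Total weight of an MST grown from `start` (graph: node -> [(nbr, w)])."""
    seen = {start}
    # heap of (weight, u, v) candidate edges leaving the current tree
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    total = 0
    while heap and len(seen) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in seen:
            continue               # both ends already in the tree: skip
        seen.add(v)
        total += w                 # cheapest edge crossing the cut is safe
        for x, wx in graph[v]:
            if x not in seen:
                heapq.heappush(heap, (wx, v, x))
    return total

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("a", 1), ("c", 2), ("d", 6)],
    "c": [("a", 4), ("b", 2), ("d", 3)],
    "d": [("b", 6), ("c", 3)],
}
print(prim_mst_weight(graph, "a"))   # MST edges a-b(1), b-c(2), c-d(3) -> 6
```

The difference from Kruskal is only *which* cut is considered: Prim always cuts between the single growing tree and everything else, which is why the same exchange argument applies with the cut fixed to the tree built so far.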


Thanks for the feedback. Sounds like there's hope, even for an old salty dog such as myself. Funny thing, I now find myself back in college, I figured a Masters in Computer Science might add some legitimacy to my quest. I'm by far the oldest person in my classes, but I'm having a lot of fun. I was in such a hurry when I was undergrad, get the heck out and get a big $$$ IT job. Well, I guess I'll just see what happens! (life is that thing that happens while you're working furiously on your plans).

I was a Unix, C, C++ programmer for 17 years then got out of IT during the 2001 dot com crash (to teach Junior/High School). Teaching was/is rewarding but I want to finish my working years in IT. I've been taking some steps (updating my training, etc...) to return to IT but am wondering if my age will be a barrier. I wanted to ask you code warriors out there if you have many fifty year old programmers in your IT groups? All I know is the few interviews I've been to, the hiring managers and programmers all look very very young to me! If you were a hiring manager, would you prefer a college grad vs. an old man?!