MattEvans

Ah, OK. There shouldn't be any problem with doing what you're doing from a DLL. I've only ever written DX applications that are statically linked, and DLLs that don't do anything DX, so I don't know for sure whether there are any specific issues with DX calls in DLLs, but I'd think it's unlikely to be a problem.

What is the return value from CreateDevice?

MattEvans

Oh, and the reason that a D3DPRESENT_PARAMETERS object 'just works' is that D3DPRESENT_PARAMETERS is a value (structure) type, not a pointer type.

Learn this distinction ASAP.
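A quick sketch of the difference:

D3DPRESENT_PARAMETERS pp;          // structure: declaring it gives you a real, usable object
ZeroMemory ( &pp, sizeof ( pp ) ); // safe to fill in immediately
LPDIRECT3DDEVICE9 dev;             // pointer: declaring it gives you no device at all,
                                   // just an uninitialized address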

MattEvans

You're just declaring a pointer variable without actually setting it to anything, so what you're getting is 'worse' than a null pointer: it's a pointer to some random location. The message in the screenshot is caused by the debugger trying to treat whatever's at that random location as a Direct3D device, which it almost certainly isn't.

You have to call a function (and I don't remember which one) to actually create and return a Direct3D device, then store a pointer to it in a LPDIR...CE9 variable.
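(From the earlier post, the call in question is CreateDevice on the IDirect3D9 interface.) A minimal sketch of the usual sequence, assuming hWnd is a valid window handle, and skipping error handling:

LPDIRECT3D9 d3d = Direct3DCreate9 ( D3D_SDK_VERSION );

D3DPRESENT_PARAMETERS pp;
ZeroMemory ( &pp, sizeof ( pp ) );
pp.Windowed = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

LPDIRECT3DDEVICE9 device = NULL;
HRESULT hr = d3d->CreateDevice ( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                 &pp, &device );
// only use 'device' if SUCCEEDED ( hr ) and device != NULL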

Are you following a tutorial or trying to assume/guess how to do things? I would suggest looking at some working source code that uses the API before trying to use it yourself.

MattEvans

What happens when you use the same mesh, different textures and no shader?

What you're doing looks right to me, assuming that all of your variables and functions are 'correct' with respect to what it is you're doing.

MattEvans

You call this function multiple times right? Via a timer callback or in a while loop with a sleep?

If so, you're not keeping the rect2 variable around: every time the function is called, the variable is reset to its initial value, and any changes you make are lost when the function exits.
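A sketch of the simplest fix (assuming rect2 is a WinAPI RECT, and 'Render' is a stand-in for your function): move the variable out of the function so it persists between calls (a class member works too).

static RECT rect2 = { 0, 0, 64, 64 }; // file scope: initialized once, keeps its changes

void Render ( )
{
	// ...move/modify rect2 here; the changes now survive to the next call...
}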

Not sure why the image appears duplicated. That sounds like you're not clearing the draw buffer properly... I don't have much recent experience with DirectX though, so can't help you much there.

MattEvans

You can use GLUT (http://www.xmission.com/~nate/glut.html) or SDL (http://www.libsdl.org/) or other similar libraries, which basically abstract away the underlying OS windowing and input mechanisms such that you can theoretically* write OpenGL code that can compile for a large number of operating systems.

*that is, provided you don't rely on OS-specific features elsewhere

It's not 'slow' either, since GLUT/SDL calls translate into equivalent WinAPI+WGL calls on Windows, and GLX calls on X-based platforms.

Depending on what you want to do: GLUT is very easy to set up; you can get started with about 10 lines of (GLUT) code. It's good for, say, making a demo or testing something in a single OpenGL window... and there's no reason you couldn't use it to do something more complicated, but the input handling it provides is somewhat limiting.
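For illustration, a minimal GLUT program really is about that size (this sketch just opens a window and clears it):

#include <GL/glut.h>

void display ( void )
{
	glClear ( GL_COLOR_BUFFER_BIT );
	glutSwapBuffers ( );
}

int main ( int argc, char ** argv )
{
	glutInit ( &argc, argv );
	glutInitDisplayMode ( GLUT_DOUBLE | GLUT_RGB );
	glutCreateWindow ( "hello" );
	glutDisplayFunc ( display );
	glutMainLoop ( );
	return 0;
}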

SDL takes longer (more code) to set up and is a more complicated library, but it handles a wider range of input devices, and OS-independent threads. Plus it has a load of companion libraries that can handle graphics import, audio, networking and so on.

If you want OpenGL in a windowed application, e.g. if you want to create some 'forms' with OpenGL panels, the WinAPI might be the way to go on Windows. However, UI toolkits like GTK, Qt, wxWidgets, etc., allow you to use OpenGL panels in a similar manner, and keep your code OS-independent at the same time.

So, depending on what …

MattEvans

The machine uses 1 and 0 because it's easier to do binary arithmetic (in electronics) than denary arithmetic. 1 and 0 map very well to the intrinsic states 'on' and 'off'.

You can make bigger numbers in base 2 (binary) using only 0 and 1, just the same as you can make bigger numbers in base 10 (denary) with only 0,1,2,3,4,5,6,7,8 and 9. You can represent any base-10 number in base 2, so it doesn't really matter.
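For example, 1101 in base 2 is 1*8 + 1*4 + 0*2 + 1*1 = 13 in base 10; the place values are powers of 2 instead of powers of 10.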

Base 10 is quite arbitrary anyway. Why do you consider 0,1,2,3,4,5,6,7,8, and 9, as 'enough' unit numbers... why not 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E and F? Or base 200 with some new unique symbol for each unit?

MattEvans

In DirectDraw (a 2D API) you could set pixels directly. I've not used either for a long time, and I'm pretty sure that DirectDraw was discontinued a while back.

This kind of thing is going to be the way to go in DirectX: http://www.drunkenhyena.com/cgi-bin/view_net_article.pl?chapter=2;article=20

Basically, DirectDraw was a 2D API with fast functions for blitting and pixel plotting in absolute screen coordinates; DirectX is a 3D API, so you get the most efficiency by talking to it in terms of 3D primitives (points, lines, triangles, etc.), textures, and transforms.

MattEvans

It's because you are using an orthographic projection: in an orthographic projection, a higher depth doesn't make objects appear smaller.

See : http://www.songho.ca/opengl/gl_projectionmatrix.html

So, if you want this effect, use gluPerspective with a FOV (first parameter) of about 45 (90 is waaay too wide a field-of-view; it will make the 'viewer' seem infinitely small). You may well need to scale your objects to make them 'the right size' under this new projection.
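A sketch of that projection setup (windowX/windowY stand in for your window size; the near/far planes here are arbitrary):

glMatrixMode ( GL_PROJECTION );
glLoadIdentity ( );
gluPerspective ( 45.0, (double) windowX / (double) windowY, 0.1, 100.0 );
glMatrixMode ( GL_MODELVIEW );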

MattEvans

Matrix transforms are perhaps harder to understand initially than applying offsets and rotations directly to points, but IMO matrices are the 'right choice', because they make everything in the future easier. (E.g. to do anything more complicated, you either have to use matrices anyway, or do in longhand what matrices would do for you.)

What level would you say that you're at? What are you having trouble with at this moment/ what was the last thing you had trouble with?

MattEvans

What kind of 2D library? Graphics, or physics? or something else?

I like this book a lot; it says collision detection, but it also has a lengthy intro to many of the concepts of simulated 'space and time': http://www.amazon.com/exec/obidos/tg/detail/-/1558607323?tag=realtimecolli-20. It's 3D rather than 2D, but it's (a hell of) a lot easier to convert 3D stuff to 2D than the other way around.

But I don't have masses of books (in fact, I only really have that book, and this one, http://www.amazon.com/Calculus-Analytic-Geometry-George-Simmons/dp/0070576424, which is way too deep for what I tend to need), so perhaps there are more appropriate choices.

For basic 2D math, a 'middle-level' education textbook should probably suffice. In UK terms, an A-level (or equivalent) textbook should have enough material to understand most things you'd need w.r.t. 2D geometry, linear algebra, and integration/differentiation, and that's a good basis for doing a whole lot of stuff.

Often, the hardest thing to work out is where & how to actually apply what you know, and what extra you'll need to know in order to do x. If you can classify exactly what x is, searching for how x is usually done, reading that material, following references / researching what you don't know, and recursing tends to work quite well, and you'll pick up useful transferable stuff along the way.

A quick question, what sort of mathematical architecture are you using? By that I mean, when you think about your code, do …

MattEvans

It's easy enough to arbitrarily set the rotation point, e.g. this will work:

case 'E':
case 'e':
{
	//Rotation of the red square.
	glMatrixMode(GL_MODELVIEW); // note: GL_MODELVIEW, not the GL_MODELVIEW_MATRIX query enum
	glLoadMatrixd(redTransformationMatrix);
	glTranslated(200.0, 600.0, 0.0);
	glRotated(1.0,0.0, 0.0, 1.0);
	glTranslated(-200.0, -600.0, 0.0);
	glGetDoublev(GL_MODELVIEW_MATRIX, redTransformationMatrix);
	glutPostRedisplay();
	break;
}

The general rule is, translate to the origin, rotate, and then translate back again.

However, you are making your life a bit more difficult by offsetting the square outside of the matrix... that is, you are doing matrix transforms, and then offsetting by 200, 600 in that space. You'll find it easier to initialize your matrix to that 200, 600 offset, and then always draw the square at 0,0 in the space of your matrix transform.

That is, if you do this:

void init()
{
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
	[b]glTranslated (200.0, 600.0, 100.0);[/b]
	glGetDoublev(GL_MODELVIEW_MATRIX, redTransformationMatrix);
}
...
void display()
{
...
	glPushMatrix();
	glLoadMatrixd(redTransformationMatrix);
	glColor3d(1.0, 0.0, 0.0);
	[b]DrawSquare(0.0, 0.0, 0.0);[/b]
	glPopMatrix();
...
}

Then, you can just do this:

case 'E':
case 'e':
{
	//Rotation of the red square.
	glMatrixMode(GL_MODELVIEW); // note: GL_MODELVIEW, not the GL_MODELVIEW_MATRIX query enum
	glLoadMatrixd(redTransformationMatrix);
	glRotated(1.0,0.0, 0.0, 1.0);
	glGetDoublev(GL_MODELVIEW_MATRIX, redTransformationMatrix);
	glutPostRedisplay();
	break;
}

You will find, when you do get it rotating properly, that any subsequent translation will be in the local space of the previous transform. That is, what you consider to be moving in 'x' will actually be moving in the rotated x direction. If that's what you want, great, otherwise, you can keep transforming in global 'x' by reversing the translate multiplication order, like this:

case 'D':
case 'd':
{
	//Translation of the red …
MattEvans

You set up an orthographic projection matrix, and then, in the pushmatrix/popmatrix block, you load a matrix in model space; this 'undoes' your projection until the matrix is popped...

You could either replace this:

glPushMatrix();
glLoadMatrixd(redTransformationMatrix);
glColor3d(1.0, 0.0, 0.0);
DrawSquare(200.0, 600.0, 100.0);
glPopMatrix();

with this:

glPushMatrix();
[b]glMultMatrixd[/b](redTransformationMatrix);
glColor3d(1.0, 0.0, 0.0);
DrawSquare(200.0, 600.0, 100.0);
glPopMatrix();

Which will work in this case; or, much better, use the projection matrix as well.

The projection matrix is automatically multiplied onto the modelview matrix during vertex transform, and that's typically where glOrtho/glFrustum/etc. are used. E.g.:

//  VIEWPORT AND BUFFER CLEAR FIRST, BECAUSE THEY ARE MATRIX-MODE INDEPENDENT

glViewport(0, 0, windowX, windowY);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// SETUP ORTHOGRAPHIC PROJECTION

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, windowX, 0.0, windowY, -1.0, 1.0);

// NOW RESET THE MODELVIEW

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// ... ETC...
MattEvans
#include <vector>
#include <iostream>

int main ( void )
{
  std::vector < int > myvector;
  for ( int i = 0; i != 10; ++i ) {
    myvector.push_back ( i );
  }
  int * myarray = &( myvector [ 0 ] );
  for ( int i = 0; i != 10; ++i ) {
    std::cerr << myarray [ i ];
  }
  return 0;
}

Should always work, because a vector's storage is internally just a contiguous array. But if they'd wanted you to use it as an array, there'd be a member for accessing the array (like c_str in std::string). (Later C++ standards added exactly that: vector::data().)

MattEvans

Don't optimize too early. Use a vector if it's more convenient to do so. You'll probably find that you can have more points than any maximum you'd consider before the cost of occasionally resizing the underlying array ever becomes a problem.

If you do find it's easier to use an array (which you might, because you can send arrays to the GL directly with only a few API calls), then break the data up into chunks (separate arrays, either of N bytes, or broken at more natural divisions [e.g. vertices of faces with the same material, same number of points, etc]), and keep a vector of pointers to these (dynamically allocated) chunks.

I wouldn't use a linked list - the overhead per node is probably at least the size of the data per node (in the case of simple vertices), and you have no (fast) random access.

What I would probably do is either: use a vector and an (outside of the C++ standard) trick to use it as an array for sending to GL, or write a wrapper over arrays that basically does what std::vector does, which is NOT to redimension the physical array every time an item is added, but to double it in size when necessary; that tends to result in quite efficient behaviour in the long run.
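A rough sketch of that doubling strategy (illustrative only; a real wrapper would need copy/assignment handling too):

#include <cstddef> // for size_t

struct VertexArray {
	float * data;
	size_t size, capacity;
	VertexArray ( ) : data ( 0 ), size ( 0 ), capacity ( 0 ) { }
	~VertexArray ( ) { delete [ ] data; }
	void push_back ( float v ) {
		if ( size == capacity ) {
			// grow geometrically, not one element at a time
			capacity = capacity ? capacity * 2 : 16;
			float * bigger = new float [ capacity ];
			for ( size_t i = 0; i != size; ++i ) { bigger [ i ] = data [ i ]; }
			delete [ ] data;
			data = bigger;
		}
		data [ size++ ] = v;
	}
};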

MattEvans
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en" dir="ltr">

Minor bug with this: the doctype is HTML 4, so XML namespaces are illegal.

MattEvans

You need to initialize each sampler uniform to the correct texture unit. Usually you do that from host (C/C++/etc.) code; I'm not sure how Shader Builder does that, if at all.
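From host code, with the GL 2.0 API, that initialization looks something like this ('program' stands in for your linked program object):

glUseProgram ( program );
glUniform1i ( glGetUniformLocation ( program, "blurImage" ), 0 );     // read from texture unit 0
glUniform1i ( glGetUniformLocation ( program, "moreBlurImage" ), 1 ); // read from texture unit 1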

So, to test something for me, try to render just the first texture, and then try to render just the second texture. if reading from a sampler always seems to come from texture unit 0, then that's the problem, and your code would just output black ( because x - x == 0 ).

So, without changing anything else, try these two fragment shaders (just so that I can get a better idea of what's happening):

uniform sampler2D blurImage;
uniform sampler2D moreBlurImage;

void main()
{
float blurGray = texture2D(blurImage,gl_TexCoord[0].xy).r; 
gl_FragColor = vec4(vec3(blurGray),1.0);
}

And the second:

uniform sampler2D blurImage;
uniform sampler2D moreBlurImage;

void main()
{
float moreBlurGray = texture2D(moreBlurImage, gl_TexCoord[0].xy).r;
gl_FragColor = vec4(vec3(moreBlurGray),1.0);
}

Does texturing work for the first, the second, or both shaders?

More importantly, are both images identical?

MattEvans

Do you get a compile error? (Use glGetShaderInfoLog to check.)
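Something like this fragment fetches the log ('shader' stands in for your shader object id; needs <iostream>):

GLint ok = 0;
glGetShaderiv ( shader, GL_COMPILE_STATUS, &ok );
if ( !ok ) {
	char log [ 4096 ];
	GLsizei len = 0;
	glGetShaderInfoLog ( shader, sizeof ( log ), &len, log );
	std::cerr << "shader compile failed: " << log << std::endl;
}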

What code are you using to:

- generate, compile, and link the shader
- initialize the shader for each render pass
- render each pass

What do you see when rendering? Anything? Nothing? Not what you expected?

Have you tried a simpler example (e.g. checking that a simple single-texture fragment shader works)?

What's in the vertex shader (if anything)?

MattEvans

You can't really modify the .a file; it contains already-compiled code and symbols. You can, however, modify the source code and rebuild the library with whatever changes you need. Exactly how to do that will depend on what platform you're on and what changes you want to make, but downloading the source code (http://www.talula.demon.co.uk/allegro/wip.html) is probably the place to start.

MattEvans

Look at OpenGL too, if you want to do 3D... In fact, I'd recommend doing 2D in OpenGL as well, because SDL's drawing stuff isn't that great (it's good for getting a window to render OpenGL in, and it's good for input and event handling, and a host of other things via extensions [e.g. image loading, networking, sounds, fonts, etc.]). But all of that can also link up with OpenGL extremely well.

There are other routes to getting stuff done; e.g. there are a few multiplatform game engines that target Linux, e.g. Ogre (http://www.ogre3d.org/). That route gives you more tools upfront than starting from scratch does.

MattEvans

[quote]E.g SDL (a graphics library) is under the LGPL so you can link it with commercial closed source apps[/quote]

Yep, SDL is a godsend.

[quote]The bigger problem here is the fact that it probably won't be very popular because virtually no other countries apart from the USA, Japan and South Korea like baseball.[/quote]

Er, with the USA and Japan being about the biggest producers and consumers of computer games... =P

There aren't many (high profile) 'current gen' or even 'last gen' games on linux though, baseball or otherwise.

MattEvans

Probably because Linux isn't really seen as a gaming platform... But since it's being quite readily adopted as a home desktop platform right now, I think that's going to change soonish.

Also, it's perhaps not the most attractive platform for a totally money-driven games company to develop for, because:

- there's less of a guarantee that a piece of code will run the same across a large range of Linux machines (+ driver + hardware combinations) than there is that it will run the same on all Windows machines.
- there's no financial incentive for a company to release free games, so using certain potentially useful Linux libs (i.e. GPL stuff) is a problem.

But neither of these issues is actually that big: you can always expect some kind of lowest common denominator for hardware, and it's quite easy to avoid the GPL stuff.

MattEvans

Of course it is possible; there is necessarily always a projection from object to screen coords, and there is always a projection from screen coords to a subset of object coords on a plane parallel with the view plane (notice the difference). If you want a 1:1 mapping between screen and object coords, use an orthographic projection (which you are), and make sure that the args to glOrtho are the ACTUAL width and height of your viewport. Also remember that OpenGL puts 0,0 (in screen coords) in the bottom left rather than the top left, and 0,0 (in object coords) in the center of the screen... so, something like:

glMatrixMode ( GL_PROJECTION );
glLoadIdentity ( );
gluOrtho2D ( 0, w, 0, h );
glScalef ( 1, -1, 1 );
glTranslatef ( 0, -h, 0 );
glMatrixMode ( GL_MODELVIEW );
glLoadIdentity ( );
/* draw stuff here */

Now, x,y in the screen will be x,y,0 in object coords.

If you use a perspective view transform, you can use gluProject/gluUnProject to move between object-space and screen-space coords. You probably don't (although maybe you do) need to convert between screen and object coordinate systems, though; the only time you'd usually need to do that is if you want to be able to 'pick' (with a mouse or similar) some point on the card.
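E.g., a picking sketch with gluUnProject (mouseX/mouseY stand in for your window-space mouse coords; note the y flip, since GL's window origin is bottom-left):

GLdouble model [ 16 ], proj [ 16 ];
GLint view [ 4 ];
glGetDoublev ( GL_MODELVIEW_MATRIX, model );
glGetDoublev ( GL_PROJECTION_MATRIX, proj );
glGetIntegerv ( GL_VIEWPORT, view );
GLdouble ox, oy, oz;
gluUnProject ( mouseX, view [ 3 ] - mouseY, 0.0, model, proj, view, &ox, &oy, &oz );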

If you want correct relative positions, just work on the assumption that object units are e.g. millimeters, the relative position/sizes of things will always be correct, and you can then pre-scale the modelview matrix to …

MattEvans

Indeed.

But absolute positioning (I mean moving the displayed order of things while keeping the actual markup in a different order) is worse for a screenreader user than a (simple) table. E.g.:

<table>
<tr>
<td>this is all going to be read</td>
<td colspan="3">in the correct order</td>
</tr>
<tr>
<td>although, perhaps the rows, and</td>
<td>the columns might be announced</td>
<td>by some screenreaders</td>
<td>using certain settings</td>
</tr>
</table>

But...

<div>&nbsp;</div>
<div>in backwards order</div>
<div style="position:absolute;left:0;top:0;">this will probably be announced</div>

although, it would appear to be in the correct order to a sighted user.

MattEvans

That's a myth. Screenreaders aren't confused by tables, they just read them in a specific way. Weird markup order and other 'unusual' div constructs can confuse screenreaders just as much, sometimes more.

MattEvans

In your for loops, you are adding 2 to i and k. You probably want to add 1 to i and k: since you're trying to get 2*i and 2*k-1 (i.e. the evens and odds), it looks like you are skipping half of the points by adding 2 instead of 1. I can't see where (or if) you're performing the redundant multiplication in the code that you posted, but when I made that change, the result needed to be divided by 2 to be correct (it was correct before for ranges like 0-1, 0-10, etc., so if that's not the problem, what ranges ARE you trying?).

Second question: pass a function pointer as an argument. The signature of your function is long double ( long double ), so change your simps function to be:

long double simps (double begin, double end,long double h,double n, [b]long double ( *f ) ( long double )[/b] )

and that's actually all you need to do. You can now pass any function in place of cos, providing that the signature is exactly the same.

BTW, the signature for forward-declaring simps will change to:

long double simps(double,double,long double,double,long double(*)(long double));

Actually, I may as well just go ahead and post the modified version, all changes emboldened:

/*
 * Simprule.cpp
 *
 *  Created on: Mar 14, 2009
 *      Author: keith burghardt
 */
# include <iostream>
using namespace std;
#include <cmath>
#include<iomanip>
double begin,end,area,h;

long double [b]coswrapper[/b](long double x){
	return(//State …
MattEvans

Use nodelay ( [yourscreen], true );, apparently, to make getch asynchronous (i.e. it will return even if no key is pressed).
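E.g., an ncurses sketch using the default stdscr screen:

initscr ( );
nodelay ( stdscr, TRUE );
int ch = getch ( ); /* returns ERR immediately if no key is waiting */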

MattEvans

Brute force isn't too bad. You don't have to check every pair of pixels between the two sprites; you only have to check each pixel space to see if it's occupied twice, and that's not very costly (it's no more costly than looping over all of the pixels on the screen, and the calculation can be performed for all objects at the same time). This is what a z-buffered application (e.g. OpenGL or DirectX) does to determine occlusion, so it's not really 'inefficient'.

There are some optimizations you can make, though: for any candidate pair of objects, calculate the 2D minimum bounding rectangles of the two sprites; you can only possibly get a collision within the intersection of these two rectangles. Obviously, if the intersection is empty, you can return false straight away; otherwise, check the pixels in the intersection.
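A sketch of that rectangle test (assuming each sprite has a screen-space bounding box with left/right/top/bottom members):

int left   = ( a.left   > b.left   ) ? a.left   : b.left;
int right  = ( a.right  < b.right  ) ? a.right  : b.right;
int top    = ( a.top    > b.top    ) ? a.top    : b.top;
int bottom = ( a.bottom < b.bottom ) ? a.bottom : b.bottom;
if ( left >= right || top >= bottom ) {
	return false; /* empty intersection: no collision possible */
}
/* otherwise, only pixels in [left,right) x [top,bottom) need testing */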

Heuristically, for two 'convex' sprites, you're more likely to get a collision towards the (actual) centroid of the sprite, so check the pixels in the intersection rectangle from the 'outside' in, and finish the algorithm early if a collision is detected.

You can also group pixel states into pre-defined groups of 8/16/32/etc. pixels, and then it takes just a bitwise 'and' to check a whole group of pixels at a time. 16 is good because it's square: effectively you can test two 4x4 squares of pixels in one operation, rather than just testing 1 pair of pixels. Massaging your data into the correct …

MattEvans

I've not used Gamemaker myself, but I'm quite aware of the setup. I think you might benefit from looking at SDL (http://www.libsdl.org/) rather than curses: it's got the facility to create and work with a drawable surface (on any supported platform), it handles audio and input methods and other kinds of useful event processing, it handles image loading, etc., and it's pretty easy to get started with (somewhat easier than curses IMHO).

I develop on Linux also (Fedora 10). I use the KDevelop (http://www.kdevelop.org/) IDE. It has acceptable code completion (although it can get a bit confused sometimes) and acts as a frontend to the gdb debugger, and to the valgrind profiling suite.

I don't use the autocomplete/gdb integration myself: I don't use autocomplete because on a big project it can start to get a bit memory-intensive, and I don't use the gdb integration because it only does 'full project' debugging, and I usually need to debug a single target of the project at a time.

But anyway, it gets the job done, and I think for smaller projects (which would probably only have one executable target), the autocomplete+built in debugger would be quite acceptable.

If you don't find a complete IDE that works for you; you can use gdb (debugger) from the command line, or, use the ddd (http://www.gnu.org/software/ddd/) or kdbg (http://www.kdbg.org/screenshot.php) frontends to gdb.

MattEvans

Immediate follow-up: if the 'co-ordinates on screen' are just derived from some other properties of the entity, then there's probably no need to store them anywhere. E.g. (in OpenGL), actual screen co-ordinates are derived from object-space co-ordinates and a perspective transform, so each 'entity' only needs to keep its object-space co-ordinates, the perspective transform is a 'global', and no-one keeps the screen-space co-ordinates: they get thrown away at the end of the frame, and there's not much useful that can be done with them.

MattEvans

I can't think of an advantage of keeping coordinates outside of the entity, providing that you don't intend entities to be duplicated in the list. If you do intend that, it's better to split the entity into a flyweight with the position and other 'per-entity' properties and a 'resource' class with the duplicated data (which the entity holds a pointer/reference to).

But, if you're using curses, I'm guessing you don't have masses of (graphical) data associated with entities. =)

If you don't keep the co-ordinates in the class itself, you'll end up needing to give each entity its own 'id' (and a pointer to its owner) that it can use to query its position, if you ever do discover that an entity needs access to that information. If you have to do that, you might as well have put the co-ordinates in the class to begin with.

In short, if you put the co-ordinates in the entity class, then you already have access to them from the entity _and_ from the container. If you only have the co-ordinates outside, then you might find at some point need to move them inside the class.
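I.e., keeping them inside covers both access paths from the start (sketch; needs <vector>):

struct Entity {
	int x, y; // co-ordinates live in the entity itself
	// ...other per-entity state...
};

std::vector < Entity > entities; // the container still reaches positions via its entities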

So, there shouldn't really be any question, if the only thing that you're worried about is future flexibility.

MattEvans

I agree that using tables solely for spacing probably isn't the best way to go about things.

But, I firmly believe that using tables for creating robust related columns and rows of any type is 'the right thing to do' (i.e., not just tables of raw, sortable/headed, data): mostly because there is a lack of a viable alternative.

Obviously the 'ideal' is to have one HTML nodeset look completely different and still function for its original purpose simply by flipping a style... but that only really works in specific, quite simple settings.

MattEvans

Why do you want to never use tables and only use divs?

(BTW: no-one ever answers this with a decent -- rarely even an accurate -- answer. In fact, most people go and answer a completely different question; and if you don't have an answer to this question because it doesn't apply to you, then contending the original points on whatever grounds is arguing at cross-purposes, because I don't believe that MidiMagic is saying only use tables.)

In every other part of software development, heck, in almost every other field, period: people use the tool that best fits the job, and rarely think twice about it. There's no for-loop appreciators' club, or cult of the saucepan... although, come to think of it, there probably is...

The occasional "goto" never really harmed anyone; and likewise, the occasional table is sometimes both 'justifiable' and the 'best choice'.

Saucepans work well as long as you don't try to use them as for-loops; divs work well and are totally 'easy', as long as you don't try to make a table-like layout with them. If you do, then you suffer from the lack of one of the following information bridges:

- the one that keeps the column widths the same
- the one that keeps the row heights the same

You can hack in one of those, but not both. It should be evident from the specification of HTML and CSS that you can't.

there's a …

MattEvans

What library/code are you using to import/render the MDL? Because that's more than just OpenGL.

If the library you're using doesn't have the facility to load/render animation from MDL files, you'll have a lot of work to do: if working from scratch, you have to read the animation data (probably a list of times and updated vertex positions, or a list of times and matrices if the model uses 'skinned' animation), then, in each frame, interpolate either the vertices or the matrices and update the positions that you're rendering. This is not an easy task, and it's particularly difficult to make efficient, even if you've written the rest of the model-loading code (or even defined the entire model format) yourself and know exactly what data is involved.

MattEvans

Heh, that's the kind of error you don't see everyday.

Look at the arguments to specialFunc:

void specialFunc( int key, [b]int x[/b], int y)

Function argument scope beats global scope.
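E.g., renaming the parameters (the new names here are just placeholders) avoids the shadowing; in C++ you could also write ::x to reach the global explicitly:

void specialFunc ( int key, int mouseX, int mouseY )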

MattEvans

Are your transforms 'full 3D' or 'half 3D'? That is, when your character rotates, can they rotate over all axes, or just around an 'up' vector? And are you using matrices as the authoritative representation of the character's transform, or some scalars (typically x, y, z and rotation)?

MATRICES

If you're using transform matrices, the vector that points 'forward' in the character's space is one of the vectors in the upper 3x3 submatrix. The actual vector depends on the way you define 'up' and 'forward', and whether the vector is a row or a column of the matrix depends on whether you treat vectors as rows or columns. But, for example, in OpenGL it would usually be:

m00 m01 [b]m02[/b] m03
m10 m11 [b]m12[/b] m13
m20 m21 [b]m22[/b] m23
m30 m31 m32 m33

The usual 'forward' vector (z) is emboldened. If you extract 'forward' from the character's matrix and add it to the character's position, it will move the character in their own 'forward'.

I know for a fact that this is different in most DX setups, but it will be quite similar, most likely the row m20->m22, assuming that z is forward, which it isn't always.
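In code, for a column-major GL-style float m[16], that emboldened column sits at indices 8, 9 and 10; a 'walk forward' sketch (speed and px/py/pz stand in for your own variables):

float fx = m [ 8 ], fy = m [ 9 ], fz = m [ 10 ]; // local z axis (third column)
px += fx * speed;
py += fy * speed;
pz += fz * speed;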

SCALARS

If you're just using x, y, z and rotation scalars.... then assuming that positive z is 'up' and positive x is 'forward'... the forward direction vector is:

x = cos ( r )
y = sin ( r )

so,

player.x += cos ( r ) * speed; …
MattEvans

Looks like you have not called glEnable ( GL_DEPTH_TEST ). Otherwise, post code.

MattEvans

Midi,

[quote]Although you have designed by those rules it really isn't much of a design at all! It is really quite unattractive. I would really like to see a list of popular websites that follow your rules.... I can't find any at all.[/quote]

I found the content on Midi's website quite interesting, and there was way more content than I had leisure to read. Things were also laid out quite intuitively.

Good content, being able to find things, and not having to leave because a site is totally Flash-centric (or because a feature -- like navigation -- won't work in my browser) beats good design every time IMHO. Quite often, the sites I find most useful are either minimalist or practically text only. If a source of information, e.g. some kind of documentation, is available either intermixed within a flashy distracting scenescape or as plain black-on-white linked pages, guess what I always pick?

You can follow most of this advice without going completely minimalist. Often it's just a case of getting a good graphic designer for backgrounds/layout, and laying off the dizzy gimmicks. Take any popular site as an example: if it has any of these purported 'bad features', could it work without them? Quite often it could, and quite often it wouldn't take a complete -- or even non-trivial -- redesign. Unfortunately, for the most part, subtlety seems to be a forgotten art these days.

I won't claim to follow these rules myself. When I …

MattEvans

Possibly. If I remember correctly, GLUT (and SDL) are the same: calling openwindow (or equivalent) provides the necessary 'startup arguments' to OpenGL (i.e. buffer sizes), so it's only then that the state machine can really be set up, I guess =)

MattEvans

It seems that GLFW doesn't properly initialize the OpenGL state machine until you call glfwOpenWindow.

Move the call to glEnable ( GL_DEPTH_TEST ); to somewhere after the call to glfwOpenWindow. It should work then.

You don't need to call glEnable ( GL_DEPTH_BUFFER_BIT ); at best it's meaningless, and at worst it's trying to enable something random.
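I.e., an order that works (a GLFW 2.x-era sketch; the buffer-bit arguments are just an example configuration):

glfwInit ( );
glfwOpenWindow ( 640, 480, 8, 8, 8, 8, 24, 0, GLFW_WINDOW ); // GL context exists from here
glEnable ( GL_DEPTH_TEST ); // state machine calls are now valid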

MattEvans

Works ok for me... Try this:

const double radius = 5.0;

  glDisable ( GL_LIGHTING );
  glColor3d ( 1.0, 1.0, 1.0 );

  glBegin ( GL_LINE_STRIP );

  for ( double y = -radius; y <= radius; y+= 0.1 ) {
    double x = sqrt ( ( radius * radius ) - ( y * y ) );
    glVertex2d ( x, y );
  }

  glEnd ( );
MattEvans

Some things:

- Check the return value of LoadBitmap (i.e. does the bitmap load at all?).

- You are loading the file "art" (without a file extension). If you want to fopen the "art.bmp" file, you have to say so. Also check that the bitmap file is in the application's working directory, or use an absolute path to the file instead.

- You should really call glGenTextures to get a new texture index. All texture indices might work without doing this on some OpenGL implementations, but you shouldn't rely on that being the case.

- You're 'not allowed' to call glDisable within a glBegin/glEnd block.

MattEvans

The equation you wrote is a 2D equality; via rearrangement you can make an equation in terms of either 'x' or 'y', e.g.:

x = sqrt ( r^2 - y^2 )

So, you only need to vary 'y'; plug in any such value of 'y' and you'll get one of the two values of x.

As rashakil said, you really don't need to name five and two. If five is meant to be the radius, name it radius, since that's a sensible name.

As for using floating-point pow ( x, y ) when 'y' is 2 or 2.0: don't do that. It tends to run much slower than just doing ( x * x ). I usually have this defined somewhere:

template < typename T > inline T square ( T x )
{ return ( x * x ); }
MattEvans

From main (after calling glutInit and before calling glutMainLoop), call your bitmap load function and put the return value into a global int variable (e.g. texture1); repeat for however many textures you want (i.e. texture2, texture3, etc.).

When you want to 'switch a texture on', call glBindTexture ( GL_TEXTURE_2D, textureX ). You must do this OUTSIDE of a glBegin/glEnd block.

For every vertex, also send a texture co-ordinate. This is done the same way as you send normals, but you will nearly always want to do this once-per-vertex, e.g.:

[b]glBindTexture ( GL_TEXTURE_2D, texture1 );[/b]
glBegin ( GL_QUADS );
[b]glTexCoord2f ( 0, 0 );[/b]
glVertex3f(-1.0f,-1.0f,0.0f);
[b]glTexCoord2f ( 0, 1 );[/b]
glVertex3f(1.0f,-1.0f,0.0f);
[b]glTexCoord2f ( 1, 1 );[/b]
glVertex3f(1.0f,1.0f,0.0f);
[b]glTexCoord2f ( 1, 0 );[/b]
glVertex3f(-1.0f,1.0f,0.0f);
glEnd ( );

This will draw the entire texture on a single polygon. You don't need to call glEnd yet if you want to keep using the same texture for all of the faces.

MattEvans

Well, when you call glNormal(x,y,z), the current normal is set to (x,y,z). Any subsequent call to glVertex uses the current normal. So:

glNormal3f(0,0,1);
glVertex3f(-0.5f,-0.2f,0.0f);
glVertex3f(0.5f,-0.2f,0.0f);
glVertex3f(0.5f,0.2f,0.0f);
glVertex3f(-0.5f,0.2f,0.0f);

is equivalent to:

glNormal3f(0,0,1);
glVertex3f(-0.5f,-0.2f,0.0f);
glNormal3f(0,0,1);
glVertex3f(0.5f,-0.2f,0.0f);
glNormal3f(0,0,1);
glVertex3f(0.5f,0.2f,0.0f);
glNormal3f(0,0,1);
glVertex3f(-0.5f,0.2f,0.0f);

With regards to what to set the normal to, it depends on the type of edge you're trying to represent. For hard edges, you set the normals to the plane vectors of the polygons being drawn. For soft edges, you can set the normals to the (normalized) average of the plane vectors of every polygon that 'uses' the vertex. An alternative is to set the normal to the normalized position of the point in object space... this works for 'almost' spherical objects.

But basically, for correct lighting, the normals should point 'outwards' from the object. How you define 'outwards' mostly affects how the edges appear.

The normals I used before should be correct (they are 'hard' normals): if you change them to be 'soft', you'll lose the sharp appearance of the edges of the box. You can try another normal-generation strategy, but the best way to get good shading of large, flat surfaces is always gonna be to tessellate them more densely (or use a fragment shader).

Here is code for the second method I suggested ( using normalized position as the normal ). This only works correctly in object space BTW, and only when the object is roughly centered about 0,0,0 in object space.

/* need this …
MattEvans

Some small changes ( in bold ):

[...]

 void disp(void)
 {
	glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
	glPushMatrix();
	glTranslatef(0.0f,0.0f,-2.0f);

	GLfloat amb[] = {0.2f,0.2f,0.2f,1.0f};
	glLightModelfv(GL_LIGHT_MODEL_AMBIENT,amb);

	GLfloat dif[] = {0.5f,0.5f,0.0f,1.0f};
	GLfloat difPos[] = {-5.0f,1.5f,8.0f,1.0f};
	glLightfv(GL_LIGHT0,GL_DIFFUSE,dif);
	glLightfv(GL_LIGHT0,GL_POSITION,difPos);

	GLfloat dirColor[] = {0.5f,0.2f,0.0f,1.0f};
	GLfloat dirPos[] = {1.0f,0.4f,8.0f,0.0f};
	glLightfv(GL_LIGHT1,GL_DIFFUSE,dirColor);
	glLightfv(GL_LIGHT1,GL_POSITION,dirPos);

	GLfloat spec[] ={0.5f,0.5f,0.5f,1.0f};
	GLfloat position[] = {1.0f,0.5f,0.0f,0.0f};
	[b]/* You were overwriting GL_LIGHT1 here, I guess you meant GL_LIGHT2?*/
	glLightfv(GL_LIGHT2,GL_SPECULAR,spec);
	glLightfv(GL_LIGHT2,GL_POSITION,position);[/b]
	//glLightfv(GL_LIGHT0,GL_AMBIENT,amb);
	

	glColor3f(1.0f,1.0f,0.0f);
	//GLfloat mColor[] = {0.0f,1.0f,0.0f,1.0f};
	//glMaterialfv(GL_FRONT,GL_AMBIENT_AND_DIFFUSE,mColor);
	
	glRotatef(Yrot,0.0f,1.0f,0.0f);
	glBegin(GL_QUADS);
			[b]/* Most importantly; you need to add normals for each face.
			Otherwise, the lighting calculation doesn't work correctly.*/[/b]
			//FRONT
			[b]glNormal3f(0,0,1);[/b]
			glVertex3f(-0.5f,-0.2f,0.0f);
			glVertex3f(0.5f,-0.2f,0.0f);
			glVertex3f(0.5f,0.2f,0.0f);
			glVertex3f(-0.5f,0.2f,0.0f);
			
			//RIGHT SIDE
			[b]glNormal3f(1,0,0);[/b]
			glVertex3f(0.5f,-0.2f,0.0f);
			glVertex3f(0.5f,-0.2f,-0.5f);
			glVertex3f(0.5f,0.2f,-0.5f);
			glVertex3f(0.5f,0.2f,0.0f);
			
			//BACK SIDE
			[b]glNormal3f(0,0,-1);[/b]
			glVertex3f(-0.5f,-0.2f,-0.5f);
			glVertex3f(0.5f,-0.2f,-0.5f);
			glVertex3f(0.5f,0.2f,-0.5f);
			glVertex3f(-0.5f,0.2f,-0.5f);

			//LEFT SIDE
			[b]glNormal3f(-1,0,0);[/b]
			glVertex3f(-0.5f,-0.2f,-0.5f);
			glVertex3f(-0.5f,-0.2f,0.0f);
			glVertex3f(-0.5f,0.2f,0.0f);
			glVertex3f(-0.5f,0.2f,-0.5f);
	glEnd();
	glPopMatrix();
	glutSwapBuffers();
 

 }

[...]

You may want to look at : http://www.falloutsoftware.com/tutorials/gl/gl8.htm.

Also, even if you make these changes, the lighting won't look 'great', because in the default fixed-function pipeline, lighting is calculated only at vertices and interpolated across the surfaces (see http://en.wikipedia.org/wiki/Gouraud_shading for an idea of what I mean). You can solve this problem as demonstrated on that page, i.e. by densely tessellating polygons, or you can try out fragment shaders (although that's a whole new can of worms).

MattEvans

Lines do not have ends: although a line is often uniquely defined by two points coincident with the line, a 'line' is correctly considered to be infinite in both directions.

Line segments are lines with ends... and a 'one-ended line' is usually termed either a ray or a half-line (depending on whether or not it's considered to be directed).

An axis is generally just a direction vector - i.e. an axis has no position, only a (usually normalized) direction.

MattEpp's suggestion is roughly correct: determine the angle of object2 relative to the reference angle of object1 (the reference angle is at 0 degrees), and subtract the current angle of object1 to give the angle to rotate by (which we'll call 'delta').

However, some considerations: arctan ignores the quadrant (in fact, the input to arctan makes finding the quadrant impossible), so you can't differentiate between the output of arctan for certain pairs of (non-congruent-modulo-360) angles. That's basically bad; so use arctan2 (the C function is atan2). Remember, the output is radians, NOT degrees.

You also need to normalize the input to atan/atan2, else the result is meaningless.

So, in C:

double obj1x, obj1y, obj2x, obj2y; /* positions of objects 1 & 2 */
double obj1rot; /* rotation of object1*/

double relx = obj2x - obj1x;
double rely = obj2y - obj1y;

/* length of relative position (i.e. distance) */
double reldis = sqrt ( ( relx * relx ) + ( …
MattEvans

I'm guessing that you have something like:

glTranslatef ( x, y, z );
glRotatef ( t, x, y, z );
....
glLightfv ( GL_POSITION, pos );
....
[draw object]

But see: http://www.opengl.org/sdk/docs/man/xhtml/glLight.xml, specifically:

The position is transformed by the modelview matrix when glLight is called (just as if it were a point), and it is stored in eye coordinates[...]

So, change your code to:

glLightfv ( GL_POSITION, pos );
...
glTranslatef ( x, y, z );
glRotatef ( t, x, y, z );
....
[draw object]

You can also glPushMatrix and transform some, then position the light (coords are obviously in object space), and then glPopMatrix and do the actual drawing.

MattEvans

If the universe is deterministic, it'd be possible to simulate forward progression of events and have it exactly correspond to the actual progression of events. If it's deterministic and time-invertible, then it'd be possible to simulate backward progression of events too. Provided, that is, that the simulation exactly corresponds to the entire universe, in every detail.

This isn't time travel for a person, then, but creating an identical copy/'image' of the universe and running it backwards and forwards at whatever speed desired. Of course, such a system would necessarily take all the matter and energy in the universe plus some* to implement.

*in fact, multiply some.

So, if my prerequisites hold, then it's theoretically possible to at least gather data about the past and present; just totally infeasible.

Paradoxically, if the universe truly is deterministic, it wouldn't be possible to act any differently (i.e. to 'change the future') regardless of the outcome of such a simulation, since doing anything 'differently' would violate that determinism.

MattEvans

Why don't you want to use tables?