I am writing an algorithm that generates buildings using shaders and vertex buffers. Let's not go deep into it. What I need is this: how do I implement Field of Vision (FOV) in such a scenario? How can I restrict the user's view? Hope my question is clear enough. Waiting for a reply. Thanks in advance.


Surely this depends on how you have set up your clipping volume. In OpenGL, for example, you might call:

gluPerspective(35.0f, fAspect, 1.0f, 500.0f)

If you lowered the value of the last parameter (the far clipping distance) in this instance, it would appear as though you cannot see as far into the screen.
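To make that concrete, here is a minimal sketch (not part of the OpenGL or D3D API; the helper name is made up for illustration) of the idea that a point's view-space depth must fall between the near and far distances passed to gluPerspective:

```cpp
// Hypothetical helper, not a real API call: a point at view-space depth z
// is only visible if it lies between the near and far clipping distances.
// Lowering the far value shrinks how far into the scene you can see.
bool withinDepthRange(float z, float zNear, float zFar) {
    return z >= zNear && z <= zFar;
}
```

With the far plane at 500 a point at depth 400 passes the check; lower the far plane to 300 and it no longer does.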

Hi Caged,

Thanks for your reply. Is it the far plane and near plane concept in D3D? Is that so? I am not that much into OpenGL; I am working on DirectX.


Well, I'm not familiar with DirectX myself (which is why I used an OpenGL reference :) ), but looking briefly into D3D I would say yes to your question(s). You could think of your FOV as a box. The faces of the box (top, bottom, left, right, near and far) are clipping planes that define what you can and can't see. Everything inside the box (the clipping planes) is visible and everything outside isn't. As you move the box (the camera/FOV), some new objects will fall into it and others will fall out.
If you wish to be able to 'see further', you need to 'stretch' the box by changing the boundaries that define it.
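As a rough sketch of the box idea (a real view frustum isn't axis-aligned, so this is a simplified stand-in with made-up names, not a D3D structure):

```cpp
// Simplified, hypothetical stand-in for the view volume: a real perspective
// frustum is a truncated pyramid, but an axis-aligned box shows the idea.
struct ViewBox {
    float minX, maxX, minY, maxY, nearZ, farZ;
};

// An object is potentially visible only if it lies inside all six planes.
bool insideBox(const ViewBox& b, float x, float y, float z) {
    return x >= b.minX && x <= b.maxX &&
           y >= b.minY && y <= b.maxY &&
           z >= b.nearZ && z <= b.farZ;
}

// 'Stretching' the box to see further is just pushing its far plane out.
void stretchFar(ViewBox& b, float newFarZ) {
    b.farZ = newFarZ;
}
```

An object at depth 600 is outside a box whose far plane sits at 500; stretch the far plane to 700 and it falls inside.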

Thanks man, but I have one more question. Since I am going to generate buildings procedurally, up to what point should I generate so that the user sees the buildings fully? Is that up to the far plane? Won't that use too many resources? Any ideas on how to optimize it?

OK, so basically you can create as many buildings as you want. The way it usually works is that whatever is inside your FOV is rendered (drawn to screen), and therefore requires processing, while anything outside the FOV clipping planes won't be rendered. This means the total number of buildings you create isn't all that important; instead, the more buildings that are currently visible (within the FOV), the more processing is required. If you could currently see only two buildings, not much processing would be needed; if you could see two hundred, much more would be.
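The point that per-frame cost tracks visible buildings rather than total buildings can be sketched like this (the `Building` struct and function are illustrative, not from any graphics API):

```cpp
#include <vector>

// Hypothetical: each building reduced to its depth from the camera.
struct Building { float distance; };

// Only buildings within the far plane would be rendered, so per-frame
// cost scales with how many are visible, not how many exist in total.
int countRenderable(const std::vector<Building>& all, float farPlane) {
    int visible = 0;
    for (const Building& b : all) {
        if (b.distance <= farPlane) {
            ++visible;  // only these would cost processing each frame
        }
    }
    return visible;
}
```

Out of three buildings at depths 100, 400 and 900, only the first two fall within a far plane at 500, so only they contribute rendering work.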

There are other factors to consider, however. For example, the more building data you store, the more memory is required to hold it.
I also imagine that there is some functionality in DirectX that allows depth testing (if it isn't done automatically). Depth testing is usually enabled so that wherever objects 'overlap', only the nearest object is completely drawn. This reflects human vision, because obviously we cannot see through objects. If a wall were in front of part of a building, you would see the wall and not what's behind it.
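Here is a minimal sketch of what a depth test does per pixel (illustrative only, not the D3D z-buffer API; names are made up):

```cpp
// Hypothetical per-pixel state: the nearest depth seen so far and its colour.
struct Pixel { float depth; int color; };

// A new fragment overwrites the stored pixel only if it is nearer to the
// camera, so the closest surface always wins.
void depthTestWrite(Pixel& p, float fragDepth, int fragColor) {
    if (fragDepth < p.depth) {
        p.depth = fragDepth;
        p.color = fragColor;
    }
}
```

Drawing a building and then a wall in front of it leaves the wall's colour in the pixel, and redrawing the building afterwards changes nothing, regardless of draw order.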

Last but not least, if you won't be 'moving around the scene', i.e. changing your field of vision (imagine walking around a virtual 3D environment), then yes, the far clipping plane would be the boundary for creating your buildings, because you won't ever 'move' to see what is beyond it.
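For a fixed camera, the generation loop can simply stop at the far plane; a sketch under that assumption (spacing, function name and the idea of one building per depth step are all made up for illustration):

```cpp
#include <vector>

// Hypothetical sketch: with a fixed camera, generating buildings beyond
// the far clipping plane is wasted work, so stop at that distance.
std::vector<float> generateBuildingDepths(float spacing, float farPlane) {
    std::vector<float> depths;
    for (float z = spacing; z <= farPlane; z += spacing) {
        depths.push_back(z);  // one building every `spacing` units
    }
    return depths;
}
```

With a far plane at 500 and one building every 100 units, five buildings are generated and the last sits exactly on the far plane.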

Hope this helps :)

Thanks for the reply, man. By the way, you are right: in DirectX we have the option of z-buffer testing for depth. And yeah, in my prototype we don't get to move around the scene, so we just have to take care of the buildings that have to be rendered within the far plane. Thanks again for the reply, mate.

