Viewing topic in text mode - Chinese XML Forum - professional XML technical discussion board (http://bbs.xml.org.cn/index.asp) -- 『 C/C++ Programming 』 (http://bbs.xml.org.cn/list.asp?boardid=61) ---- [Recommended] NeHe OpenGL Tutorials (Chinese/English with VC++ source) Lesson 41 - Lesson 42 (http://bbs.xml.org.cn/dispbbs.asp?boardid=61&rootid=&id=54707)
-- Author: 一分之千 -- Posted: 10/31/2007 8:28:00 PM -- [Recommended] NeHe OpenGL Tutorials (Chinese/English with VC++ source) Lesson 41 - Lesson 42
Lesson 41

Bind a fog coordinate to each vertex and you can roam through the fog. Give it a try!

#include <windows.h>
#include "NeHeGL.h"

#pragma comment( lib, "opengl32.lib" )

GL_Window* g_window;
GLfloat fogColor[4] = {0.6f, 0.3f, 0.0f, 1.0f};                  // Fog color

// Variables needed for FogCoordfEXT
typedef void (APIENTRY * PFNGLFOGCOORDFEXTPROC) (GLfloat coord); // Declare the fog coordinate function prototype
PFNGLFOGCOORDFEXTPROC glFogCoordfEXT = NULL;                     // Set the fog coordinate function pointer to NULL

GLuint texture[1];                                               // Texture

int Extension_Init()                                             // Get the extension string
    if (!strstr(glextstring, Extension_Name))                    // Check whether the extension we want is present
    free(glextstring);                                           // Free the allocated memory
    // Get the function pointer
    return TRUE;

BOOL Initialize (GL_Window* window, Keys* keys)                  // Initialization
    // Initialize the extension
    if (!BuildTexture("data/wall.bmp", texture[0]))              // Build the texture
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_FOG);
    camz = -19.0f;
    return TRUE;

void Draw (void)
    glTranslatef(0.0f, 0.0f, camz);
    glBegin(GL_QUADS);                                           // Back wall
    glBegin(GL_QUADS);                                           // Floor
    glBegin(GL_QUADS);                                           // Ceiling
    glBegin(GL_QUADS);                                           // Right wall
    glBegin(GL_QUADS);                                           // Left wall
    glFlush ();
-- Author: 一分之千 -- Posted: 10/31/2007 8:30:00 PM --

Lesson 41

Welcome to another fun-filled tutorial. This time I will attempt to explain Volumetric Fog using the glFogCoordf extension. In order to run this demo, your video card must support the "GL_EXT_fog_coord" extension. If you are not sure whether your card supports this extension, you have two options: 1) download the VC++ source code and see if it runs; 2) download lesson 24 and scroll through the list of extensions supported by your video card.

This tutorial will introduce you to the NeHe IPicture code, which is capable of loading BMP, EMF, GIF, ICO, JPG and WMF files from your computer or a web page. You will also learn how to use the "GL_EXT_fog_coord" extension to create some really cool looking Volumetric Fog (fog that can float in a confined space without affecting the rest of the scene).

If this tutorial does not work on your machine, the first thing you should do is make sure you have the latest video driver installed. If you have the latest driver and the demo still does not work, you might want to purchase a new video card. A low-end GeForce 2 will work just fine and should not cost all that much. If your card doesn't support the fog extension, who's to say what other extensions it will not support?

For those of you that can't run the demo and feel excluded, keep the following in mind: every single day I get at least one email requesting a new tutorial. Many of the tutorials requested are already online! People don't bother reading what is already online and end up skipping over the topic they are most interested in. Other tutorials are too complex and would require weeks' worth of programming on my end. Finally, there are the tutorials that I could write, but usually avoid because I know they will not run on all cards. Now that cards such as the GeForce are cheap enough that anyone with an allowance could afford one, I can no longer justify not writing these tutorials.
Truthfully, if your video card only supports basic extensions, you are missing out! And if I continue to skip over topics such as extensions, the tutorials will lag behind! With that said... let's attack some code!!!

The code starts off very similar to the old basecode, and almost identical to the new NeHeGL basecode. The only difference is the extra line of code to include the OLECTL header file. This header must be included if you want the IPicture code to function. If you exclude this line, you will get errors when trying to use IPicture, OleLoadPicturePath and IID_IPicture. Just like the NeHeGL basecode, we use #pragma comment( lib, ... ) to automatically include the required library files! Notice we no longer need to include the glaux library (I'm sure many of you are cheering right now).

The next three lines of code check to see if CDS_FULLSCREEN is defined. If it is not (which it isn't in most compilers), we give it a value of 4. I know many of you have emailed me to ask why you get errors when trying to compile code using CDS_FULLSCREEN in Dev-C++. Include these three lines and you will not get the error!

#include <windows.h>                           // Header File For Windows
#include "NeHeGL.h"                            // Header File For NeHeGL

#pragma comment( lib, "opengl32.lib" )         // Search For OpenGL32.lib While Linking

#ifndef CDS_FULLSCREEN                         // CDS_FULLSCREEN Is Not Defined By Some Compilers
#define CDS_FULLSCREEN 4                       // So We Define It With A Value Of 4
#endif

GL_Window* g_window;                           // Window Structure

The floating point variable camz will be used later in the code to position our camera inside a long and dark hallway! We will move forwards and backwards through the hallway by translating on the Z-axis before we draw the hallway.

// User Defined Variables

To use the function glFogCoordfEXT we need to declare a function prototype typedef that matches the extension's entry point. Sounds complex, but it is not all that bad. In English: we need to tell our program the number of parameters and the type of each parameter accepted by the function glFogCoordfEXT. In this case...
we are passing one parameter to this function and it is a floating point value (a coordinate).

Next we have to declare a global variable of the type of the function prototype typedef. This is the first step to creating our new function (glFogCoordfEXT). It is global so that we can use the command anywhere in our code. The name we use should match the actual extension name exactly. The actual extension name is glFogCoordfEXT and the name we use is also glFogCoordfEXT. Once we use wglGetProcAddress to assign the function variable the address of the OpenGL driver's extension function, we can call glFogCoordfEXT as if it were a normal function. More on this later! The last line prepares things for our single texture.

So what we have so far: we know that PFNGLFOGCOORDFEXTPROC takes one floating point value (GLfloat coord). Because glFogCoordfEXT is of type PFNGLFOGCOORDFEXTPROC, it's safe to say glFogCoordfEXT takes one floating point value, leaving us with glFogCoordfEXT(GLfloat coord). Our function is declared, but will not do anything because glFogCoordfEXT is NULL at the moment (we still need to attach glFogCoordfEXT to the address of the OpenGL driver's extension function).

I really hope that all makes sense... it's very simple when you already know how it works, but describing it is extremely difficult (at least for me it is). If anyone would like to rewrite this section of text using simple, non-complicated wording, please send me an email! The only way I could explain it better is through images, and at the moment I am in a rush to get this tutorial online!

// Variables Necessary For FogCoordfEXT
typedef void (APIENTRY * PFNGLFOGCOORDFEXTPROC) (GLfloat coord);  // Declare Function Prototype
PFNGLFOGCOORDFEXTPROC glFogCoordfEXT = NULL;                      // Our glFogCoordfEXT Function

GLuint texture[1];                                                // One Texture (For The Walls)

This function requires a pathname (path to the actual image we want to load, either a filename or a web URL) and a texture ID (for example ...
texture[0]).

We need to create a device context for our temporary bitmap. We also need a place to store the bitmap data (hbmpTemp), a connection to the IPicture interface, variables to store the path (file or URL), two variables to store the image width, and two variables to store the image height. lWidth and lHeight store the actual image width and height. lWidthPixels and lHeightPixels store the width and height in pixels, adjusted to fit the video card's maximum texture size. The maximum texture size will be stored in glMaxTexDim.

int BuildTexture(char *szPathName, GLuint &texid)      // Load Image And Convert To A Texture

If the filename does not contain a URL, we get the working directory. If you had the demo saved to C:\wow\lesson41 and you tried to load data\wall.bmp, the program needs to know the full path to the wall.bmp file, not just that the bmp file is saved in a folder called data. GetCurrentDirectory will find the current path: the location that has both the .exe and the data folder. If the .exe was stored at c:\wow\lesson41, the working directory would return "c:\wow\lesson41". We need to add \\ to the end of the working directory along with data\wall.bmp. The \\ represents a single \. So if we put it all together we end up with c:\wow\lesson41 plus a \ plus data\wall.bmp, or c:\wow\lesson41\data\wall.bmp. Make sense?

if (strstr(szPathName, "http://"))                     // If PathName Contains http:// Then...

CP_ACP means ANSI code page. The second parameter specifies the handling of unmapped characters (in the code below we ignore this parameter). szPath is the string to be converted. The fourth parameter is the length of that string; if this value is set to -1, the string is assumed to be NULL terminated (which it is). wszPath is where the converted wide-character string will be stored and MAX_PATH is the maximum size of our file path. After converting the path to Unicode, we attempt to load the image using OleLoadPicturePath.
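The path concatenation described above can be sketched with plain string handling. This is an illustration only; in the real tutorial code GetCurrentDirectory supplies the directory, and the helper name below is mine:

```cpp
#include <string>

// Build a full path from a working directory and a relative file name.
// "\\" in source code is a single backslash character.
std::string buildFullPath(const std::string &workingDir, const std::string &relative)
{
    return workingDir + "\\" + relative;
}
```

With the example directory from the text, buildFullPath("c:\\wow\\lesson41", "data\\wall.bmp") yields c:\wow\lesson41\data\wall.bmp.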
If everything goes well, pPicture will point to the image data and the result code will be stored in hr. If loading fails, the program will exit.

MultiByteToWideChar(CP_ACP, 0, szPath, -1, wszPath, MAX_PATH);  // Convert From ASCII To Unicode
if(FAILED(hr))                                                  // If Loading Failed
hdcTemp = CreateCompatibleDC(GetDC(0));                         // Create The Windows Compatible Device Context

On to the code... we use glGetIntegerv(...) to get the maximum texture dimension (256, 512, 1024, etc.) supported by the user's video card. We then check what the actual image width is. pPicture->get_Width(&lWidth) gets the image's width. We use some fancy math to convert the image width to pixels; the result is stored in lWidthPixels. We do the same for the height: we get the image height from pPicture and store the pixel value in lHeightPixels.

glGetIntegerv(GL_MAX_TEXTURE_SIZE, &glMaxTexDim);               // Get Maximum Texture Size Supported
pPicture->get_Width(&lWidth);                                   // Get IPicture Width (Convert To Pixels)

If the image width in pixels is less than the maximum width supported, we resize the image to a power of two based on the current image width in pixels. We add 0.5f so that the image is always made bigger if it's closer to the next size up. For example, if our image width was 400 and the video card supported a maximum width of 512, it would be better to make the width 512. If we made the width 256, the image would lose a lot of its detail. If the image size is larger than the maximum width supported by the video card, we set the image width to the maximum texture size supported. We do the same for the image height. The final image width and height will be stored in lWidthPixels and lHeightPixels.

// Resize Image To Closest Power Of Two
if (lHeightPixels <= glMaxTexDim)                               // Is Image Height Within The Card's Limit?

// Create A Temporary Bitmap
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);                 // Set Structure Size

hdcTemp is our temporary device context. bi is our bitmap info data (header information).
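The "closest power of two" math with the 0.5 rounding bias described above can be sketched as follows. This is my reconstruction of the idea, not NeHe's exact lines:

```cpp
#include <cmath>

// Round a pixel dimension to the nearest power of two, preferring the
// larger size when the value is closer to it (the +0.5 bias), then
// clamp to the card's maximum texture dimension.
long closestPowerOfTwo(long pixels, long glMaxTexDim)
{
    if (pixels > glMaxTexDim)                  // larger than the card allows?
        return glMaxTexDim;                    // clamp to the maximum
    double exponent = std::log((double)pixels) / std::log(2.0) + 0.5;
    return (long)std::pow(2.0, (double)(long)exponent);
}
```

For the example in the text, a 400-pixel width on a card that allows 512 or more rounds up to 512 rather than down to 256.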
DIB_RGB_COLORS tells our program that we want to store RGB data, not indexes into a logical palette (each pixel will have a red, green and blue value). pBits is where the image data will be stored (it points to the image data). The last two parameters can be ignored.

If for any reason the program was unable to create a temporary bitmap, we clean things up and return false (which exits the program). If things go as planned, we end up with a temporary bitmap. We use SelectObject to attach the bitmap to the temporary device context.

// Creating A Bitmap This Way Allows Us To Specify Color Depth And Gives Us Immediate Access To The Bits
if(!hbmpTemp)                                  // Did Creation Fail?
SelectObject(hdcTemp, hbmpTemp);               // Select Our Temp Bitmap Object Into Our Temp DC

hdcTemp is our temporary device context. The first two parameters after hdcTemp are the horizontal and vertical offsets (the number of blank pixels to the left and from the top). We want the image to fill the entire bitmap, so we select 0 for the horizontal offset and 0 for the vertical offset. The fourth parameter is the horizontal dimension of the destination bitmap and the fifth parameter is the vertical dimension. These parameters control how much the image is stretched or compressed to fit the dimensions we want.

The next parameter (0) is the horizontal offset we want to read the source data from. We draw from left to right, so the offset is 0. This will make sense once you see what we do with the vertical offset (hopefully). The lHeight parameter is the vertical offset. We want to read the data from the bottom of the source image to the top. By using an offset of lHeight, we move to the very bottom of the source image. lWidth is the amount to copy from the source picture horizontally; we want to copy all of the data in the source image from left to right, and lWidth covers all of it. The second-last parameter is a little different: it's a negative value, negative lHeight to be exact.
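The vertical flip that the negative lHeight gives StretchBlt can be illustrated with a plain row-by-row copy. This is a sketch of the effect, not the Win32 call itself:

```cpp
#include <vector>
#include <cstring>

// Copy an image row by row, reading from the bottom up, so the
// destination comes out vertically flipped.
std::vector<unsigned char> flipVertically(const std::vector<unsigned char> &src,
                                          int width, int height, int bytesPerPixel)
{
    std::vector<unsigned char> dst(src.size());
    int stride = width * bytesPerPixel;              // bytes per row
    for (int y = 0; y < height; ++y)
        std::memcpy(&dst[y * stride],
                    &src[(height - 1 - y) * stride], // read from the bottom up
                    stride);
    return dst;
}
```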
What this means is that we want to copy all of the data vertically, but we want to start copying from the bottom to the top. That way the image is flipped as it's copied to the destination bitmap. The last parameter is not used.

// Render The IPicture On To The Bitmap

Unfortunately the data is stored in BGR format, so we need to swap the red and blue bytes to make the bitmap an RGB image. At the same time, we set the alpha value to 255. You can change this value to anything you want; this demo does not use alpha, so it has no effect in this tutorial!

// Convert From BGR To RGB Format And Add An Alpha Value Of 255

We get the image data from pBits. When generating the texture, we use lWidthPixels and lHeightPixels one last time to set the texture width and height. After the 2D texture has been generated, we can clean things up. We no longer need the temporary bitmap or the temporary device context; both of these are deleted. We can also release pPicture... YAY!!!

glGenTextures(1, &texid);                      // Create The Texture
// Typical Texture Generation Using Data From The Bitmap
// (Modify This If You Want Mipmaps)
DeleteObject(hbmpTemp);                        // Delete The Object
pPicture->Release();                           // Decrements IPicture Reference Count
return TRUE;                                   // Return True (All Good)

The first thing we do is create a string with the name of our extension. We then allocate enough memory to hold the list of OpenGL extensions supported by the user's video card. The list of supported extensions is retrieved with the command glGetString(GL_EXTENSIONS), and the information returned is copied into glextstring. Once we have the list of supported extensions, we use strstr to see if our extension (Extension_Name) is in the list of supported extensions (glextstring). If the extension is not supported, FALSE is returned and the program ends. If everything goes OK, we free glextstring (we no longer need the list of supported extensions).
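The BGR-to-RGB conversion with a fixed alpha of 255 described above can be sketched on a raw 32-bit pixel buffer; the helper name is mine:

```cpp
// Swap the blue and red bytes of every 32-bit BGRA pixel and force the
// alpha byte to 255, turning the buffer into RGBA data.
void bgrToRgba(unsigned char *pixels, long pixelCount)
{
    for (long i = 0; i < pixelCount; ++i)
    {
        unsigned char *p = pixels + i * 4;
        unsigned char temp = p[0];  // blue
        p[0] = p[2];                // red moves into byte 0
        p[2] = temp;                // blue moves into byte 2
        p[3] = 255;                 // opaque alpha
    }
}
```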
int Extension_Init()
// Allocate Memory For Our Extension String
if (!strstr(glextstring, Extension_Name))      // Check To See If The Extension Is Supported
free(glextstring);                             // Free Allocated Memory

Sorry, this is one of those bits of code that is very hard to explain in simple terms (at least for me).

// Setup And Enable glFogCoordEXT
return TRUE;

By the time we get to this section of code, our program has an RC (rendering context). This is important because you need to have a rendering context before you can check whether an extension is supported by the user's video card. So we call Extension_Init() to see if the card supports the extension. If the extension is not supported, Extension_Init() returns false and the check fails. This will cause the program to end. If you wanted to display some type of message box, you could; currently the program will just fail to run. If the extension is supported, we attempt to load our wall.bmp texture. The ID for this texture will be texture[0]. If for some reason the texture does not load, the program will end.

Initialization is simple. We enable 2D texture mapping. We set the clear color to black and the clear depth to 1.0f. We set depth testing to less-than-or-equal and enable depth testing. The shade model is set to smooth shading, and we select nicest for our perspective correction.

BOOL Initialize (GL_Window* window, Keys* keys)  // Any GL Init Code & User Initialization Goes Here
// Start Of User Initialization
if (!BuildTexture("data/wall.bmp", texture[0]))  // Load The Wall Texture
glEnable(GL_TEXTURE_2D);                         // Enable Texture Mapping

We then need to set the fog start position. This is the least dense section of fog. To make things simple, we will use 1.0f as the least dense value (FOG_START). We will use 0.0f as the most dense area of fog (FOG_END). According to all of the documentation I have read, setting the fog hint to GL_NICEST causes the fog to be rendered per pixel. Using GL_FASTEST will render the fog per vertex.
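The strstr test that Extension_Init performs boils down to a substring search over the space-separated extension list. A sketch with a made-up list (the real string comes from glGetString(GL_EXTENSIONS)):

```cpp
#include <cstring>

// Return 1 if extName appears in the space-separated extension string,
// 0 otherwise -- the same test Extension_Init performs.
int extensionSupported(const char *extList, const char *extName)
{
    return std::strstr(extList, extName) != 0;
}
```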
I personally do not see a difference. The last glFogi(...) command tells OpenGL that we want to set our fog based on vertex coordinates. This allows us to position the fog anywhere in our scene without affecting the entire scene (cool!).

We set the starting camz value to -19.0f. The actual hallway is 30 units in length, so -19.0f moves us almost to the beginning of the hallway (the hallway is rendered from -15.0f to +15.0f on the Z axis).

// Set Up Fog
camz = -19.0f;                                 // Set Camera Z Position To -19.0f
return TRUE;                                   // Return TRUE (Initialization Successful)

void Deinitialize (void)                       // Any User DeInitialization Goes Here

If the F1 key is pressed, we toggle from fullscreen to windowed mode or from windowed mode to fullscreen. The other two keys we check for are the up and down arrow keys. If the UP key is pressed and the value of camz is less than 14.0f, we increase camz. This will move the hallway towards the viewer. If we went past 14.0f, we would go right through the back wall. We don't want that to happen :) If the DOWN key is pressed and the value of camz is greater than -19.0f, we decrease camz. This will move the hallway away from the viewer. If we went past -19.0f, the hallway would be too far into the screen and you would see the entrance to the hallway. Again, this wouldn't be good!

The value of camz is increased and decreased based on the number of milliseconds that have passed, divided by 100.0f. This should force the program to run at the same speed on all types of processors.

void Update (DWORD milliseconds)               // Perform Motion Updates Here
if (g_keys->keyDown [VK_F1])                   // Is F1 Being Pressed?
if (g_keys->keyDown [VK_UP] && camz<14.0f)     // Is UP Arrow Being Pressed?
if (g_keys->keyDown [VK_DOWN] && camz>-19.0f)  // Is DOWN Arrow Being Pressed?

By increasing or decreasing the value of camz, the hallway will move closer to or further away from the viewer. This will give the impression that the viewer is moving forward or backward through the hall...
Simple but effective!

void Draw (void)
glTranslatef(0.0f, 0.0f, camz);                // Move To Our Camera Z Position

We want this wall to be in the thickest of the fog. If you look at the Init section of code, you will see that GL_FOG_END is the most dense section of fog, and it has a value of 0.0f. Fog coordinates are applied the same way you apply texture coordinates. GL_FOG_END has the most fog and a value of 0.0f, so for our first vertex we pass glFogCoordfEXT a value of 0.0f. This will give the bottom (-2.5f on the Y-axis) left (-2.5f on the X-axis) vertex a fog density of 0.0f. We assign 0.0f to the other three glFogCoordfEXT vertices as well; we want all four points (way in the distance) to be in dense fog. Hopefully by now you understand texture mapping coordinates and glVertex coordinates, so I shouldn't have to explain these :)

glBegin(GL_QUADS);                             // Back Wall

Like all quads, the floor has four points. However, the Y value is always -2.5f. The left vertex is -2.5f, the right vertex is 2.5f, and the floor runs from -15.0f on the Z-axis to +15.0f on the Z-axis. We want the section of floor way in the distance to have the most fog, so once again we give those glFogCoordfEXT vertices a value of 0.0f. Notice that any vertex drawn at -15.0f has a glFogCoordfEXT value of 0.0f?

The sections of floor closest to the viewer (+15.0f) will have the least amount of fog. GL_FOG_START is the least dense fog and has a value of 1.0f, so for these points we will pass a value of 1.0f to glFogCoordfEXT. What you should see if you run the program is really dense fog on the floor near the back and light fog up close. The fog is not dense enough to fill the entire hallway; it actually dies out halfway down the hall, even though GL_FOG_START is 1.0f.

glBegin(GL_QUADS);                             // Floor
glBegin(GL_QUADS);                             // Roof
glBegin(GL_QUADS);                             // Right Wall

Of course, you can always play around with the GL_FOG_START and GL_FOG_END values to see how they affect the scene.
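The per-vertex fog density described above is a linear map of the hallway's Z range onto the fog coordinate range. A sketch of that mapping (my helper, not NeHe's code; it assumes the hallway runs from -15 at the back wall to +15 nearest the viewer):

```cpp
// Map a hallway Z position onto a fog coordinate: 0.0 (densest fog,
// GL_FOG_END) at the back wall, 1.0 (lightest fog, GL_FOG_START) at
// the front of the hallway.
float fogCoordForZ(float z)
{
    return (z + 15.0f) / 30.0f;
}
```

Feeding each vertex's Z through a map like this, instead of hard-coding 0.0f and 1.0f, is one way to extend the effect to geometry at arbitrary depths.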
The effect does not look convincing if you swap the values. The illusion is created by the back wall being completely orange! The effect looks best in dead ends or tight corners where the player cannot get behind the fog! If you plan to use this type of fog in a 3D engine, you may want to adjust the START and END values based on where the player is standing and which direction they are facing the fog from!

glBegin(GL_QUADS);                             // Left Wall
glFlush ();                                    // Flush The GL Rendering Pipeline

I wanted to make a 3D room with fog in one corner of the room. Unfortunately, I had very little time to work on the code. Even though the hallway in this tutorial is very simple, the actual fog effect is quite cool! Modifying the code for use in projects of your own should take very little effort.

It is important to note that this is just ONE of many different ways to create volumetric fog. The same effect can be recreated using blending, particles, masks, etc. This tutorial shows you how to use glFogCoordfEXT... it's fast, looks great and is very easy to use! If you modify the view so you can see outside the hallway, you will see that the fog is contained inside the hallway!

As always, if you find mistakes in this tutorial, let me know. If you think you can describe a section of code better (my wording is not always clear), send me an email! A lot of the text was written late at night, and although it's not an excuse, my typing gets a little worse as I get more sleepy. Please email me if you find duplicate words, spelling mistakes, etc.

The original idea for this tutorial was sent to me a long time ago. Since then I have lost the original email. To the person that sent this idea in... thank you!

Jeff Molofee (NeHe)
-- Author: 一分之千 -- Posted: 10/31/2007 8:32:00 PM --

Lesson 42

A picture-in-picture effect, pretty cool, right? Viewports make it easy, but rendering the scene four times can really cut into your frame rate :)

void ReshapeGL (int width, int height)         // Reshape the window when it is moved or resized

LRESULT CALLBACK WindowProc (HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)  // Window procedure
switch (uMsg)                                  // Process messages
window.init.width = 1024;                      // Width

#include <windows.h>
#include "NeHeGL.h"

#pragma comment( lib, "opengl32.lib" )

GL_Window* g_window;

int mx, my;                                    // Current maze position
const width = 128;                             // Maze size
BOOL done;                                     // Has the maze been finished?
BYTE r[4], g[4], b[4];                         // Random colors
GLfloat xrot, yrot, zrot;                      // Object rotation
GLUquadricObj *quadric;                        // Quadric object

void UpdateTex(int dmx, int dmy)               // Update the texture

void Reset (void)
srand(GetTickCount());                         // Seed the random number generator
for (int loop=0; loop<4; loop++)               // Loop to pick random colors
mx=int(rand()%(width/2))*2;

BOOL Initialize (GL_Window* window, Keys* keys)  // Initialization
g_window = window;
Reset();                                       // Reset the maze texture
// Set texture parameters
glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
glDepthFunc (GL_LEQUAL);
glEnable(GL_COLOR_MATERIAL);
glEnable(GL_TEXTURE_2D);
quadric=gluNewQuadric();
glEnable(GL_LIGHT0);
return TRUE;

void Deinitialize (void)

void Update (float milliseconds)               // Update the state
if (g_keys->keyDown [VK_ESCAPE])               // Handle keyboard input
if (g_keys->keyDown [VK_F1])
if (g_keys->keyDown [' '] && !sp)
if (!g_keys->keyDown [' '])
xrot+=(float)(milliseconds)*0.02f;
done=TRUE;                                     // Loop over all texels; if any is 0 the maze is not finished yet
if (done)                                      // If finished, wait five seconds then reset
// Check whether we have been here before
dir=int(rand()%4);                             // Pick a random direction
if ((dir==0) && (mx<=(width-4)))               // Move right, update the position
if ((dir==1) && (my<=(height-4)))              // Move down, update the position
if ((dir==2) && (mx>=2))                       // Move left, update the position
if ((dir==3) && (my>=2))                       // Move up, update the position
UpdateTex(mx,my);                              // Update the texture

void Draw (void)                               // Drawing
GetClientRect(g_window->hWnd, &rect);          // Get the window size
// Bind the texture to be updated
glClear (GL_COLOR_BUFFER_BIT);
for (int loop=0; loop<4; loop++)               // Loop over the 4 viewports

If the main window is 1024x768, the result is a viewport with its origin at (0, 384), a width of 512 and a height of 384. The viewport looks like the image below.
After setting the viewport, we select the projection matrix as the current matrix, reset it, and set up a 2D orthographic projection. We want the orthographic view to fill the entire viewport, so we set the left value to 0 and the right value to window_width/2 (the same as the viewport). Likewise, we set the bottom value to window_height/2 and the top value to 0, giving the projection the same height as the viewport. The top-left corner of this orthographic view has the coordinates (0,0) and the bottom-right corner has the coordinates (window_width/2, window_height/2).

if (loop==0)                                   // Draw the top-left viewport

The second viewport looks like this:

if (loop==1)                                   // Draw the top-right viewport

The third viewport looks like this (its perspective view is set up the same way as the second):

if (loop==2)                                   // Draw the bottom-right viewport

The fourth viewport looks like this:

if (loop==3)                                   // Draw the bottom-left viewport

glMatrixMode (GL_MODELVIEW);
glClear (GL_DEPTH_BUFFER_BIT);
if (loop==0)                                   // Draw the top-left view
if (loop==1)                                   // Draw the top-right view
glRotatef(xrot,1.0f,0.0f,0.0f);
glEnable(GL_LIGHTING);
if (loop==2)                                   // Draw the bottom-right view
glBegin(GL_QUADS);
if (loop==3)                                   // Draw the bottom-left view
glEnable(GL_LIGHTING);
glFlush ();

I hope you enjoy this tutorial... if you find any errors in the code, or you feel you could make this tutorial better, please let me know (likewise, if you spot any problems in my translation, please tell me).
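The four viewport rectangles above can be computed from the window size. This sketch (my helper, assuming OpenGL's bottom-left window origin) reproduces the origins and sizes the text describes for a 1024x768 window:

```cpp
struct ViewportRect { int x, y, w, h; };

// Compute the viewport rectangle for one of the four quadrants.
// loop 0: top-left, 1: top-right, 2: bottom-right, 3: bottom-left.
ViewportRect quadrantViewport(int loop, int windowWidth, int windowHeight)
{
    int w = windowWidth / 2, h = windowHeight / 2;
    ViewportRect r = {0, 0, w, h};
    if (loop == 0) { r.x = 0; r.y = h; }   // top-left quadrant
    if (loop == 1) { r.x = w; r.y = h; }   // top-right quadrant
    if (loop == 2) { r.x = w; r.y = 0; }   // bottom-right quadrant
    if (loop == 3) { r.x = 0; r.y = 0; }   // bottom-left quadrant
    return r;
}
```

Each rectangle would then be passed to glViewport(r.x, r.y, r.w, r.h) before drawing that quadrant's scene.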
-- Author: 一分之千 -- Posted: 10/31/2007 8:38:00 PM --

Lesson 42

Welcome to another fun-filled tutorial. This time I will show you how to display multiple viewports in a single window. The viewports will resize correctly in windowed mode. Two of the views use lighting; one of the views is ortho and three are perspective. To keep the tutorial exciting, you will also learn about the maze code used in this demo, how to render to a texture (yet again), and how to get the current window's dimensions. Once you understand this tutorial, making split-screen games or 3D applications with multiple views should be a snap! With that said, let's dive into the code!!!

You can use either the latest NeHeGL code or the IPicture code as the main basecode. The first file we need to look at is the NeHeGL.cpp file. Three sections of code have been modified; I will list just the sections of code that have changed. The first and most important thing that has changed is ReshapeGL(). This is where we used to set up the screen dimensions (our main viewport). All of the main viewport setup is done in our main drawing loop now, so all we do here is set up the main window.

void ReshapeGL (int width, int height)         // Reshape The Window When It's Moved Or Resized
// Process Window Message Callbacks
// Get The Window Context
switch (uMsg)                                  // Evaluate Window Message
// Program Entry (WinMain)
// Fill Out Application Data
// Fill Out Window
// Window Title
window.init.width = 1024;                      // Window Width

We start off by including the standard list of header and library files.

#include <windows.h>                           // Header File For Windows
#include "NeHeGL.h"                            // Header File For NeHeGL

#pragma comment( lib, "opengl32.lib" )         // Search For OpenGL32.lib While Linking

GL_Window* g_window;                           // Window Structure

mx and my keep track of which room in the maze we are currently in. Each room is separated by a wall (so rooms are 2 units apart). width and height are used to build our texture; they are also the width and height of the maze.
The reason we make the maze and the texture the same size is so that each pixel drawn in the maze is one pixel in the texture. I like width and height set to 256, although it takes longer to build the maze. If your video card can handle large textures, try increasing the values by a power of 2 (256, 512, 1024).

Make sure you do not increase the values too much. If the main window is 1024 pixels wide, and each viewport is half the size of the main window, the widest you should make your texture is the width of the window / 2. If you make your texture 1024 pixels wide but your viewport is only 512 pixels wide, every second pixel will overlap because there is not enough room to fit all the pixels of the texture in the viewport. The same goes for the texture height: it should be at most the height of the window / 2. Of course, you have to round down to the nearest power of 2.

// User Defined Variables
const width = 128;                             // Maze Width (Must Be A Power Of 2)

sp is used to check if the spacebar is being held down. By pressing space, the maze is reset and the program starts drawing a new maze. If we didn't check whether the spacebar is being held, the maze would reset many times during the split second that the spacebar is pressed. This variable makes sure that the maze is only reset once per press.

BOOL done;                                     // Flag To Let Us Know When It's Done

tex_data points to our texture data.

BYTE r[4], g[4], b[4];                         // Random Colors (4 Red, 4 Green, 4 Blue)

Finally, we set up a quadric object so we can draw a cylinder and a sphere using gluCylinder and gluSphere. Much easier than drawing the objects manually.

GLfloat xrot, yrot, zrot;                      // Used For Rotation Of Objects
GLUquadricObj *quadric;                        // The Quadric Object

The first line below sets the red (0) color component to 255. The second line sets the green (1) component to 255 and the last line sets the blue (2) component to 255. The end result is a bright white pixel at dmx, dmy.
void UpdateTex(int dmx, int dmy)               // Update Pixel dmx, dmy On The Texture

The first line of code does the clearing. tex_data points to our texture data. We need to clear width (the width of our texture) multiplied by height (the height of our texture) multiplied by 3 (red, green, blue). Clearing this memory sets all bytes to 0. If all three color values are 0, our texture will be completely black!

void Reset (void)                              // Reset The Maze, Colors, Start Point, Etc.

We have four viewports, so we need to make a loop from 0 to 3. We assign each color (red, green, blue) a random value from 128 to 255. The reason I add 128 is because I want bright colors. With a minimum value of 0 and a maximum value of 255, 128 is roughly 50% brightness.

srand(GetTickCount());                         // Try To Get More Randomness
for (int loop=0; loop<4; loop++)               // Loop So We Can Assign 4 Random Colors
mx=int(rand()%(width/2))*2;                    // Pick A New Random X Position

BOOL Initialize (GL_Window* window, Keys* keys)  // Any GL Init Code & User Initialization Goes Here
g_window = window;                             // Window Values

Once everything has been reset, we need to create our initial texture. The first two texture parameters CLAMP our texture to the range [0,1]. This prevents wrapping artifacts when mapping a single image onto an object. To see why it's important to clamp the texture, try removing the two lines of code. Without clamping, you will notice a thin line at the top of the texture and on the right side of the texture. The lines appear because linear filtering tries to smooth the entire texture, including the borders. If a pixel is drawn too close to a border, a line appears on the opposite side of the texture.

We are going to use linear filtering to make things look a little smoother. It's up to you what type of filtering you use; if it runs really slowly, try changing the filtering to GL_NEAREST. Finally, we build an RGB 2D texture using tex_data (the alpha channel is not used).

Reset();                                       // Call Reset To Build Our Initial Texture, Etc.
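The texture clear in Reset and the pixel write in UpdateTex amount to simple byte writes into the RGB buffer. A sketch, using a small fixed 8x8 size for illustration instead of the tutorial's width and height:

```cpp
#include <cstring>

const int TEX_W = 8, TEX_H = 8;                  // small stand-in size
unsigned char tex_data[TEX_W * TEX_H * 3];       // 3 bytes (red, green, blue) per texel

// Clear the whole texture to black, as described for Reset().
void clearTexture()
{
    std::memset(tex_data, 0, sizeof(tex_data));
}

// Set texel (dmx, dmy) to bright white, as described for UpdateTex().
void updateTex(int dmx, int dmy)
{
    tex_data[(dmx + TEX_W * dmy) * 3 + 0] = 255; // red
    tex_data[(dmx + TEX_W * dmy) * 3 + 1] = 255; // green
    tex_data[(dmx + TEX_W * dmy) * 3 + 2] = 255; // blue
}
```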
// Start Of User Initialization

Enabling GL_COLOR_MATERIAL lets you color your objects with glColor when lighting is enabled. This method is called color tracking, and it is often used instead of performance-draining calls to glMaterial. I get a lot of emails asking how to change the color of an object... hope the information is useful! For those of you that have emailed me asking why textures in your projects are weird colors or tinted with the current glColor()... make sure you do not have GL_COLOR_MATERIAL enabled!

* Thanks to James Trotter for the correct explanation of how GL_COLOR_MATERIAL works. I had said it lets you color your textures; however, it actually lets you color objects.

Finally, we enable 2D texture mapping.

glClearColor (0.0f, 0.0f, 0.0f, 0.0f);         // Black Background
glDepthFunc (GL_LEQUAL);                       // The Type Of Depth Testing
glEnable(GL_COLOR_MATERIAL);                   // Enable Color Material (Allows Us To Tint Objects)
glEnable(GL_TEXTURE_2D);                       // Enable Texture Mapping
quadric=gluNewQuadric();                       // Create A Pointer To The Quadric Object
glEnable(GL_LIGHT0);                           // Enable Light0 (Default GL Light)
return TRUE;                                   // Return TRUE (Initialization Successful)

void Deinitialize (void)                       // Any User DeInitialization Goes Here

We need to set up a variable called dir. We will use this variable to randomly travel up, right, down or left. We watch to see if the spacebar is pressed. If it is, and it's not being held down, we reset the maze. If the spacebar is released, we set sp to FALSE so that our program knows it is no longer being held down.

void Update (float milliseconds)               // Perform Motion Updates Here
if (g_keys->keyDown [VK_ESCAPE])               // Is ESC Being Pressed?
if (g_keys->keyDown [VK_F1])                   // Is F1 Being Pressed?
if (g_keys->keyDown [' '] && !sp) // Check To See If Spacebar Is Pressed
if (!g_keys->keyDown [' ']) // Check To See If Spacebar Has Been Released
xrot+=(float)(milliseconds)*0.02f; // Increase Rotation On The X-Axis

If tex_data[((x+(width*y))*3)] equals zero, we know that room has not been visited yet, and does not have a pixel drawn in it yet. If there was a pixel, the value would be 255. We only check the red pixel value, because we know the red value will either be 0 (empty) or 255 (updated).

done=TRUE; // Set done To True
if (done) // If done Is True Then There Were No Unvisited Rooms

If the red pixel value of a room equals 255 we know that room has been visited (because it has been updated with UpdateTex). If mx (current x position) is less than 2 we know that we are almost at the far left of the screen and can not go any further left.

If we are trapped or we are too close to a border, we give mx and my random values. We then check to see if the pixel at that location has already been visited. If it has not, we grab new random mx, my values until we find a cell that has already been visited. We want new paths to branch off old paths, which is why we need to keep searching until we find an old path to launch from. To keep the code to a minimum, I don't bother checking if mx-2 is less than 0. If you want 100% error checking, you can modify this section of code to prevent checking memory that does not belong to the current texture.

// Check To Make Sure We Are Not Trapped (Nowhere Else To Move)

After we get a random direction, we check to see if the value of dir is equal to 0 (move right). If it is, and we are not already at the far right side of the maze, we check the room to the right of the current room. If the room to the right has not been visited, we knock out the wall between the two rooms with UpdateTex(mx+1,my) and then we move to the new room by increasing mx by 2.
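The "knock out the wall, step two cells" move for dir==0 can be sketched without any OpenGL. This is my own miniature version on a tiny 8x8 texture; the real tutorial also sets the green and blue channels and uses a larger texture, but the red-channel logic is the same.

```c
#define W 8                               /* small texture for the sketch */
#define H 8

static unsigned char tex_data[W * H * 3]; /* RGB bytes, red 0 = unvisited */

/* Mark the cell at (dmx, dmy) as visited, as UpdateTex does for
   the red channel. */
static void mark_cell(int dmx, int dmy)
{
    tex_data[(dmx + W * dmy) * 3] = 255;
}

/* One "move right" step: if we are not at the far right and the room two
   cells to the right is unvisited, knock out the wall between the rooms
   and step into the new room. Returns the (possibly updated) mx. */
static int move_right(int mx, int my)
{
    if (mx <= W - 4 && tex_data[((mx + 2) + W * my) * 3] == 0)
    {
        mark_cell(mx + 1, my);            /* knock out the wall */
        mx += 2;                          /* step into the new room */
        mark_cell(mx, my);                /* mark the new room visited */
    }
    return mx;
}
```

Rooms live on even coordinates and walls on the odd cells between them, which is why every move changes mx or my by 2 and the wall pixel sits at the halfway point.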
dir=int(rand()%4); // Pick A Random Direction
if ((dir==0) && (mx<=(width-4))) // If The Direction Is 0 (Right) And We Are Not At The Far Right
if ((dir==1) && (my<=(height-4))) // If The Direction Is 1 (Down) And We Are Not At The Bottom
if ((dir==2) && (mx>=2)) // If The Direction Is 2 (Left) And We Are Not At The Far Left
if ((dir==3) && (my>=2)) // If The Direction Is 3 (Up) And We Are Not At The Top
UpdateTex(mx,my); // Update Current Room

We can get the left, right, top and bottom values by using RECT. RECT holds the coordinates of a rectangle (the left, right, top and bottom coordinates, to be exact). To grab the coordinates for our screen, we use GetClientRect( ). The first parameter we pass is our current window handle. The second parameter is the structure that will hold the information returned (rect).

void Draw (void) // Our Drawing Routine
GetClientRect(g_window->hWnd, &rect); // Get Window Dimensions

This is a very fast way to use updated texture data without having to rebuild the texture. It's also important to note that this command will not BUILD a texture. You have to create a texture before you can use this command to update it!

// Update Our Texture... This Is The Key To The Program's Speed... Much Faster Than Rebuilding The Texture Each Time
glClear (GL_COLOR_BUFFER_BIT); // Clear Screen

The first thing we do is set the color of the current viewport using glColor3ub(r,g,b). This may be new to a few of you. It's just like glColor3f(r,g,b) but it uses unsigned bytes instead of floating point values. Remember earlier I said it was easier to assign a random value from 0 to 255 as a color. Well, now that we have such large values for each color, this is the command we need to use to set the colors properly. glColor3f(0.5f,0.5f,0.5f) is 50% brightness for red, green and blue. glColor3ub(127,127,127) is also 50% brightness for red, green and blue. If loop is 0, we would be selecting r[0],g[0],b[0].
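As a quick aside, the glColor3f / glColor3ub equivalence above is just a scale by 255. This tiny conversion helper is my own sketch (it is not part of the tutorial's code) showing why 0.5f and 127 both mean 50% brightness.

```c
/* Map a glColor3f-style float in [0, 1] onto the glColor3ub byte
   scale; 0.5f lands on 127, matching the 50% brightness example. */
static unsigned char color_float_to_ub(float c)
{
    if (c < 0.0f) c = 0.0f;   /* clamp out-of-range input */
    if (c > 1.0f) c = 1.0f;
    return (unsigned char)(c * 255.0f);
}
```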
If loop is 1, we would be selecting the colors stored in r[1],g[1],b[1]. That way, each scene has its own random color.

for (int loop=0; loop<4; loop++) // Loop To Draw Our 4 Views

If the main window is 1024x768, we would end up with a viewport at 0,384 with a width of 512 and a height of 384.

After setting up the viewport, we select the projection matrix, reset it and then set up our 2D ortho view. We want the ortho view to fill the entire viewport. So we give it a left value of 0 and a right value of window_width/2 (the same width as the viewport). We also assign it a bottom value of window_height/2 and a top value of 0. This gives us the same height as the viewport. The top left of our ortho view will be 0,0. The bottom right of our ortho view will be window_width/2, window_height/2.

if (loop==0) // If We Are Drawing The First Scene

For the second viewport, we again select the projection matrix and reset it, but this time we set up a perspective view with a 45 degree field of view, a near value of 0.1f and a far value of 500.0f.

if (loop==1) // If We Are Drawing The Second Scene

For the third viewport, we set up a perspective view exactly the same way we did for the second viewport.

if (loop==2) // If We Are Drawing The Third Scene

For the fourth viewport, we set up a perspective view exactly the same way we did for the second viewport.

if (loop==3) // If We Are Drawing The Fourth Scene

glMatrixMode (GL_MODELVIEW); // Select The Modelview Matrix
glClear (GL_DEPTH_BUFFER_BIT); // Clear Depth Buffer

Remember that the top left of the first viewport is 0,0 and the bottom right is window_width/2, window_height/2. So that means the top right of our quad is at window_width/2, 0. The top left is at 0,0, the bottom left is at 0, window_height/2 and the bottom right is at window_width/2, window_height/2.
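The viewport arithmetic above is easy to check in isolation. In the sketch below, the struct and helper are mine; viewport 0 being the top-left quadrant (0,384 with size 512x384 for a 1024x768 window, in OpenGL's bottom-left coordinates) comes from the tutorial's example, while the exact ordering of the other three quadrants is my assumption, not something the text above pins down.

```c
typedef struct { int x, y, w, h; } Viewport;

/* Quadrant `loop` of a window, in OpenGL's bottom-left coordinates.
   Viewport 0 is the top-left quadrant (matches the 1024x768 example);
   the ordering of quadrants 1-3 is assumed, not taken from the text. */
static Viewport quadrant(int loop, int window_width, int window_height)
{
    Viewport v;
    v.w = window_width / 2;                               /* half width  */
    v.h = window_height / 2;                              /* half height */
    v.x = (loop == 1 || loop == 2) ? window_width / 2 : 0;
    v.y = (loop == 0 || loop == 1) ? window_height / 2 : 0;
    return v;
}
```

Each rectangle would then be handed straight to glViewport(v.x, v.y, v.w, v.h) inside the loop.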
Notice that in ortho mode, we can actually work with pixels rather than units (depending on how we set the viewport up).

if (loop==0) // Are We Drawing The First Image? (Original Texture... Ortho)

We enable lighting, draw our sphere and then disable lighting. The sphere has a radius of 4 units with 32 slices and 32 stacks. If you feel like playing around, try changing the number of stacks or slices to a lower number. By reducing the number of stacks / slices, you reduce the smoothness of the sphere. Texture coordinates are generated automatically!

if (loop==1) // Are We Drawing The Second Image? (3D Texture Mapped Sphere... Perspective)
glRotatef(xrot,1.0f,0.0f,0.0f); // Rotate By xrot On The X-Axis
glEnable(GL_LIGHTING); // Enable Lighting

We move 2 units into the screen and then tilt the quad back 45 degrees. This makes the top of the quad further away from us, and the bottom of the quad closer to us! We then rotate on the z-axis to get the quad spinning and draw the quad. We need to set the texture coordinates manually.

if (loop==2) // Are We Drawing The Third Image? (Texture At An Angle... Perspective)
glBegin(GL_QUADS); // Begin Drawing A Single Quad

We enable lighting to give the object some nice shading and then we translate -2 units on the z-axis. The reason we do this is so that our object rotates around its center point rather than rotating around one of its ends. The cylinder is 1.5 units wide on one end, 1.5 units wide on the other end, has a length of 4 units and is made up of 32 slices (panels around) and 16 stacks (panels along its length). In order to rotate around the center we need to translate half the length. Half of 4 is 2! After translating, rotating and then translating some more, we draw the cylinder and then disable lighting.

if (loop==3) // Are We Drawing The Fourth Image? (3D Texture Mapped Cylinder... Perspective)
glEnable(GL_LIGHTING); // Enable Lighting
glFlush (); // Flush The GL Rendering Pipeline

You can use the code to display a variety of images all running in their own viewport, or you could use the code to display multiple views of a certain object. What you do with this code is up to you. I hope you guys enjoy the tutorial... If you find any mistakes in the code, or you feel you can make this tutorial even better, let me know.

Jeff Molofee (NeHe)