Note that all the functions that operate on degrees are "high level" functions. For OpenGL itself it makes no difference whether it receives radians or degrees -- they are internally converted to transformation matrices anyway, so there is no computational gain in using one or the other. So why complicate things for people when you can let them use degrees? Anyone coding seriously in OpenGL will provide their own matrices, computed from quaternions, anyway. In the same spirit we could ask why glRotatef and gluPerspective exist at all, since matrices are more elegant in every respect and allow a finer degree of control.
Also note: all of the functions using degrees are deprecated in the current standard (3.x). I'd say that since OpenGL was designed with the end user in mind, degrees were used because one can specify important angles such as 90 degrees exactly -- you should be able to get an exact rotation matrix for angles like that.
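To make that exactness point concrete, here is a minimal C++ sketch (the 90-degree special case and all names are my own illustration, not part of any OpenGL API): with radians, a quarter turn is an irrational number, so cos() only returns approximately zero, whereas a degree-based entry point can recognize 90 exactly and use the exact matrix entries 0 and 1.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;

    // Radians: pi/2 cannot be represented exactly, so the cosine of a
    // "quarter turn" comes out as a tiny non-zero number (about 6.1e-17).
    std::printf("cos(pi/2)      = %.17g\n", std::cos(pi / 2.0));

    // Degrees: 90.0 is exactly representable, so a degree-based API can
    // special-case it and plug the exact values 0 and 1 into the matrix.
    double angle_deg = 90.0;
    double c = (angle_deg == 90.0) ? 0.0 : std::cos(angle_deg * pi / 180.0);
    double s = (angle_deg == 90.0) ? 1.0 : std::sin(angle_deg * pi / 180.0);
    std::printf("cos, sin at 90 = %g, %g\n", c, s);
    return 0;
}
```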
Code is easier to read, it eases the learning curve for newbies, and it allows quick hacking. As stated already, degrees DO have an advantage: humans are simply more used to degrees than to raw radian values.
This is exactly the case for the red pathway: to calculate the bottom-right result we take the bottom row of the first matrix and the rightmost column of the second matrix.
To calculate the resulting value we multiply the first element of the row and the first element of the column together using normal multiplication; we do the same for the second elements, the third, the fourth and so on. The results of the individual multiplications are then summed up, and that sum is our result. Now it also makes sense that one of the requirements is that the number of columns of the left matrix equals the number of rows of the right matrix, otherwise we can't finish the operation!
The result is then a matrix with dimensions (n,m), where n is equal to the number of rows of the left-hand side matrix and m is equal to the number of columns of the right-hand side matrix. Don't worry if you have difficulties imagining the multiplications inside your head. Just keep trying to do the calculations by hand and return to this page whenever you have difficulties. Over time, matrix multiplication becomes second nature to you.
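For example, here is a small 2-by-2 case worked out term by term (the numbers are just an illustration):

$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\cdot
\begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}
=
\begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix}
=
\begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}
$$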
Let's finish the discussion of matrix-matrix multiplication with a larger example. Try to visualize the pattern using the colors.
As a useful exercise, see if you can come up with the answer to the multiplication yourself and then compare it with the resulting matrix; once you try to do a matrix multiplication by hand you'll quickly get the grasp of it.
As you can see, matrix-matrix multiplication is quite a cumbersome process and very prone to errors (which is why we usually let computers do this), and it gets problematic real quick when the matrices become larger.
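As a sketch of what "letting the computer do it" looks like, here is a minimal C++ routine for 4x4 matrices (an illustrative helper of my own, not part of OpenGL; in practice you would use a library such as GLM). The upper-left 2x2 block of the output matches the worked example above.

```cpp
#include <array>
#include <cstdio>

// A 4x4 matrix stored as m[row][column] (row-major).
using Mat4 = std::array<std::array<float, 4>, 4>;

// result[i][j] = sum over k of lhs[i][k] * rhs[k][j],
// i.e. row i of the left matrix times column j of the right matrix.
Mat4 multiply(const Mat4& lhs, const Mat4& rhs) {
    Mat4 result{};  // all elements start at zero
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                result[i][j] += lhs[i][k] * rhs[k][j];
    return result;
}

int main() {
    // The 2x2 example from above, embedded in the top-left of a 4x4 identity.
    Mat4 a = {{ {{1, 2, 0, 0}}, {{3, 4, 0, 0}}, {{0, 0, 1, 0}}, {{0, 0, 0, 1}} }};
    Mat4 b = {{ {{5, 6, 0, 0}}, {{7, 8, 0, 0}}, {{0, 0, 1, 0}}, {{0, 0, 0, 1}} }};
    Mat4 c = multiply(a, b);
    for (const auto& row : c) {
        for (float value : row) std::printf("%6.1f ", value);
        std::printf("\n");
    }
    return 0;
}
```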
If you're still thirsty for more and you're curious about some more of the mathematical properties of matrices, I strongly suggest you take a look at these Khan Academy videos about matrices. Anyway, now that we know how to multiply matrices together, we can start getting to the good stuff. Up until now we've had our fair share of vectors.
We used them to represent positions, colors and even texture coordinates. Let's move a bit further down the rabbit hole and tell you that a vector is basically an Nx1 matrix, where N is the vector's number of components (also known as an N-dimensional vector).
If you think about it, it makes a lot of sense. Vectors are, just like matrices, an array of numbers, but with only 1 column. So, how does this new piece of information help us? Well, if we have an MxN matrix we can multiply this matrix with our Nx1 vector, since the number of columns of the matrix is equal to the number of rows of the vector, so matrix multiplication is defined.
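Written out for a 2x2 matrix and a 2-component vector (a small illustrative case), the product is:

$$
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} a x + b y \\ c x + d y \end{bmatrix}
$$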
But why do we care whether we can multiply matrices with a vector? In case you're still a bit confused, let's start with a few examples and you'll soon see what we mean. In OpenGL we usually work with 4x4 transformation matrices, for several reasons, one of them being that most of our vectors are of size 4. The simplest transformation matrix we can think of is the identity matrix.
The identity matrix is an NxN matrix with only 0s except on its diagonal, which is filled with 1s; multiplying a vector by it leaves the vector completely unchanged. This becomes obvious from the rules of multiplication: the first result element is each individual element of the first row of the matrix multiplied with each element of the vector, and since every element of that row is 0 except the first, we get 1*x + 0*y + 0*z + 0*w = x; the same holds for the other elements of the vector.
When we're scaling a vector we are increasing the length of the arrow by the amount we'd like to scale, keeping its direction the same.
Since we're working in either 2 or 3 dimensions we can define scaling by a vector of 2 or 3 scaling variables, each scaling one axis (x, y or z). Let's try it on a 2D vector: we'll scale it along the x-axis by 0.5, making it twice as narrow, and along the y-axis by 2, making it twice as high. Keep in mind that OpenGL usually operates in 3D space, so for this 2D case we could set the z-axis scale to 1, leaving it unharmed. The scaling operation we just performed is a non-uniform scale, because the scaling factor is not the same for each axis.
If the scaling factor were equal on all axes it would be called a uniform scale. Let's start building a transformation matrix that does the scaling for us.
We saw from the identity matrix that each of the diagonal elements is multiplied with its corresponding vector element. What if we were to change the 1s in the identity matrix to 3s? In that case we would be multiplying each of the vector elements by 3, and thus effectively uniformly scale the vector by 3. More generally, if we represent the scaling variables as (s1, s2, s3), we can place them on the first three diagonal elements of a 4x4 matrix, keeping the fourth diagonal element at 1; the w component is used for other purposes, as we'll see later on.
Translation is the process of adding another vector on top of the original vector to return a new vector with a different position, thus moving the vector based on a translation vector.
We've already discussed vector addition, so this shouldn't be too new. Just like the scaling matrix, there are several locations on a 4-by-4 matrix that we can use to perform certain operations, and for translation those are the top 3 values of the 4th column. This wouldn't have been possible with a 3-by-3 matrix.
With a translation matrix we can move objects in any of the 3 axis directions (x, y, z), making it a very useful transformation matrix for our transformation toolkit.
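As a sketch of the two matrices described above, applied to a homogeneous vector (x, y, z, 1), with scaling factors (s1, s2, s3) and a translation vector (Tx, Ty, Tz):

$$
\begin{bmatrix} s_1 & 0 & 0 & 0 \\ 0 & s_2 & 0 & 0 \\ 0 & 0 & s_3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} s_1 x \\ s_2 y \\ s_3 z \\ 1 \end{bmatrix}
\qquad
\begin{bmatrix} 1 & 0 & 0 & T_x \\ 0 & 1 & 0 & T_y \\ 0 & 0 & 1 & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x + T_x \\ y + T_y \\ z + T_z \\ 1 \end{bmatrix}
$$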
The last few transformations were relatively easy to understand and visualize in 2D or 3D space, but rotations are a bit trickier. If you want to know exactly how these matrices are constructed, I'd recommend watching the rotation items of Khan Academy's linear algebra videos. First let's define what a rotation of a vector actually is. A rotation in 2D or 3D is represented with an angle. An angle can be given in degrees or radians, where a whole circle has 360 degrees or 2*PI radians. I prefer explaining rotations using degrees as we're generally more accustomed to them.
Rotations in 3D are specified with an angle and a rotation axis. The angle specified will rotate the object along the given rotation axis. Try to visualize this by spinning your head a certain number of degrees while continually looking down a single rotation axis. When rotating 2D vectors in a 3D world, for example, we set the rotation axis to the z-axis (try to visualize this).
Using trigonometry it is possible to transform vectors to newly rotated vectors given an angle. This is usually done via a smart combination of the sine and cosine functions commonly abbreviated to sin and cos. A discussion of how the rotation matrices are generated is out of the scope of this chapter.
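For reference (taking the usual counter-clockwise convention as given), a rotation by an angle θ around the z-axis in homogeneous coordinates looks like this, with analogous matrices for the x- and y-axes:

$$
R_z(\theta) =
\begin{bmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$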
Using the rotation matrices we can transform our position vectors around one of the three unit axes. To rotate around an arbitrary 3D axis we can combine all 3 of them by first rotating around the x-axis, then the y-axis and then the z-axis, for example. However, this quickly introduces a problem called Gimbal lock.
We won't discuss the details, but a better solution is to rotate around an arbitrary unit axis right away instead of combining the per-axis rotations. Keep in mind that even this matrix does not completely prevent gimbal lock, although it becomes a lot harder to run into. To truly prevent gimbal lock we have to represent rotations using quaternions, which are not only safer but also more computationally friendly.
However, a discussion of quaternions is outside this chapter's scope. The true power of using matrices for transformations is that we can combine multiple transformations in a single matrix thanks to matrix-matrix multiplication. Let's see if we can generate a transformation matrix that combines several transformations.
Say we have a vector (x, y, z) and we want to scale it by 2 and then translate it by (1, 2, 3). We need a translation and a scaling matrix for our required steps. Matrix multiplication is not commutative, which means their order is important.
When multiplying matrices, the right-most matrix is first multiplied with the vector, so you should read the multiplications from right to left. It is advised to first do scaling operations, then rotations and lastly translations when combining matrices, otherwise they may negatively affect each other. For example, if you would first do a translation and then a scale, the translation vector would also scale!
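Working this particular combination out (a translation by (1, 2, 3) applied after a uniform scale by 2), the combined matrix and its effect on the vector are:

$$
\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} 2x + 1 \\ 2y + 2 \\ 2z + 3 \\ 1 \end{bmatrix}
$$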
The vector is first scaled by two and then translated by (1, 2, 3). Now that we've explained all the theory behind transformations, it's time to see how we can actually use this knowledge to our advantage.
Otherwise you'd have to break up the objects by gross distance (maybe into 3 or 4 groups) and render them separately, drawing the furthest first, like the painter's algorithm.
Get yourself a calculator (either an old-fashioned physical one, or whatever is built into your OS). For a given number of bits n, the values that can be represented (assuming unsigned integer types) are in the range [0, 2^n - 1]. The z buffer is subject to rounding errors, just like the color buffer; however, since the z buffer is used to determine how objects overlap, rounding errors can lead to ugly results. Consider two planes that are almost parallel: the z test may then return different results for different pixels due to rounding errors, and as a result you'll see a strange pattern in one plane where the other shines through.
That's what z fighting means. As a rule of thumb, a 16 bit z buffer is usable, but 24 or 32 bits are better. With modern hardware, there is little reason for sticking to 16 bits anymore.
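Plugging the common depth buffer sizes into the range formula above (a quick sanity check with that calculator):

$$
2^{16} = 65\,536 \qquad 2^{24} = 16\,777\,216 \qquad 2^{32} = 4\,294\,967\,296
$$

so a 24-bit buffer can distinguish 2^8 = 256 times as many depth values as a 16-bit one.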
Specifically, it controls how much precision the depth buffer stores, and hence how finely it can compare the depth of each individual pixel. It's a suggestion only, so Allegro will try to find something even if the specific number you put in is unavailable.
The original Voodoo had a 16-bit depth buffer, so you can safely assume that at least 16 bits are available.
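As a sketch of what that suggestion looks like in code, here is a minimal Allegro 5 example (an assumption on my part; the original discussion may refer to an older Allegro or AllegroGL API, so treat the exact calls as illustrative):

```cpp
#include <allegro5/allegro.h>
#include <cstdio>

int main() {
    al_init();

    // Suggest a 24-bit depth buffer; with ALLEGRO_SUGGEST the library falls
    // back to whatever the driver can provide instead of failing outright.
    al_set_new_display_option(ALLEGRO_DEPTH_SIZE, 24, ALLEGRO_SUGGEST);

    ALLEGRO_DISPLAY* display = al_create_display(640, 480);
    if (!display) return 1;

    // Check how many depth bits we actually got.
    std::printf("depth bits: %d\n",
                al_get_display_option(display, ALLEGRO_DEPTH_SIZE));

    al_destroy_display(display);
    return 0;
}
```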