Three.js —— Texture

Mesh is an important concept in Three.js; it is a key component used to render three-dimensional objects in a scene. A Mesh creates a visual object that can be displayed in the renderer by combining Geometry and Material.

In Three.js, a Material can contain multiple Textures that describe the details of an object's surface: normal maps or displacement maps for bumps and dents, spherical maps or cube maps for environment mapping, the surface's kd (diffuse coefficient, also part of the material's properties), and even baked shadows (an Ambient Occlusion map) that do not need to be computed in real time.
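
To make this concrete, below is a minimal sketch (the texture paths and the `scene` variable are assumptions, not part of the original post) of a Mesh whose MeshStandardMaterial combines several of the textures discussed below:

```js
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

// One material, several textures: each map describes a different aspect of the surface.
const material = new THREE.MeshStandardMaterial({
  map: loader.load('textures/brick_color.jpg'),          // base color (kd)
  normalMap: loader.load('textures/brick_normal.jpg'),   // bumps and dents via perturbed normals
  aoMap: loader.load('textures/brick_ao.jpg'),           // baked ambient occlusion (no real-time cost)
  roughnessMap: loader.load('textures/brick_rough.jpg'), // per-texel roughness
  metalnessMap: loader.load('textures/brick_metal.jpg'), // per-texel metalness
});

const mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
scene.add(mesh); // assumes an existing THREE.Scene named `scene`
```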

Common Types of Textures#

Textures can be categorized into various types based on different usage scenarios. Below are some of the more common types.

Color Map#

Color Map is used to apply color information to the surface of a three-dimensional model. It is part of the object's material and is used to give the model a specific color. Color Texture can contain the base color of the object and can also be used to simulate the texture, patterns, and details of the object.

Its functions include:

  • Base Color: The most common use is to provide the object with a base color. You can apply a solid color texture to the object, giving it a specific appearance. This can be used to simulate various types of materials, such as metal, plastic, wood, etc.
  • Texture: You can apply a texture that contains patterns, textures, or details to the object, providing more control when simulating the visual details of the object's surface. This can make the object look more realistic, such as wood grain, stone patterns, etc.
  • Color Variation: By using different color areas in the texture, you can make the object's color vary in different parts, achieving artistic effects or emphasizing specific parts of the object.
  • Custom Effects: You can use Color Texture to achieve various visual effects, such as adding a brand logo to a vehicle model or giving a character model a unique appearance.
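
A minimal sketch of applying a color map (the path is a placeholder). Since a color map carries color data, recent Three.js versions expect it to be marked as sRGB:

```js
const colorMap = new THREE.TextureLoader().load('textures/wood_color.jpg'); // assumed path
colorMap.colorSpace = THREE.SRGBColorSpace; // r152+; older versions use colorMap.encoding = THREE.sRGBEncoding

const material = new THREE.MeshStandardMaterial({ map: colorMap });
```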

Alpha Map#

Alpha Map is used to specify the transparency value (Alpha channel) of each pixel on the surface of a three-dimensional model. Transparency maps allow you to control the transparency of the object during rendering, achieving transparent, semi-transparent, and opaque effects.

image

As shown in the image, after the leaf texture on the left is processed with the alpha texture (the white areas remain visible while the black areas are hidden), the black regions are masked out, resulting in the clean leaves on the right.

Its functions include:

  • Transparent Effect: Transparency maps can make certain areas of the object transparent, allowing you to achieve transparent effects like glass, water, smoke, etc.
  • Semi-Transparent Effect: Using transparency maps, you can make certain areas of the object semi-transparent, visually simulating the semi-transparent characteristics of materials, such as mist, clouds, etc.
  • Transparent Parts of Opaque Objects: Even opaque objects may have areas that need to be transparent, such as glass windows on an object.
  • Complex Patterns and Textures: Transparency maps can be combined with color maps to achieve complex patterns and textures, where some areas are transparent.
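
A sketch of the leaf example above, with assumed texture paths. The key detail is that the alpha map only takes effect when the material is made transparent (or an alphaTest threshold is set for hard cutouts):

```js
const loader = new THREE.TextureLoader();

const material = new THREE.MeshStandardMaterial({
  map: loader.load('textures/leaf_color.jpg'),      // assumed path
  alphaMap: loader.load('textures/leaf_alpha.jpg'), // white = visible, black = hidden
  transparent: true,      // required, otherwise the alpha channel is ignored
  alphaTest: 0.5,         // optional: discard fragments below this alpha for a hard cutout
  side: THREE.DoubleSide, // leaves are usually visible from both sides
});
```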

Normal Map#

Normal Map is used to simulate surface details during rendering, enhancing the visual effects of the object. Normal maps do not change the actual geometric structure of the object but change the lighting calculations by storing normal information at each pixel, making the object appear to have more detail and depth during rendering. Since it does not actually change the geometric structure, the shadows generated under real-time lighting will only reflect the original geometric structure.

Its functions include:

  • Increase Detail: Normal maps can add detail to the object's surface without increasing the polygon count. This makes the object look more realistic, as the lighting effects change due to the normals, creating a bump effect.
  • Simulate Bump Effects: Normal maps can be used to simulate bump effects on the object's surface, such as dents, protrusions, wrinkles, etc. These details will produce shadows and highlights in the lighting calculations, making the object look more textured.
  • Save Polygon Count: Using normal maps can avoid the need for a large number of polygons to represent the details of the object, thus reducing computational load and improving performance.
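
A sketch of attaching a normal map (assumed path); `normalScale` controls how strong the simulated bumps appear without touching the geometry:

```js
const material = new THREE.MeshStandardMaterial({
  normalMap: new THREE.TextureLoader().load('textures/rock_normal.jpg'), // assumed path
  normalScale: new THREE.Vector2(1, 1), // strength of the effect along U and V (0 disables it)
});
```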

Ambient Occlusion Map#

Ambient Occlusion Map is used to simulate ambient occlusion effects during rendering. It is a grayscale image where each pixel represents the degree of occlusion or relative shading of the object's surface, which can be used to adjust the lighting effects of the object during rendering, increasing the sense of detail and depth.

Its functions include:

  • Simulate Occlusion Effects: Ambient Occlusion maps can simulate occlusion and shading in the environment, making dark corners and recessed areas appear darker. This increases the depth and visual detail of the object.
  • Enhance Surface Texture: By storing bump information in the ambient occlusion map, the surface of the object can appear more textured and detailed.
  • Increase Realism: Using ambient occlusion maps can enhance the realism of the rendering, making the object look closer to the lighting and occlusion effects in the real world.
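
A sketch of using an ambient occlusion map (assumed path). Note that in older Three.js versions `aoMap` samples a second UV set, so copying `uv` into `uv2` is a common step when both sets are identical; newer versions can select the UV channel on the texture itself:

```js
const geometry = new THREE.PlaneGeometry(2, 2);
// Older Three.js versions read aoMap from the second UV set (uv2).
geometry.setAttribute('uv2', new THREE.BufferAttribute(geometry.attributes.uv.array, 2));

const material = new THREE.MeshStandardMaterial({
  aoMap: new THREE.TextureLoader().load('textures/model_ao.jpg'), // assumed path
  aoMapIntensity: 1.0, // 0 disables the occlusion, 1 applies it fully
});
```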

Metalness Map#

Metalness Map is used to control the metallic property of the object's surface during rendering. Metalness is a property that determines whether an object is metallic or non-metallic (an insulator). Metalness maps are very common in PBR (Physically Based Rendering) and help achieve more realistic rendering results.

Its functions include:

  • Control Metalness: Each pixel of the metalness map can represent the metalness property of the corresponding area. For metallic objects, the metalness value is high, while for non-metallic objects, the metalness value is low.
  • Affect Reflection: Metalness affects how the object reflects light. Metallic objects have a high reflectivity, while non-metallic objects have relatively low reflectivity.
  • Increase Realism: By using metalness maps, you can make different parts of the object exhibit different metallic properties, thus increasing the realism of the rendering.
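
A sketch of a metalness map (assumed path); the value sampled from the map is multiplied by the scalar `metalness`, so leaving the scalar at 1 lets the map drive the result:

```js
const material = new THREE.MeshStandardMaterial({
  metalnessMap: new THREE.TextureLoader().load('textures/metal_mask.jpg'), // assumed path
  metalness: 1.0, // the sampled map value is multiplied by this scalar
});
```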

Roughness Map#

Roughness Map is used to control the roughness property of the object's surface during rendering. Roughness is a property that determines the smoothness of the object's surface; rough surfaces scatter light, making the object's reflection more blurred, while smooth surfaces produce sharper reflections.

Its functions include:

  • Control Smoothness: Each pixel of the roughness map can represent the roughness property of the corresponding area. A high roughness value means the surface is rougher, while a low roughness value indicates a smoother surface.
  • Affect Reflection: The roughness of the object's surface affects the degree of light scattering, thus influencing the object's reflection behavior. Smooth surfaces produce clear reflections, while rough surfaces produce blurred reflections.
  • Increase Realism: By using roughness maps, you can set different roughness levels for different parts of the object, thus increasing the realism and visual detail of the rendering.
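
The roughness map follows the same pattern (assumed path); the sampled value is multiplied by the scalar `roughness`:

```js
const material = new THREE.MeshStandardMaterial({
  roughnessMap: new THREE.TextureLoader().load('textures/rough_mask.jpg'), // assumed path
  roughness: 1.0, // the sampled map value is multiplied by this scalar
});
```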

Code#

Here I wrote a demo to get a better feel for how each of these textures behaves. Below are some important points I ran into while learning.

Texture Magnification & Texture Minification#

Before discussing texture magnification and minification, one premise needs to be clear: the texel, the smallest unit of a texture, is not the same thing as a pixel. The former is determined by the texture's own resolution, while the latter is determined by the physical display device. As a result, one texel may cover multiple pixels, or one pixel may cover multiple texels. In other words, texels and pixels do not necessarily line up one-to-one, which can lead to artifacts (such as distortion, moiré patterns, etc.) when a texture is applied to a model's surface.

image

Texture Magnification#

For example, if a 200 * 200 texture needs to be applied to a 500 * 500 plane, distortion is inevitable. Keep in mind that each texel is mapped onto the model through its (u, v) coordinates. In this case the texture is stretched, and on average one texel is spread over (500 / 200)^2 = 6.25 screen pixels (meaning those 6.25 pixels all share the same value, whether a color or a normal direction), which obviously makes the texture look blurry (as shown in the left image below).

image

Although the image above is not from my example, the effect on the left is exactly what happens when multiple pixels reuse the same texel value. Three.js exposes a corresponding filter option for this, THREE.NearestFilter.

NearestFilter returns the value of the texture element that is nearest (in Manhattan distance) to the specified texture coordinates.

The effect of NearestFilter is visibly poor, but it does not require additional calculations, making it suitable for less important content.

In addition, Three.js also provides the LinearFilter option for texture magnification, which is based on Bilinear Interpolation.

LinearFilter is the default and returns the weighted average of the four texture elements that are closest to the specified texture coordinates, and can include items wrapped or repeated from other parts of a texture, depending on the values of wrapS and wrapT, and on the exact mapping.

As the documentation says, the value at each pixel is the weighted average of the four nearest texels, which is exactly the idea behind bilinear interpolation (a detailed introduction can be found in GAMES101 p9, 0:28; not expanded here). This method gives the image a much softer gradient (see the middle part of the image above).
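
Switching between the two magnification filters in Three.js is a single property on the texture; a sketch with an assumed path:

```js
const tex = new THREE.TextureLoader().load('textures/pixel_art.png'); // assumed path

tex.magFilter = THREE.NearestFilter; // blocky but cheap: each pixel takes the nearest texel
// tex.magFilter = THREE.LinearFilter; // default: bilinear blend of the 4 nearest texels

const material = new THREE.MeshBasicMaterial({ map: tex });
```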

Texture Minification and mipmap#

The opposite of magnification is minification: multiple texels fall within a single pixel (as illustrated below).

image

From the image, it can be seen that one pixel covers multiple texels. If we apply the nearest approach and simply take the value of the texel closest to the pixel center (THREE.NearestFilter), most of the information is clearly lost and distortion appears. Can bilinear interpolation (THREE.LinearFilter) solve the problem, then? If only about four texels fall inside the pixel, it can; but once a pixel covers more than four texels, distortion returns unless the number of texels participating in the interpolation keeps growing. Bicubic interpolation, for example, improves precision by averaging the surrounding 16 samples, but it is easy to see that fighting distortion by piling on computation puts a heavy demand on performance. To reduce this runtime cost, graphics introduces a technique called mipmap: pre-filtered, downscaled copies of the texture are computed ahead of time and stored with it, so the appropriate mipmap level can be sampled directly during minification, greatly reducing the work done on the client. This is a classic trade of space for time; detailed introductions can be found in GAMES101 p9 0:43 or in the Real-Time Rendering 4th notes listed in the references.

image

In Three.js, the options related to mipmap include:

  • THREE.NearestMipmapNearestFilter: Selects the mipmap that best matches the size of the pixel to be shaded and generates the texture value using NearestFilter (the texel closest to the pixel center).
  • THREE.NearestMipmapLinearFilter: Selects the two mipmaps that best match the size of the pixel to be textured and generates texture values from each mipmap using NearestFilter. The final texture value is a weighted average of these two values.
  • THREE.LinearMipmapNearestFilter: Selects the mipmap that best matches the size of the pixel to be textured and generates the texture value using LinearFilter (the weighted average of the four texels closest to the pixel center).
  • THREE.LinearMipmapLinearFilter: (default) Selects the two mipmaps that best match the size of the pixel to be textured and generates texture values from each mipmap using LinearFilter. The final texture value is a weighted average of these two values.
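
A sketch of configuring minification (assumed path). Mipmaps are generated automatically by default; if you disable them (for example on render targets), the min filter must fall back to a non-mipmap variant:

```js
const tex = new THREE.TextureLoader().load('textures/floor_tiles.jpg'); // assumed path

tex.minFilter = THREE.LinearMipmapLinearFilter; // default: trilinear filtering across two mipmap levels
tex.generateMipmaps = true;                     // default: Three.js builds the mipmap chain for the texture

// Without mipmaps, a plain filter must be used instead:
// tex.generateMipmaps = false;
// tex.minFilter = THREE.LinearFilter;
```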

For official examples related to texture magnification/minification in Three.js, you can refer here.

References#

GAMES101 - Introduction to Modern Computer Graphics - Yan Lingqi

Discussion on Texel Density that is Easily Overlooked

Notes on "Real-Time Rendering 4th" - Chapter 6 Texture Mapping

Direct3D Graphics Learning Guide
