Cesium In Depth: Shadow Maps

Introduction

Lazy again: I promised weekly updates, and yet another month has slipped by. The previous two posts covered viewshed analysis and video projection, and both of them relied on ShadowMap, that is, shadow mapping, so I felt it was worth writing a separate article dedicated to shadow maps. There is another reason as well: the video projection in that article was implemented with Cesium's built-in Entity approach, which involves no real technique, and at the end of it I said it could also be done with a ShadowMap, on a principle similar to viewshed analysis. So today I'll walk you through that implementation.

Expected Result

As usual, let's look at the expected result first. Since the topic is shadow maps, we shouldn't settle for projecting only a video texture, so I've included three screenshots, one for each of the three textures I used: image, video, and color. Some of you will be surprised to notice that the color projection looks exactly like viewshed analysis. Heh, yes, because the underlying principle is the same.

How It Works

As mentioned above, the implementation principle is the same as for viewshed analysis. For the concepts involved, such as ShadowMap, Frustum, and Camera, please refer to the earlier viewshed analysis post in this series (Cesium深入浅出之可视域分析); I won't repeat them here. Just one brief point: shadow-map projection supports different textures, so all we have to do is create a ShadowMap and then feed the pipeline whatever type of Texture we want.
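The shadow-map step itself is only a few lines. The sketch below follows the pattern from the viewshed article and assumes a #lightCamera has already been placed at #viewPosition looking towards #viewPositionEnd; keep in mind that ShadowMap is an undocumented internal Cesium class (imported the same way as Texture and PostProcessStage in the class code), so the option names may differ between Cesium versions. This is a minimal sketch rather than the full implementation.

// A minimal sketch: ShadowMap is an internal, undocumented Cesium class, so these
// option names may vary between versions. #lightCamera is assumed to sit at
// #viewPosition and look towards #viewPositionEnd, as in the viewshed article.
#createShadowMap() {
    this.#shadowMap = new ShadowMap({
        context: this.viewer.scene.context, // internal rendering context
        lightCamera: this.#lightCamera,     // the "projector" camera
        enabled: true,
        isPointLight: false,                // a spot-light style frustum, not a point light
        cascadesEnabled: false              // a single frustum, no cascades
    });
}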

Implementation

The implementation flow is also roughly the same as for viewshed analysis: class → create the Camera → create the ShadowMap → create the PostProcessStage → create the Frustum, with just one extra step for setting the Texture. The core of it, of course, lives in the shader.
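To make that sequence concrete, here is a hedged sketch of how a refresh step could chain the pieces together; the private method names are placeholders for illustration, not necessarily the ones used in the actual class.

// A hedged sketch of the overall flow; the private method names are placeholders.
#refresh() {
    this.#clear();               // tear down the old camera, shadow map, stage and frustum
    this.#createLightCamera();   // camera at #viewPosition looking towards #viewPositionEnd
    this.#createShadowMap();     // shadow map rendered from that camera (see the sketch above)
    this.#addPostProcessStage(); // full-screen pass that projects the texture
    this.#drawFrustum();         // optional frustum outline for debugging
}

The texture itself is created separately by activeVideo or activePicture, which the constructor calls based on the texture type, as shown in the next section.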

Because the code overlaps heavily with the viewshed article, I won't paste all of it here, only the core parts. If you have any questions, feel free to leave a comment, message me, or ask in the group; I'll answer whenever I see them.

Constructor

// Field declarations
/** Texture type: VIDEO, IMAGE or COLOR */
#textureType;
/** Texture */
#texture;
/** Position of the observation point */
#viewPosition;
/** Position of the far end of the view (optional if a view distance is set) */
#viewPositionEnd;
// ...

// Constructor
constructor(viewer, options) {
    super(viewer);

    // Texture type
    this.#textureType = options.textureType;
    // Texture URL (required when the texture is a video or an image)
    this.#textureUrl = options.textureUrl;
    // Position of the observation point
    this.#viewPosition = options.viewPosition;
    // Position of the far end of the view (optional if a view distance is set)
    this.#viewPositionEnd = options.viewPositionEnd;

    // ...

    switch (this.#textureType) {
        default:
        case VideoShed.TEXTURE_TYPE.VIDEO:
            this.activeVideo();
            break;
        case VideoShed.TEXTURE_TYPE.IMAGE:
            this.activePicture();
            break;
    }
    this.#refresh();
    this.viewer.scene.primitives.add(this);
}

Depending on the texture type, the constructor picks what to activate: video and image projection need an external texture file, whose path is set at initialization, while color projection needs no extra setup at all.
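Two small pieces referenced above are not listed in this article, so here is a hedged sketch of what they might look like: the TEXTURE_TYPE constant used by the switch (the numeric values are my assumption, chosen so the shader's helsing_textureType < 2 test near the end of this article separates video/image from color), and the update hook that a primitive added via scene.primitives needs in order to register its shadow map with the frame state every frame.

// Assumed texture-type constant; the values are chosen so the shader's
// "helsing_textureType < 2" test treats video and image alike and color separately.
static TEXTURE_TYPE = {
    VIDEO: 0,
    IMAGE: 1,
    COLOR: 2
};

// Because the instance is added to scene.primitives, Cesium calls update() every
// frame; pushing the shadow map into the frame state is what triggers the depth
// pass from the light camera. A sketch of that hook, not necessarily the exact code:
update(frameState) {
    this.#shadowMap && frameState.shadowMaps.push(this.#shadowMap);
}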

Video Texture

/**
 * Projects a video.
 *
 * @author Helsing
 * @date 2020/09/19
 * @param {String} textureUrl Video URL.
 */
activeVideo(textureUrl = undefined) {
    if (!textureUrl) {
        textureUrl = this.#textureUrl;
    } else {
        this.#textureUrl = textureUrl;
    }
    const video = this.#createVideoElement(textureUrl);
    const that = this;
    if (video /*&& !video.paused*/) {
        this.#activeVideoListener || (this.#activeVideoListener = function () {
            that.#texture && that.#texture.destroy();
            that.#texture = new Texture({
                context: that.viewer.scene.context,
                source: video,
                width: 1,
                height: 1,
                pixelFormat: PixelFormat.RGBA,
                pixelDatatype: PixelDatatype.UNSIGNED_BYTE
            });
        });
        that.viewer.clock.onTick.addEventListener(this.#activeVideoListener);
    }
}

The video texture is brought in through an HTML5 video element, which has to be created dynamically. Note, however, that releasing a video element is a problem in itself: the usual approaches never free it completely, so it's best not to create a new element every time.
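The #createVideoElement helper is not listed here, but following the advice above it should reuse one hidden element and only swap its source. Below is a hedged sketch of it, together with a matching deActiveVideo cleanup (called by activePicture in the next section); the element id and attribute choices are illustrative assumptions, not the exact implementation.

// A sketch of the #createVideoElement helper: reuse a single hidden <video> element
// and only change its source, to sidestep the release problem described above.
#createVideoElement(textureUrl) {
    let video = document.getElementById('helsing-video-shed-source');
    if (!video) {
        video = document.createElement('video');
        video.id = 'helsing-video-shed-source';
        video.style.display = 'none';    // off-screen; it only serves as a texture source
        video.muted = true;              // muted video is allowed to autoplay
        video.loop = true;
        video.autoplay = true;
        video.crossOrigin = 'anonymous'; // needed when the video is served cross-origin
        document.body.appendChild(video);
    }
    if (video.getAttribute('src') !== textureUrl) {
        video.src = textureUrl;
        video.play();
    }
    return video;
}

// A sketch of deActiveVideo: stop refreshing the texture from the video each tick.
deActiveVideo() {
    if (this.#activeVideoListener) {
        this.viewer.clock.onTick.removeEventListener(this.#activeVideoListener);
        this.#activeVideoListener = undefined;
    }
}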

Image Texture

/**
 * Projects an image.
 *
 * @author Helsing
 * @date 2020/09/19
 * @param {String} textureUrl Image URL.
 */
activePicture(textureUrl = undefined) {
    this.deActiveVideo();
    if (!textureUrl) {
        textureUrl = this.#textureUrl;
    } else {
        this.#textureUrl = textureUrl;
    }
    const that = this;
    const img = new Image();
    img.onload = function () {
        that.#textureType = VideoShed.TEXTURE_TYPE.IMAGE;
        that.#texture = new Texture({
            context: that.viewer.scene.context,
            source: img
        });
    };
    img.onerror = function () {
        console.log('Failed to load image: ' + textureUrl);
    };
    img.src = textureUrl;
}

The image texture is loaded through an Image object; the thing to note is that the texture must be created inside the asynchronous onload callback, once the image has actually loaded.
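To tie the pieces together, here is a hedged usage sketch. The option names follow the constructor shown earlier; the coordinates and file paths are placeholders, and the positions are assumed to be Cartesian3 values as in the viewshed article.

// Hedged usage sketch: option names follow the constructor above; coordinates and
// file paths are placeholders.
const videoShed = new VideoShed(viewer, {
    textureType: VideoShed.TEXTURE_TYPE.IMAGE,
    textureUrl: './images/test.png',
    viewPosition: Cesium.Cartesian3.fromDegrees(116.39, 39.91, 50),
    viewPositionEnd: Cesium.Cartesian3.fromDegrees(116.394, 39.914, 10)
});

// The projected content can be switched at runtime with the methods shown above.
videoShed.activeVideo('./videos/test.mp4');
videoShed.activePicture('./images/another.png');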


PostProcessStage

/**
 * Creates the post-process stage.
 *
 * @author Helsing
 * @date 2020/09/19
 * @ignore
 */
#addPostProcessStage() {
    const that = this;
    const bias = that.#shadowMap._isPointLight ? that.#shadowMap._pointBias : that.#shadowMap._primitiveBias;
    const postStage = new PostProcessStage({
        fragmentShader: VideoShedFS,
        uniforms: {
            helsing_textureType: function () {
                return that.#textureType;
            },
            helsing_texture: function () {
                return that.#texture;
            },
            helsing_alpha: function () {
                return that.#alpha;
            },
            helsing_visibleAreaColor: function () {
                return that.#visibleAreaColor;
            },
            helsing_invisibleAreaColor: function () {
                return that.#invisibleAreaColor;
            },
            shadowMap_texture: function () {
                return that.#shadowMap._shadowMapTexture;
            },
            shadowMap_matrix: function () {
                return that.#shadowMap._shadowMapMatrix;
            },
            shadowMap_lightPositionEC: function () {
                return that.#shadowMap._lightPositionEC;
            },
            shadowMap_texelSizeDepthBiasAndNormalShadingSmooth: function () {
                const t = new Cartesian2();
                t.x = 1 / that.#shadowMap._textureSize.x;
                t.y = 1 / that.#shadowMap._textureSize.y;
                return Cartesian4.fromElements(t.x, t.y, bias.depthBias, bias.normalShadingSmooth, that.#combinedUniforms1);
            },
            shadowMap_normalOffsetScaleDistanceMaxDistanceAndDarkness: function () {
                return Cartesian4.fromElements(bias.normalOffsetScale, that.#shadowMap._distance, that.#shadowMap.maximumDistance, that.#shadowMap._darkness, that.#combinedUniforms2);
            },
        }
    });
    this.#postProcessStage = this.viewer.scene.postProcessStages.add(postStage);
}

The key part of the post-process stage is passing uniforms into the shader: the texture type, the texture itself, the visible-area and invisible-area colors, and so on. And finally, the main event: the shader code.

export default `
varying vec2 v_textureCoordinates;
uniform sampler2D colorTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowMap_texture;
uniform mat4 shadowMap_matrix;
uniform vec4 shadowMap_lightPositionEC;
uniform vec4 shadowMap_normalOffsetScaleDistanceMaxDistanceAndDarkness;
uniform vec4 shadowMap_texelSizeDepthBiasAndNormalShadingSmooth;
uniform int helsing_textureType;
uniform sampler2D helsing_texture;
uniform float helsing_alpha;
uniform vec4 helsing_visibleAreaColor;
uniform vec4 helsing_invisibleAreaColor;

// Reconstruct the eye-space position from screen coordinates and depth.
vec4 toEye(in vec2 uv, in float depth){
    vec2 xy = vec2((uv.x * 2.0 - 1.0), (uv.y * 2.0 - 1.0));
    vec4 posInCamera = czm_inverseProjection * vec4(xy, depth, 1.0);
    posInCamera = posInCamera / posInCamera.w;
    return posInCamera;
}
float getDepth(in vec4 depth){
    float z_window = czm_unpackDepth(depth);
    z_window = czm_reverseLogDepth(z_window);
    float n_range = czm_depthRange.near;
    float f_range = czm_depthRange.far;
    return (2.0 * z_window - n_range - f_range) / (f_range - n_range);
}
float _czm_sampleShadowMap(sampler2D shadowMap, vec2 uv){
    return texture2D(shadowMap, uv).r;
}
float _czm_shadowDepthCompare(sampler2D shadowMap, vec2 uv, float depth){
    return step(depth, _czm_sampleShadowMap(shadowMap, uv));
}
// 3x3 PCF visibility test, adapted from Cesium's built-in shadow shaders.
float _czm_shadowVisibility(sampler2D shadowMap, czm_shadowParameters shadowParameters){
    float depthBias = shadowParameters.depthBias;
    float depth = shadowParameters.depth;
    float nDotL = shadowParameters.nDotL;
    float normalShadingSmooth = shadowParameters.normalShadingSmooth;
    float darkness = shadowParameters.darkness;
    vec2 uv = shadowParameters.texCoords;
    depth -= depthBias;
    vec2 texelStepSize = shadowParameters.texelStepSize;
    float radius = 1.0;
    float dx0 = -texelStepSize.x * radius;
    float dy0 = -texelStepSize.y * radius;
    float dx1 = texelStepSize.x * radius;
    float dy1 = texelStepSize.y * radius;
    float visibility = (_czm_shadowDepthCompare(shadowMap, uv, depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx0, dy0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(0.0, dy0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx1, dy0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx0, 0.0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx1, 0.0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx0, dy1), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(0.0, dy1), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx1, dy1), depth)
    ) * (1.0 / 9.0);
    return visibility;
}
vec3 pointProjectOnPlane(in vec3 planeNormal, in vec3 planeOrigin, in vec3 point){
    vec3 v01 = point - planeOrigin;
    float d = dot(planeNormal, v01);
    return (point - planeNormal * d);
}

void main(){
    const float PI = 3.141592653589793;
    vec4 color = texture2D(colorTexture, v_textureCoordinates);
    vec4 currentDepth = texture2D(depthTexture, v_textureCoordinates);
    // Sky: nothing to project onto.
    if(currentDepth.r >= 1.0){
        gl_FragColor = color;
        return;
    }
    float depth = getDepth(currentDepth);
    vec4 positionEC = toEye(v_textureCoordinates, depth);
    vec3 normalEC = vec3(1.0);
    czm_shadowParameters shadowParameters;
    shadowParameters.texelStepSize = shadowMap_texelSizeDepthBiasAndNormalShadingSmooth.xy;
    shadowParameters.depthBias = shadowMap_texelSizeDepthBiasAndNormalShadingSmooth.z;
    shadowParameters.normalShadingSmooth = shadowMap_texelSizeDepthBiasAndNormalShadingSmooth.w;
    shadowParameters.darkness = shadowMap_normalOffsetScaleDistanceMaxDistanceAndDarkness.w;
    shadowParameters.depthBias *= max(depth * 0.01, 1.0);
    vec3 directionEC = normalize(positionEC.xyz - shadowMap_lightPositionEC.xyz);
    float nDotL = clamp(dot(normalEC, -directionEC), 0.0, 1.0);
    // Transform the fragment into the shadow map's clip space; outside [0,1] means outside the frustum.
    vec4 shadowPosition = shadowMap_matrix * positionEC;
    shadowPosition /= shadowPosition.w;
    if (any(lessThan(shadowPosition.xyz, vec3(0.0))) || any(greaterThan(shadowPosition.xyz, vec3(1.0)))){
        gl_FragColor = color;
        return;
    }
    shadowParameters.texCoords = shadowPosition.xy;
    shadowParameters.depth = shadowPosition.z;
    shadowParameters.nDotL = nDotL;
    float visibility = _czm_shadowVisibility(shadowMap_texture, shadowParameters);

    if (helsing_textureType < 2){ // video or image mode
        vec4 videoColor = texture2D(helsing_texture, shadowPosition.xy);
        if (visibility == 1.0){
            gl_FragColor = mix(color, vec4(videoColor.xyz, 1.0), helsing_alpha * videoColor.a);
        }
        else{
            if(abs(shadowPosition.z - 0.0) < 0.01){
                return;
            }
            gl_FragColor = color;
        }
    }
    else{ // viewshed (color) mode
        if (visibility == 1.0){
            gl_FragColor = mix(color, helsing_visibleAreaColor, helsing_alpha);
        }
        else{
            if(abs(shadowPosition.z - 0.0) < 0.01){
                return;
            }
            gl_FragColor = mix(color, helsing_invisibleAreaColor, helsing_alpha);
        }
    }
}`;

As you can see, the shader code is not complicated, and most of it comes straight from Cesium's own built-in shadow shaders. The part worth focusing on is the branch at the end: in video or image mode the scene color is blended with the sampled texture, while in viewshed mode it is blended with the visible-area or invisible-area color.

Summary

In keeping with my usual style, this article is mostly hands-on material, ready to use out of the bag, with a bit of theory sprinkled in. I believe learning has to start with something you can actually bite into; only what you can get down can be chewed and digested, otherwise it's like being hooked straight up to an IV drip, which doesn't do the body much good. Admittedly the theory is a bit thin this time, mainly because... typing is exhausting T_T. All right, that's it for today. Next up: an info box, the kind that follows the map, not just a plain popup. If you're interested in Cesium, come join QQ group 854943530 and discuss; there's plenty of good material you won't want to miss.

Original post: https://www.cnblogs.com/HelsingWang/p/13884954.html