
      AmplifyImpostors Source Code Walkthrough

      First, let's walk through what happens after the Bake button is clicked:

      1. AmplifyImpostorInspector

      Clicking the button first sets bakeTextures = true:

      if( GUILayout.Button( TextureIcon, "buttonright", GUILayout.Height( 24 ) ) )
      {
          // now recalculates texture and mesh every time because mesh might have changed
          //if( m_instance.m_alphaTex == null )
          //{
              m_outdatedTexture = true;
              m_recalculatePreviewTexture = true;
          //}
      
          bakeTextures = true;
      }

       

      If the Billboard Mesh foldout is expanded, or bakeTextures is true, the following block executes:

      if( ( ( m_billboardMesh || m_recalculatePreviewTexture ) && m_instance.m_alphaTex == null ) || ( bakeTextures && m_recalculatePreviewTexture ) )
      {
          try
          {
              m_instance.RenderCombinedAlpha( m_currentData );
          }
          catch( Exception e )
          {
              Debug.LogWarning( "[AmplifyImpostors] Something went wrong with the mesh preview process, please contact support@amplify.pt with this log message.\n" + e.Message + e.StackTrace );
          }
      
          if( m_instance.m_cutMode == CutMode.Automatic )
              m_recalculateMesh = true;
          m_recalculatePreviewTexture = false;
      }

      If the cached m_alphaTex is null, RenderCombinedAlpha is called first to render the combined alpha texture, which is cached into m_alphaTex.

      GenerateAutomaticMesh is then called to generate the mesh points.

       

      1.1 RenderCombinedAlpha

      This function iterates over the model from every view angle, computes the bounds with the largest coverage, and updates the result into these two variables:

      m_xyFitSize = Mathf.Max(m_xyFitSize, frameBounds.size.x, frameBounds.size.y);
      m_depthFitSize = Mathf.Max(m_depthFitSize, frameBounds.size.z);
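
      To make the fitting step concrete, here is a minimal CPU-side sketch (an illustration only, not the plugin's actual code; the class name BoundsFitSketch and its parameters are hypothetical) of how a maximum-coverage extent can be computed by transforming the object bounds into each frame's view space and keeping the largest XY and depth sizes:

      using UnityEngine;

      // Sketch only: fit XY/depth extents across all frame rotations by transforming the
      // 8 corners of the object bounds into each frame's view space and encapsulating them.
      public static class BoundsFitSketch
      {
          public static void Fit(Bounds objectBounds, Quaternion[] frameRotations,
                                 out float xyFitSize, out float depthFitSize)
          {
              xyFitSize = 0f;
              depthFitSize = 0f;
              foreach (Quaternion rotation in frameRotations)
              {
                  Matrix4x4 view = Matrix4x4.Rotate(rotation).inverse;
                  Bounds frameBounds = new Bounds(view.MultiplyPoint(objectBounds.center), Vector3.zero);
                  for (int i = 0; i < 8; i++)
                  {
                      // Enumerate the 8 corners of the axis-aligned bounds.
                      Vector3 corner = objectBounds.center + Vector3.Scale(objectBounds.extents,
                          new Vector3((i & 1) == 0 ? -1f : 1f, (i & 2) == 0 ? -1f : 1f, (i & 4) == 0 ? -1f : 1f));
                      frameBounds.Encapsulate(view.MultiplyPoint(corner));
                  }
                  xyFitSize = Mathf.Max(xyFitSize, frameBounds.size.x, frameBounds.size.y);
                  depthFitSize = Mathf.Max(depthFitSize, frameBounds.size.z);
              }
          }
      }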

      Through the combinedAlphas variable in RenderImpostor, the alphas of the model from every view angle are accumulated onto a single RT, and this accumulated RT is then used to correct the original bounds:

      m_xyFitSize *= maxBound;
      m_depthFitSize *= maxBound;

       

      Next, the code determines which index within the incoming RT set corresponds to the alpha texture:

      bool standardRendering = m_data.Preset.BakeShader == null;
      int alphaIndex = m_data.Preset.AlphaIndex;
      if (standardRendering && m_renderPipelineInUse == RenderPipelineInUse.HDRP)
          alphaIndex = 3;
      else if (standardRendering)
          alphaIndex = 2;

       

      The alpha is generated from the silhouette in the depth map (any pixel with non-zero depth becomes opaque):

      RenderTexture tempTex = RenderTextureEx.GetTemporary(m_alphaGBuffers[3]);
      Graphics.Blit(m_alphaGBuffers[3], tempTex);
      packerMat.SetTexture("_A", tempTex);
      Graphics.Blit(m_trueDepth, m_alphaGBuffers[3], packerMat, 11);
      RenderTexture.ReleaseTemporary(tempTex);

      shader:

      Pass // copy depth 11
      {
          ZTest Always Cull Off ZWrite Off
      
          CGPROGRAM
          #pragma target 3.0
          #pragma vertex vert_img
          #pragma fragment frag
          #include "UnityCG.cginc"
      
          uniform sampler2D _MainTex;
          uniform sampler2D _A;
      
          float4 frag( v2f_img i ) : SV_Target
          {
              float depth = SAMPLE_RAW_DEPTH_TEXTURE( _MainTex, i.uv ).r;
              float3 color = tex2D( _A, i.uv ).rgb;
              float alpha = 1 - step( depth, 0 );
      
              return float4( color, alpha );
          }
          ENDCG
      }

      The combined alpha is stored separately, i.e. the alphas of all sheet cells stacked on top of each other, so that the vertices of the final generated billboard properly cover the silhouette.
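
      As a rough CPU-side illustration of what "stacked on top of each other" means (an assumption for clarity; the plugin accumulates on the GPU via the packer material, and CombinedAlphaSketch below is a hypothetical name), the result is essentially a per-pixel max over all frame alphas:

      using UnityEngine;

      // Sketch: union every frame's alpha into one mask, so the billboard mesh can be fitted
      // against the silhouette of all view angles at once.
      public static class CombinedAlphaSketch
      {
          public static float[] Combine(Texture2D[] frameAlphas)
          {
              int width = frameAlphas[0].width;
              int height = frameAlphas[0].height;
              float[] combined = new float[width * height];
              foreach (Texture2D frame in frameAlphas)
              {
                  Color[] pixels = frame.GetPixels();
                  for (int i = 0; i < combined.Length; i++)
                      combined[i] = Mathf.Max(combined[i], pixels[i].a); // per-pixel union of silhouettes
              }
              return combined;
          }
      }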

       

      1.2 GenerateAutomaticMesh

      This function mainly generates the vertex points, which are stored in the AmplifyImpostorAsset's ShapePoints.

      The vertex data is then consumed by GenerateMesh.
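
      For intuition, here is a simplified sketch of how shape points could be derived from the combined alpha (an illustration only; the plugin's GenerateAutomaticMesh has its own point-reduction and cut logic, and ShapePointsSketch is a hypothetical name). It simply takes the convex hull of all opaque pixels, normalized to 0..1:

      using System.Collections.Generic;
      using UnityEngine;

      // Sketch: convex hull (gift wrapping) of opaque pixels as candidate shape points.
      public static class ShapePointsSketch
      {
          public static List<Vector2> FromAlpha(Texture2D alphaTex, float threshold = 0.01f)
          {
              var opaque = new List<Vector2>();
              Color[] pixels = alphaTex.GetPixels();
              for (int y = 0; y < alphaTex.height; y++)
                  for (int x = 0; x < alphaTex.width; x++)
                      if (pixels[y * alphaTex.width + x].a > threshold)
                          opaque.Add(new Vector2((float)x / alphaTex.width, (float)y / alphaTex.height));

              if (opaque.Count < 3)
                  return opaque;

              // Start from the left-most point and wrap around the outline.
              Vector2 start = opaque[0];
              foreach (Vector2 p in opaque)
                  if (p.x < start.x || (p.x == start.x && p.y < start.y))
                      start = p;

              var hull = new List<Vector2>();
              Vector2 current = start;
              do
              {
                  hull.Add(current);
                  Vector2 next = opaque[0];
                  foreach (Vector2 candidate in opaque)
                  {
                      if (candidate == current)
                          continue;
                      float cross = (next.x - current.x) * (candidate.y - current.y)
                                  - (next.y - current.y) * (candidate.x - current.x);
                      if (next == current || cross < 0f) // keep the most clockwise candidate
                          next = candidate;
                  }
                  current = next;
              } while (current != start && hull.Count < 64); // safety cap

              return hull;
          }
      }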

       

      This step always sets triangulateMesh = true:

      if (m_recalculateMesh && m_instance.m_alphaTex != null)
      {
          m_recalculateMesh = false;
          m_instance.GenerateAutomaticMesh(m_currentData);
          triangulateMesh = true;
          EditorUtility.SetDirty(m_instance);
      }

       

      Next, the preview mesh is set:

      if (triangulateMesh)
          m_previewMesh = GeneratePreviewMesh(m_currentData.ShapePoints, true);

       

      Then the CutMode is switched to Manual, allowing the user to make further edits:

      if (autoChangeToManual /*&& Event.current.type == EventType.Layout*/ )
      {
          autoChangeToManual = false;
          m_instance.m_cutMode = CutMode.Manual;
          Event.current.Use();
      }

      Finally, execution reaches DelayedBake, which calls AmplifyImpostor's RenderAllDeferredGroups function.

       

      2. AmplifyImpostor

      Inside RenderAllDeferredGroups the early steps are much the same as before, until RenderImpostor is called:

      if (impostorMaps)
      {
          commandBuffer.SetViewProjectionMatrices(V, P);
          commandBuffer.SetViewport(new Rect((m_data.TexSize.x / hframes) * x, (m_data.TexSize.y / (vframes + (impostorType == ImpostorType.Spherical ? 1 : 0))) * y, (m_data.TexSize.x / m_data.HorizontalFrames), (m_data.TexSize.y / m_data.VerticalFrames)));

      When drawing, each cell of the sheet holds the model rendered from its corresponding angle, and SetViewport clips rendering to the target region.

      Different ImpostorType values also lay out hframes and vframes differently.

       

      The basic structure of the drawing code is as follows:

      for (int x = 0; x < hframes; x++) // number of horizontal frames, e.g. hframes = 8
      {
          for (int y = 0; y <= vframes; y++) // number of vertical frames
          {
              if (impostorMaps)
              {
                  commandBuffer.SetViewProjectionMatrices(V, P);
                  commandBuffer.SetViewport(new Rect((m_data.TexSize.x / hframes) * x, (m_data.TexSize.y / (vframes + (impostorType == ImpostorType.Spherical ? 1 : 0))) * y, (m_data.TexSize.x / m_data.HorizontalFrames), (m_data.TexSize.y / m_data.VerticalFrames)));
      
                  if (standardrendering && m_renderPipelineInUse == RenderPipelineInUse.HDRP)
                  {
                      commandBuffer.SetGlobalMatrix("_ViewMatrix", V);
                      commandBuffer.SetGlobalMatrix("_InvViewMatrix", V.inverse);
                      commandBuffer.SetGlobalMatrix("_ProjMatrix", P);
                      commandBuffer.SetGlobalMatrix("_ViewProjMatrix", P * V);
                      commandBuffer.SetGlobalVector("_WorldSpaceCameraPos", Vector4.zero);
                  }
              }
      
              for (int j = 0; j < validMeshesCount; j++)
              {
                  commandBuffer.DrawRenderer...
              }
          }
      }
      Graphics.ExecuteCommandBuffer(commandAlphaBuffer);

      The Y axis is drawn first (inner loop), then the X axis; each draw is recorded into the command buffer, and ExecuteCommandBuffer is executed once afterwards, outside the loops.
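
      As a small helper-style sketch mirroring the SetViewport call above (FrameViewportSketch is an illustrative name, not plugin API), the rect that a single frame cell occupies inside the sheet can be computed like this. For example, a 2048x2048 sheet with 8x8 frames maps frame (x = 3, y = 2) to Rect(768, 512, 256, 256):

      using UnityEngine;

      // Sketch: viewport rect of one frame cell inside the sheet. The spherical layout
      // reserves one extra vertical row, which is why the y position divides by vframes + 1.
      public static class FrameViewportSketch
      {
          public static Rect CellRect(int x, int y, int hframes, int vframes, Vector2 texSize, bool spherical)
          {
              float cellX = texSize.x / hframes * x;
              float cellY = texSize.y / (vframes + (spherical ? 1 : 0)) * y;
              return new Rect(cellX, cellY, texSize.x / hframes, texSize.y / vframes);
          }
      }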

      An example sheet from a test bake is attached for reference.

       

      2.1 Remapping

      The main job of this step is to pack the depth channel in.

       

      Merging the alpha:

      // Switch alpha with occlusion
      RenderTexture tempTex = RenderTexture.GetTemporary(m_rtGBuffers[0].width, m_rtGBuffers[0].height, m_rtGBuffers[0].depth, m_rtGBuffers[0].format);
      RenderTexture tempTex2 = RenderTexture.GetTemporary(m_rtGBuffers[3].width, m_rtGBuffers[3].height, m_rtGBuffers[3].depth, m_rtGBuffers[3].format);
      
      packerMat.SetTexture("_A", m_rtGBuffers[2]);
      Graphics.Blit(m_rtGBuffers[0], tempTex, packerMat, 4); //A.b
      packerMat.SetTexture("_A", m_rtGBuffers[0]);
      Graphics.Blit(m_rtGBuffers[3], tempTex2, packerMat, 4); //B.a
      Graphics.Blit(tempTex, m_rtGBuffers[0]);
      Graphics.Blit(tempTex2, m_rtGBuffers[3]);
      RenderTexture.ReleaseTemporary(tempTex);
      RenderTexture.ReleaseTemporary(tempTex2);

       

      shader:

      Pass // Copy Alpha 4
      {
          CGPROGRAM
          #pragma target 3.0
          #pragma vertex vert_img
          #pragma fragment frag
          #include "UnityCG.cginc"
      
          uniform sampler2D _MainTex;
          uniform sampler2D _A;
      
          fixed4 frag (v2f_img i ) : SV_Target
          {
              float alpha = tex2D( _A, i.uv ).a;
              fixed4 finalColor = (float4(tex2D( _MainTex, i.uv ).rgb , alpha));
              return finalColor;
          }
          ENDCG
      }

       

      This step merges the alpha of RT[2] into RT[0], and the alpha of RT[0] into RT[3].

       

      Next is PackDepth, which writes the depth into the A channel of RT[2]:

      // Pack Depth
      PackingRemapping(ref m_rtGBuffers[2], ref m_rtGBuffers[2], 0, packerMat, m_trueDepth);
      m_trueDepth.Release();
      m_trueDepth = null;

       

      RT[2] stores the normals, now with the depth packed into its alpha channel.

       

      The alpha of RT[0].

       

       

      FixAlbedo: m_rtGBuffers[1] corresponds to the extraTex parameter; when passed, it is bound to the _A sampler.

      // Fix Albedo
      PackingRemapping(ref m_rtGBuffers[0], ref m_rtGBuffers[0], 5, packerMat, m_rtGBuffers[1]);

      The alb.rgb / (1-spec) step is not entirely obvious; presumably it undoes the diffuse energy-conservation term applied when the albedo GBuffer is written (diffuse ≈ albedo * (1 - specular)), recovering the original albedo.

      Pass // Fix albedo 5
      {
          CGPROGRAM
          #pragma target 3.0
          #pragma vertex vert_img
          #pragma fragment frag
          #include "UnityCG.cginc"
      
          uniform sampler2D _MainTex;
          uniform sampler2D _A; //specular
      
          fixed4 frag (v2f_img i ) : SV_Target
          {
              float3 spec = tex2D( _A, i.uv ).rgb;
              float4 alb = tex2D( _MainTex, i.uv );
              alb.rgb = alb.rgb / (1-spec);
              return alb;
          }
          ENDCG
      }

       

      Saving as TGA (this path runs when TGA is ticked in the preset; otherwise PNG is saved):

      // TGA
      for (int i = 0; i < outputList.Count; i++)
      {
          if (outputList[i].ImageFormat == ImageFormat.TGA)
              PackingRemapping(ref m_rtGBuffers[i], ref m_rtGBuffers[i], 6, packerMat);
      }

       

      Edge dilation using the dilate shader:

      Shader dilateShader = AssetDatabase.LoadAssetAtPath<Shader>(AssetDatabase.GUIDToAssetPath(DilateGUID));
      Debug.Log(dilateShader, dilateShader);
      Material dilateMat = new Material(dilateShader);
      
      // Dilation
      for (int i = 0; i < outputList.Count; i++)
      {
          if (outputList[i].Active)
              DilateRenderTextureUsingMask(ref m_rtGBuffers[i], ref m_rtGBuffers[alphaIndex], m_data.PixelPadding, alphaIndex != i, dilateMat);
      }

       

      The shader dilates outward by one ring along the 8 surrounding directions:

      float4 frag_dilate( v2f_img i, bool alpha )
      {
          float2 offsets[ 8 ] =
          {
              float2( -1, -1 ),
              float2(  0, -1 ),
              float2( +1, -1 ),
              float2( -1,  0 ),
              float2( +1,  0 ),
              float2( -1, +1 ),
              float2(  0, +1 ),
              float2( +1, +1 )
          };

       

      In the function, this shader is invoked N times according to pixelBleed:

      for (int i = 0; i < pixelBleed; i++)
      {
          dilateMat.SetTexture("_MaskTex", dilatedMask);
      
          Graphics.Blit(mainTex, tempTex, dilateMat, alpha ? 1 : 0);
          Graphics.Blit(tempTex, mainTex);
      
          Graphics.Blit(dilatedMask, tempMask, dilateMat, 1);
          Graphics.Blit(tempMask, dilatedMask);
      }
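
      The same idea expressed on the CPU (a sketch under the assumption that one dilation pass copies covered neighbours outward and grows the mask by one pixel; DilateSketch is an illustrative name, and the real plugin does this per pass on the GPU):

      using UnityEngine;

      // Sketch: one dilation pass copies colour from covered 8-neighbours into uncovered pixels
      // and grows the mask by one pixel; repeating it 'passes' times bleeds edge colours outward.
      public static class DilateSketch
      {
          static readonly Vector2Int[] Offsets =
          {
              new Vector2Int(-1, -1), new Vector2Int(0, -1), new Vector2Int(1, -1),
              new Vector2Int(-1,  0),                        new Vector2Int(1,  0),
              new Vector2Int(-1,  1), new Vector2Int(0,  1), new Vector2Int(1,  1)
          };

          public static void Dilate(ref Color[] colors, ref bool[] mask, int width, int height, int passes)
          {
              for (int p = 0; p < passes; p++)
              {
                  var newColors = (Color[])colors.Clone();
                  var newMask = (bool[])mask.Clone();
                  for (int y = 0; y < height; y++)
                  {
                      for (int x = 0; x < width; x++)
                      {
                          int index = y * width + x;
                          if (mask[index])
                              continue; // already covered, keep as-is

                          Color sum = Color.clear;
                          int covered = 0;
                          foreach (Vector2Int o in Offsets)
                          {
                              int nx = x + o.x, ny = y + o.y;
                              if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                                  continue;
                              int neighbour = ny * width + nx;
                              if (!mask[neighbour])
                                  continue;
                              sum += colors[neighbour];
                              covered++;
                          }
                          if (covered > 0)
                          {
                              newColors[index] = sum / covered; // bleed neighbour colour outward
                              newMask[index] = true;            // grow the mask by one pixel
                          }
                      }
                  }
                  colors = newColors;
                  mask = newMask;
              }
          }
      }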

       

      The default is 32 iterations:

      [SerializeField]
      [Range( 0, 64 )]
      public int PixelPadding = 32;

       

       

      3. Shader Rendering

      The octahedron and spherical modes use two different runtime shaders. The octahedral mapping makes it possible to tile over any UV direction and interpolate along that direction, whereas doing the same for the spherical mapping is more expensive, so AmplifyImpostors' spherical mode does not interpolate between frames. For time reasons, only the spherical path is examined below.

       

      3.1 SphereImpostorVertex

      Start with the ForwardBase pass.

      The vertex stage executes SphereImpostorVertex( v.vertex, v.normal, o.frameUVs, o.viewPos );

      This function handles the billboard's positioning and returns the regular vertex data together with the frameUVs.

      It first obtains the camera position relative to the object, transformed into object space; _Offset is the offset of the actual model center, computed offline via a pixel-to-vertex conversion:

      float3 objectCameraPosition = mul( ai_WorldToObject, float4( worldCameraPos, 1 ) ).xyz - _Offset.xyz; //ray origin
      float3 objectCameraDirection = normalize( objectCameraPosition );

      A set of basis vectors is built:

      float3 upVector = float3( 0,1,0 );
      float3 objectHorizontalVector = normalize( cross( objectCameraDirection, upVector ) );
      float3 objectVerticalVector = cross( objectHorizontalVector, objectCameraDirection );

      The horizontal component uses atan2 (the variable is misnamed verticalAngle in the plugin source):

      float verticalAngle = frac( atan2( -objectCameraDirection.z, -objectCameraDirection.x ) * AI_INV_TWO_PI ) * sizeX + 0.5;

      The vertical component uses acos to convert the dot product into a linear angle:

      float verticalDot = dot( objectCameraDirection, upVector );
      float upAngle = ( acos( -verticalDot ) * AI_INV_PI ) + axisSizeFraction * 0.5f;

      The rotation matrix built from yRot is used as a fine-detail correction:

      float yRot = sizeFraction.x * AI_PI * verticalDot * ( 2 * frac( verticalAngle ) - 1 );
      
      // Billboard rotation
      float2 uvExpansion = vertex.xy;
      float cosY = cos( yRot );
      float sinY = sin( yRot );
      float2 uvRotator = mul( uvExpansion, float2x2( cosY, -sinY, sinY, cosY ) );

      Finally, sizeFraction scales the coordinates to the size of the corresponding cell within the sheet:

      float2 frameUV = ( ( uvExpansion * fractionsUVscale + 0.5 ) + relativeCoords ) * sizeFraction;
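
      Putting the angular mapping together on the CPU side (a simplified sketch for intuition; it ignores the frame-fraction offsets and the yRot correction, and SphericalFrameSketch is a hypothetical name):

      using UnityEngine;

      // Sketch: map an object-space camera direction to a (horizontal, vertical) frame coordinate
      // on the spherical sheet, using the same atan2 / acos remapping as the vertex function.
      public static class SphericalFrameSketch
      {
          public static Vector2 FrameCoords(Vector3 objectCameraDirection, int hframes, int vframes)
          {
              Vector3 dir = objectCameraDirection.normalized;

              // Horizontal: angle around Y, wrapped into [0, 1) and scaled to the frame count.
              float horizontal = Mathf.Repeat(Mathf.Atan2(-dir.z, -dir.x) / (2f * Mathf.PI), 1f) * hframes;

              // Vertical: angle from the up axis, acos turns the dot product into a linear angle.
              float vertical = Mathf.Acos(-Vector3.Dot(dir, Vector3.up)) / Mathf.PI * vframes;

              return new Vector2(horizontal, vertical);
          }
      }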

      3.2 SphereImpostorFragment

      Most of the fragment logic is routine; let's look at the depth handling.

      Viewed up close, the impostor shows genuine depth-based occlusion.

       

      Because the capture uses an orthographic camera, no device-depth to linear eye-depth conversion is involved.

       

      The depth output takes clipPos.z:

      fixed4 frag_surf (v2f_surf IN, out float outDepth : SV_Depth ) : SV_Target {
          ...
          IN.pos.zw = clipPos.zw;
          outDepth = IN.pos.z;

       

      _DepthSize reads the C# variable m_depthFitSize; at bake time this value is the orthographic camera's far plane:

      Matrix4x4 P = Matrix4x4.Ortho(-fitSize + m_pixelOffset.x, fitSize + m_pixelOffset.x, -fitSize + m_pixelOffset.y, fitSize + m_pixelOffset.y, 0, zFar: -m_depthFitSize);

      In the final depth computation, _DepthSize * 0.5 is presumably because the object center corresponds to z = 0.5, so the depth offset is applied relative to the object center; remapNormal.a has already been remapped to -1..1 together with the normal:

      float4 remapNormal = normalSample * 2 - 1; // object normal is remapNormal.rgb

      The final multiplication by length( ai_ObjectToWorld[ 2 ].xyz ) is effectively a multiplication by the Z-axis scale; without scaling, replacing it with 1 gives the same result:

      float depth = remapNormal.a * _DepthSize * 0.5 * length( ai_ObjectToWorld[ 2 ].xyz );
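
      A small worked example of this remapping (under the guess above that the object center corresponds to a packed value of 0.5; the values are made up for illustration):

      // Sketch: a packed depth of 0.75 with _DepthSize = 4 (object units) and no scaling
      // yields an offset of 1 unit in front of the billboard plane.
      float packedDepth = 0.75f;                                // value read from the normal texture's alpha
      float remapped    = packedDepth * 2f - 1f;                // -> 0.5, same *2-1 remap as the normal
      float depthSize   = 4f;                                   // example _DepthSize (m_depthFitSize)
      float zScale      = 1f;                                   // length(ai_ObjectToWorld[2].xyz) when unscaled
      float depthOffset = remapped * depthSize * 0.5f * zScale; // -> 1.0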

       

      After the computation, the color and depth are written out:

      fixed4 frag_surf (v2f_surf IN, out float outDepth : SV_Depth ) : SV_Target {
          UNITY_SETUP_INSTANCE_ID(IN);
          SurfaceOutputStandardSpecular o;
          UNITY_INITIALIZE_OUTPUT( SurfaceOutputStandardSpecular, o );
      
          float4 clipPos;
          float3 worldPos;
          SphereImpostorFragment( o, clipPos, worldPos, IN.frameUVs, IN.viewPos );
          IN.pos.zw = clipPos.zw;
      
          outDepth = IN.pos.z;
      
          UNITY_APPLY_DITHER_CROSSFADE(IN.pos.xy);
          return float4( _ObjectId, _PassValue, 1.0, 1.0 );
      }

       

      The ShadowCaster pass uses the same code, so the impostor casts shadows as well.

       

      posted @ 2024-11-28 10:43  HONT