Deep Learning Denoising Course: Summary
Hello everyone! This is the final lesson of the course; it lays out directions for future research and improvement. Thank you!
Join QQ group 106047770 to get related materials and discuss with the group owner.
This article series is a written recap of the online course; a matching article is published after each lesson.
All articles in the series can be found in the collection:
Deep Learning Denoising Course Series Collection
Future Research and Improvement Directions
1. Wait for the WebNN Polyfill to support a WebGPU backend
2. Use WebGPU GPUBuffer or GPUTexture objects as the network's inputs and outputs (a sketch follows after the references below)
References:
https://www.w3.org/TR/webnn/#programming-model-device-selection
https://www.w3.org/TR/webnn/#api-ml
https://www.w3.org/TR/webnn/#api-mlcontext-webgpu-interop
https://www.w3.org/TR/webnn/#api-mlcommandencoder
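To make directions 1 and 2 concrete, here is a minimal TypeScript sketch of the WebGPU interop path described in the spec sections above: an MLContext is created from the renderer's GPUDevice, and an MLCommandEncoder records the inference so that GPUBuffers serve directly as the network's input and output. WebNN is still a draft and these interfaces have changed across spec versions, so treat the exact names and signatures as assumptions taken from the linked sections; the tensor names and the helper function are illustrative.

```typescript
// Type stubs for the WebNN draft IDL (not yet in TypeScript's standard
// lib); the shapes follow the spec sections linked above. WebGPU types
// (GPUDevice, GPUBuffer, ...) come from the @webgpu/types package.
interface MLGraph {}
interface MLCommandEncoder {
  initializeGraph(graph: MLGraph): void;
  dispatch(
    graph: MLGraph,
    inputs: Record<string, GPUBuffer | GPUTexture>,
    outputs: Record<string, GPUBuffer | GPUTexture>
  ): void;
  finish(): GPUCommandBuffer;
}
interface MLContext {
  createCommandEncoder(): MLCommandEncoder;
}
interface ML {
  createContext(gpuDevice: GPUDevice): Promise<MLContext>;
}

// Run one denoising inference entirely on the GPU: the noisy frame
// lives in a GPUBuffer written by the renderer, and the denoised
// result lands in another GPUBuffer, with no CPU readback in between.
// The tensor names "input"/"output" are assumed to match the names
// used when the MLGraph was built.
async function denoiseOnGPU(
  device: GPUDevice,
  graph: MLGraph,
  noisyFrame: GPUBuffer,
  denoisedFrame: GPUBuffer
): Promise<void> {
  // Create an MLContext from the renderer's GPUDevice so that WebNN
  // and WebGPU share one device and queue (#api-mlcontext-webgpu-interop).
  const ml = (navigator as unknown as { ml: ML }).ml;
  const context = await ml.createContext(device);

  // Record initialization and dispatch into a command encoder, then
  // submit it to the WebGPU queue alongside other rendering work
  // (#api-mlcommandencoder).
  const encoder = context.createCommandEncoder();
  encoder.initializeGraph(graph);
  encoder.dispatch(graph, { input: noisyFrame }, { output: denoisedFrame });
  device.queue.submit([encoder.finish()]);
}
```

Once the WebNN Polyfill gains a WebGPU backend, the same code path should also work without a native WebNN implementation in the browser.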
3. Integrate with a path tracer
4. Use multiple frames to accumulate spp (temporal accumulation)
For training, add accumulated-spp data (refer to the WSPK dataset).
For inference, use accumulated-spp scene data as the input (a minimal sketch of the accumulation step is given after the references below).
References:
- Relevant description from the WSPK paper:
Besides, temporally accumulating consecutive 1-spp frames can effectively improve the temporal stability and increase the effective spp of each frame. We employ a temporal accumulation pre-processing step before sending the noisy inputs to the denoising pipeline just like [SKW∗17, KIM∗19, MZV∗20]. We first reproject the previous frame to the current frame with the motion vector and then judge their geometry consistency by world position and shading normal feature buffers. Current frame pixels that passed the consistency test are blended with their corresponding pixels in the previous frame, while the failed pixels remain original 1 spp.
- The WSPK implementation
- The BMFR implementation
For the ghosting artifacts caused by motion vectors, the following paper offers improvements: Temporally Reliable Motion Vectors for Real-time Ray Tracing.
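To make the quoted procedure concrete, below is a minimal CPU-side TypeScript sketch of the temporal accumulation pre-processing step: reproject via the motion vector, test geometry consistency on world position and shading normal, and blend the pixels that pass while leaving failed pixels at their original 1 spp. The data layout, thresholds, and blend weight are illustrative assumptions, not values from the WSPK or BMFR implementations (in practice this step runs in a shader).

```typescript
type Vec3 = [number, number, number];

interface GBufferPixel {
  color: Vec3;               // noisy 1-spp radiance
  worldPos: Vec3;            // world-space position
  normal: Vec3;              // shading normal
  motion: [number, number];  // screen-space motion vector, in pixels,
                             // assumed to point from previous to current
}

const POS_EPS = 0.01;  // world-position tolerance (assumed)
const NRM_EPS = 0.9;   // minimum normal dot product (assumed)
const ALPHA = 0.2;     // blend weight of the current frame (assumed)

function dot(a: Vec3, b: Vec3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function dist(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

function accumulate(
  curr: GBufferPixel[][],  // current frame G-buffer, indexed [y][x]
  prevColor: Vec3[][],     // accumulated color from the previous frame
  prevPos: Vec3[][],       // previous-frame world positions
  prevNrm: Vec3[][]        // previous-frame shading normals
): Vec3[][] {
  const h = curr.length, w = curr[0].length;
  const out: Vec3[][] = [];
  for (let y = 0; y < h; y++) {
    out.push([]);
    for (let x = 0; x < w; x++) {
      const p = curr[y][x];
      // Reproject: find where this pixel was in the previous frame.
      const px = Math.round(x - p.motion[0]);
      const py = Math.round(y - p.motion[1]);
      let blended = p.color;
      if (px >= 0 && px < w && py >= 0 && py < h) {
        // Geometry consistency test on world position and shading normal.
        const consistent =
          dist(p.worldPos, prevPos[py][px]) < POS_EPS &&
          dot(p.normal, prevNrm[py][px]) > NRM_EPS;
        if (consistent) {
          // Blend with history; failed pixels stay at the original 1 spp.
          const hcol = prevColor[py][px];
          blended = [
            ALPHA * p.color[0] + (1 - ALPHA) * hcol[0],
            ALPHA * p.color[1] + (1 - ALPHA) * hcol[1],
            ALPHA * p.color[2] + (1 - ALPHA) * hcol[2],
          ];
        }
      }
      out[y].push(blended);
    }
  }
  return out;
}
```

The output of this step, rather than the raw 1-spp frame, would then be fed to the denoising network, matching the pipeline described in the quote above.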