      (Repost) Single-buffered strip rendering to reduce latency in VR

      VR requires the support of many components in a modern phone: the sensors that record the motion of the head, the CPU driving the VR application (and everything else running in the background), the GPU doing the rendering work for the VR application and the calculations that create the VR-corrected image, and finally the display showing the transformed content to you, the observer.

      All those components need to work closely together to create the immersive experience everybody is talking about. In much of the publicly available material, the time to achieve this is called motion-to-photon latency. Although this is a very generic term, it describes the problem well: after a head motion, how long until the resulting change in view is picked up by my eyes and processed by my brain?

      In the real world this happens instantly; otherwise you would be walking into walls all the time. Creating that very same effect with a phone strapped into a head-mounted display, however, turns out to be quite hard. There are several reasons for this: computers run at fixed clocks, pipelining serializes the processing of the data, and of course rendering the final images simply takes time.

      As a GPU IP company we are of course interested in optimizing our graphics pipeline to reduce the time between the PowerVR graphics processor rendering the content and that content appearing on the display.

      Composition requirements on Android
      The final screen output on Android is rendered using composition. Usually there are several producers that each create images on their own: SystemUI, for example, which is responsible for the status bar and the navigation bar, and foreground applications, which render their content into their own buffers.

      Those buffers are taken by a consumer that arranges them, and this arrangement is then eventually displayed on the screen. Most of the time this consumer is a system application called SurfaceFlinger.

      SurfaceFlinger may decide to put those buffers (in the correct arrangement) directly on the screen if the display hardware supports that. This mode is called hardware composition and needs direct support from the display hardware. If the display doesn't support a certain arrangement, SurfaceFlinger falls back to a framebuffer the size of the screen and uses the GPU to render all images, with the correct state applied, into that framebuffer. That final composition render is then shown on the display as usual.

      [Figure: Illustrating the process of composition]

      Both the producers and the consumer are highly independent; in fact they are separate processes that talk to each other using inter-process communication. If one is not careful, all those different entities will constantly trample over each other.

      For example, a producer may render into a buffer which is currently being displayed to the user. If this happens, the user of the phone sees a visual corruption called tearing: one part of the screen still shows the old content while the other part already shows the new content. Tearing is easy to identify by its visible cut lines.

      To prevent tearing, two key elements are necessary. The first is proper synchronization between all parties and the second is double buffering.

      Synchronization is achieved using a framework called Android native syncs. Native syncs have a few properties that matter for how they are used on Android: they are system-global synchronization objects implemented in kernel space, and they are easily shared between processes by passing file descriptors around in user mode.

      They are also non-reusable binary syncs: they know only the states “not signaled” and “signaled”, and have a single state transition, from “not signaled” to “signaled” (never the other way around).

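      As a rough illustration, here is a minimal sketch (in C, under stated assumptions) of waiting on such a sync from user mode. The fence_fd is hypothetical; in practice it would be handed over by the GPU driver or by another process. A sync file signals POLLIN on its descriptor once the underlying fence reaches the “signaled” state, so a plain poll() is enough to wait on it.

          #include <poll.h>
          #include <unistd.h>

          /* Block until the fence behind fence_fd signals, or timeout_ms
           * elapses. Returns 0 on signal, -1 on timeout or error. */
          int wait_on_fence(int fence_fd, int timeout_ms)
          {
              struct pollfd p = { .fd = fence_fd, .events = POLLIN };

              if (poll(&p, 1, timeout_ms) <= 0)
                  return -1;                  /* timeout or poll error */

              /* Binary and non-reusable: once signaled it stays signaled,
               * so the descriptor can simply be closed after use. */
              close(fence_fd);
              return 0;
          }
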
      Double buffering allows the producer to render new content while the old content is still being used by the consumer. When the producer has finished rendering, the two buffers are switched and the consumer can present the new content while the producer starts rendering into the (now old) buffer again.

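      A minimal sketch of that handshake, with render_into() and present() as hypothetical stand-ins for the real GPU render and display scan-out paths:

          #include <stdio.h>

          typedef struct { int frame_id; } Buffer;

          /* Hypothetical stand-ins for the real render/scan-out calls. */
          static void render_into(Buffer *b, int frame) { b->frame_id = frame; }
          static void present(const Buffer *b) { printf("showing frame %d\n", b->frame_id); }

          int main(void)
          {
              Buffer buffers[2] = { {0}, {0} };
              int front = 0, back = 1;  /* consumer reads front, producer writes back */

              for (int frame = 1; frame <= 3; ++frame) {
                  render_into(&buffers[back], frame); /* producer fills the back buffer */
                  /* ...wait on the render-complete fence here before swapping... */
                  int tmp = front; front = back; back = tmp;  /* switch at vsync */
                  present(&buffers[front]);           /* consumer shows the new front */
              }
              return 0;
          }
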
      Both mechanisms are necessary for smooth output of the user interface on Android, but unfortunately they come at a cost: additional latency.

      Double buffering means the content rendered right now only becomes visible to the user one frame later, and the synchronization prevents access to anything that is currently on screen. To remove this particular latency, the idea of single buffering can be used: always render into the buffer that is on screen. Obviously, to make this work the synchronization also needs to be turned off.

      We implemented this feature in the KHR_mutable_render_buffer EGL extension ratified by the Khronos Group just this March.

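      A sketch of how an application might opt into single buffering through this extension. Error handling is elided, dpy, config and win are assumed to exist, and the config must have been chosen with EGL_MUTABLE_RENDER_BUFFER_BIT_KHR set in its EGL_SURFACE_TYPE:

          #include <EGL/egl.h>
          #include <EGL/eglext.h>

          EGLSurface make_single_buffered(EGLDisplay dpy, EGLConfig config,
                                          EGLNativeWindowType win)
          {
              /* Window surfaces are double-buffered (EGL_BACK_BUFFER) by default. */
              EGLSurface surface = eglCreateWindowSurface(dpy, config, win, NULL);

              /* Request single buffering: rendering then targets the buffer that
               * is on screen, and eglSwapBuffers() degrades to a flush. Per the
               * extension, the switch takes effect no later than the next swap. */
              eglSurfaceAttrib(dpy, surface, EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER);

              /* Sanity-check the requested mode. */
              EGLint mode = EGL_BACK_BUFFER;
              eglQuerySurface(dpy, surface, EGL_RENDER_BUFFER, &mode);
              return (mode == EGL_SINGLE_BUFFER) ? surface : EGL_NO_SURFACE;
          }
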
      This extension needs support from both the GPU driver and the Android operating system. As I explained earlier, Android goes to great lengths to prevent this mode of operation because it results in tearing.

      So how can we prevent tearing when running in the new single buffer mode?

      那么,在新的單緩沖區(qū)模式下運(yùn)行時(shí),如何防止撕裂呢?

      Screen technologies
      To answer this question we need to look at how displays work. The GPU driver posts a buffer to the display driver in a format the display can understand. There may be additional requirements on the buffer memory layout, such as particular stride or tiling alignments.

      The display normally shows this new buffer on the next vsync. Assuming the display scans out this memory from top to bottom, the vsync, or vertical sync, is the point in time when the “beam” goes from the bottom back to the top of the screen.

      Obviously there is no beam anymore nowadays, but the nomenclature is still in use. Because at this point in time the display is not scanning out from any buffer, it is safe to switch to a new one without introducing any tearing. The display repeats this procedure at a fixed interval defined by the screen period.

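      On Android all of this is hidden behind SurfaceFlinger and the display HAL, but on Linux-based stacks the underlying “switch on the next vsync” mechanism often looks like the DRM/KMS page flip sketched below. This is illustrative only; drm_fd, crtc_id and fb_id are assumed to have been set up elsewhere.

          #include <stdint.h>
          #include <stdio.h>
          #include <xf86drm.h>
          #include <xf86drmMode.h>

          /* Called once the flip has actually happened on the vsync. */
          static void on_flip(int fd, unsigned int seq,
                              unsigned int sec, unsigned int usec, void *data)
          {
              printf("new buffer on screen at %u.%06u\n", sec, usec);
          }

          /* Ask the display to show framebuffer fb_id on the next vsync. */
          int flip_on_vsync(int drm_fd, uint32_t crtc_id, uint32_t fb_id)
          {
              if (drmModePageFlip(drm_fd, crtc_id, fb_id,
                                  DRM_MODE_PAGE_FLIP_EVENT, NULL))
                  return -1;

              drmEventContext ev = { .version = DRM_EVENT_CONTEXT_VERSION,
                                     .page_flip_handler = on_flip };
              return drmHandleEvent(drm_fd, &ev);  /* blocks until the flip event */
          }
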
      A common screen period is 16.7 ms, which means the image on screen is updated 60 times per second.

      [Figure: The process of display scan out]

      Modern phones usually have a 16:9 aspect ratio, and their screen is mounted so that the scan-out direction is optimal for portrait mode, because that is how most users hold their phone on a daily basis.

      For VR this changes: the VR application runs in landscape, resulting in a scan-out direction from left to right.

      This is important when we now look at how to update the buffer while it is still on screen.

      Strip rendering
      When the display scans out the buffer, it does so at a constant rate. The idea of strip rendering is to change only the part of the buffer which is currently not being scanned out. Two different strategies are available. We could try to change the part of the screen where scan-out happens next.

      This is called beam racing, because we try to run in front of the beam and update the part of memory that is just about to be shown. The other strategy is called beam chasing and means updating the part behind the beam.

      Beam racing is again better for latency, but at the same time harder to implement. The GPU needs to guarantee that rendering finishes within a very tight time window. This guarantee can be hard to fulfil in a multi-process operating system where things happen in the background and the GPU may be rendering into other buffers at the same time. So the easier method to implement is beam chasing.

      [Figure: Beam chasing strip rendering]

      The VR application asks the display for the last vsync time and aligns itself to it, to make sure the render has the full screen period available before presentation. To calculate the time at which each strip starts to render, we first need to define the strip size. The strip size, and therefore the number of strips to use, is implementation defined: it depends on how fast the display scans out and how fast we can render each strip. In most cases two strips are optimal. In VR this has the added benefit of one strip per eye (remember we scan out from left to right), which makes the implementation much easier. In the following example we nevertheless use four strips to show how this works in general. As I mentioned earlier, we have 16.7 ms to render the full image.

      Consequently we have 4.17 ms to render each strip. The VR application waits 4.17 ms after the last vsync and then starts rendering the first strip. It then waits another 4.17 ms (8.34 ms after the last vsync) before rendering the second strip. This is repeated until all four strips are rendered, and the sequence starts over. Obviously we have to take into account how long each render submission takes and adjust our timing accordingly; using absolute times has proven to be the most accurate approach.

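      A minimal sketch of this timing loop, assuming last_vsync_ns is on the CLOCK_MONOTONIC timebase (e.g. reported by a vsync callback) and render_strip() is a hypothetical function that submits the draw for just one strip:

          #include <stdint.h>
          #include <time.h>

          #define FRAME_NS   16700000LL       /* 16.7 ms screen period (60 Hz) */
          #define NUM_STRIPS 4

          void render_strip(int strip);       /* hypothetical: draws one strip */

          static void sleep_until(int64_t deadline_ns)  /* absolute-time wait */
          {
              struct timespec ts = { (time_t)(deadline_ns / 1000000000LL),
                                     (long)(deadline_ns % 1000000000LL) };
              clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL);
          }

          void render_frame_in_strips(int64_t last_vsync_ns)
          {
              const int64_t strip_ns = FRAME_NS / NUM_STRIPS;  /* ~4.17 ms */

              for (int strip = 0; strip < NUM_STRIPS; ++strip) {
                  /* Strip N may be overwritten once the beam has finished
                   * scanning it out, i.e. (N + 1) strip periods after vsync. */
                  sleep_until(last_vsync_ns + (int64_t)(strip + 1) * strip_ns);
                  render_strip(strip);
              }
          }
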
      Final thoughts
      Reducing latency in VR applications on mobile devices is one of the major challenges facing developers today. This is different from desktop VR solutions, which benefit from higher processing power and usually don't encounter any thermal budget limitations.

      Make sure you also follow us on Twitter (@ImaginationTech) for more news and announcements.

      posted @ 2022-11-27 15:02  mikaelzero