
      Sending Large Blocks of Data (translated from gafferongames)

      https://gafferongames.com/post/sending_large_blocks_of_data/

      In the previous article we implemented packet fragmentation and reassembly so we can send packets larger than MTU.

      This approach works great when the data block you’re sending is time critical and can be dropped, but in other cases you need to send large blocks of data quickly and reliably over packet loss, and you need the data to get through.

      In this situation, a different technique gives much better results.


      Background

      It’s common for servers to send a large block of data to the client on connect, for example, the initial state of the game world for late join.

      Let’s assume this data is 256k in size and the client needs to receive it before they can join the game. The client is stuck behind a load screen waiting for the data, so obviously we want it to be transmitted as quickly as possible.

      If we send the data with the technique from the previous article, we get packet loss amplification because a single dropped fragment results in the whole packet being lost. The effect of this is actually quite severe. Our example block split into 256 fragments and sent over 1% packet loss now has a whopping 92.4% chance of being dropped!
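      To make the arithmetic concrete, here is a standalone calculation (my own sketch, not from the article) showing where that 92.4% figure comes from: the reassembled packet only arrives if every one of its 256 fragments arrives.

      #include <cmath>
      #include <cstdio>

      int main()
      {
          const double fragmentLossRate = 0.01;   // 1% packet loss per fragment
          const int numFragments = 256;           // 256k block split into 1k fragments

          // The whole packet is delivered only if all of its fragments are delivered.
          const double probDelivered = std::pow( 1.0 - fragmentLossRate, numFragments );
          const double probDropped = 1.0 - probDelivered;

          printf( "chance the whole block is dropped: %.1f%%\n", probDropped * 100.0 );   // ~92.4%
          return 0;
      }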

      Since we just need the data to get across, we have no choice but to keep sending it until it gets through. On average, we have to send the block 10 times before it’s received. You may laugh but this actually happened on a AAA game I worked on!

      To fix this, I implemented a new system for sending large blocks, one that handles packet loss by resending fragments until they are acked. Then I took the problematic large blocks and piped them through this system, fixing a bunch of players stalling out on connect, while continuing to send time critical data (snapshots) via packet fragmentation and reassembly.


       

      Chunks and Slices

      In this new system blocks of data are called chunks. Chunks are split up into slices. This name change keeps the chunk system terminology (chunks/slices) distinct from packet fragmentation and reassembly (packets/fragments).

      The basic idea is that slices are sent over the network repeatedly until they all get through. Since we are implementing this over UDP, what is simple in concept becomes a little more complicated in implementation, because we have to build in our own basic reliability system so the sender knows which slices have been received.

      This reliability gets quite tricky if we have a bunch of different chunks in flight, so we’re going to make a simplifying assumption up front: we’re only going to send one chunk over the network at a time. This doesn’t mean the sender can’t have a local send queue for chunks, just that in terms of network traffic there’s only ever one chunk in flight at any time.

      This makes intuitive sense because the whole point of the chunk system is to send chunks reliably and in-order. If you are for some reason sending chunk 0 and chunk 1 at the same time, what’s the point? You can’t process chunk 1 until chunk 0 comes through, because otherwise it wouldn’t be reliable-ordered.

      That said, if you dig a bit deeper you’ll see that sending one chunk at a time does introduce a small trade-off: from the receiver’s point of view it adds a delay of RTT between chunk n being received and the send starting for chunk n+1.

      This trade-off is totally acceptable for the occasional sending of large chunks like data sent once on client connect, but it’s definitely not acceptable for data sent 10 or 20 times per-second like snapshots. So remember, this system is useful for large, infrequently sent blocks of data, not for time critical data.


       

      Packet Structure

      There are two sides to the chunk system, the sender and the receiver.

      The sender is the side that queues up the chunk and sends slices over the network. The receiver is what reads those slice packets and reassembles the chunk on the other side. The receiver is also responsible for communicating back to the sender which slices have been received via acks.

      The netcode I work on is usually client/server, and in this case I usually want to be able to send blocks of data from the server to the client and from the client to the server. In that case, there are two senders and two receivers, a sender on the client corresponding to a receiver on the server and vice-versa.

      Think of the sender and receiver as end points for this chunk transmission protocol that define the direction of flow. If you want to send chunks in a different direction, or even extend the chunk sender to support peer-to-peer, just add sender and receiver end points for each direction you need to send chunks.


      Traffic over the network for this system is sent via two packet types:

      • Slice packet - contains a slice of a chunk up to 1k in size.
      • Ack packet - a bitfield indicating which slices have been received so far.

      The slice packet is sent from the sender to the receiver. It is the payload packet that gets the chunk data across the network and is designed so each packet fits neatly under a conservative MTU of 1200 bytes. Each slice is a maximum of 1k and there is a maximum of 256 slices per-chunk, therefore the largest data you can send over the network with this system is 256k.


      const int SliceSize = 1024;
      const int MaxSlicesPerChunk = 256;
      const int MaxChunkSize = SliceSize * MaxSlicesPerChunk;
      
      struct SlicePacket : public protocol2::Packet
      {
          uint16_t chunkId;
          int sliceId;
          int numSlices;
          int sliceBytes;
          uint8_t data[SliceSize];
       
          template <typename Stream> bool Serialize( Stream & stream )
          {
              serialize_bits( stream, chunkId, 16 );
              serialize_int( stream, sliceId, 0, MaxSlicesPerChunk - 1 );
              serialize_int( stream, numSlices, 1, MaxSlicesPerChunk );
              if ( sliceId == numSlices - 1 )
              {
                  serialize_int( stream, sliceBytes, 1, SliceSize );
              }
              else if ( Stream::IsReading )
              {
                  sliceBytes = SliceSize;
              }
              serialize_bytes( stream, data, sliceBytes );
              return true;
          }
      };

      There are two points I’d like to make about the slice packet. The first is that even though there is only ever one chunk in flight over the network, it’s still necessary to include the chunk id (0,1,2,3, etc…) because packets sent over UDP can be received out of order.

      Second point. Due to the way chunks are sliced up we know that all slices except the last one must be SliceSize (1024 bytes). We take advantage of this to save a small bit of bandwidth sending the slice size only in the last slice, but there is a trade-off: the receiver doesn’t know the exact size of a chunk until it receives the last slice.
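      To illustrate that slicing rule, here is a small sketch (mine, not the article’s; it uses the SliceSize and MaxChunkSize constants from the listing above, and the helper name is an assumption) of how a sender works out the slice count and the size of the final slice:

      #include <cassert>

      // Every slice is SliceSize bytes except (possibly) the last one.
      void CalculateSliceLayout( int chunkSize, int & numSlices, int & finalSliceBytes )
      {
          assert( chunkSize > 0 && chunkSize <= MaxChunkSize );
          numSlices = ( chunkSize + SliceSize - 1 ) / SliceSize;          // round up
          finalSliceBytes = chunkSize - ( numSlices - 1 ) * SliceSize;    // 1..SliceSize
          // e.g. chunkSize = 200000 -> numSlices = 196, finalSliceBytes = 320
      }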

      The other packet sent by this system is the ack packet. This packet is sent in the opposite direction, from the receiver back to the sender. This is the reliability part of the chunk network protocol. Its purpose is to let the sender know which slices have been received.


      struct AckPacket : public protocol2::Packet 
      { 
          uint16_t chunkId; 
          int numSlices; 
          bool acked[MaxSlicesPerChunk]; 
      
          template <typename Stream> bool Serialize( Stream & stream )
          { 
              serialize_bits( stream, chunkId, 16 ); 
              serialize_int( stream, numSlices, 1, MaxSlicesPerChunk ); 
              for ( int i = 0; i < numSlices; ++i ) 
                  serialize_bool( stream, acked[i] ); 
              return true; 
          } 
      };

      Ack is short for “acknowledgments”. When the receiver sends an ack for slice 100, it means “I have received slice 100”. This is critical information for the sender because:

      1. It lets the sender know which slices have been received, so it knows when to stop sending.

      2. It lets the sender use bandwidth more efficiently by only resending slices that have not been acked.

      Looking a bit closer at the ack packet, it might seem redundant at first: why does every ack packet carry ack information for all slices?

      The reason is that ack packets are also sent over UDP and can be lost. If an ack packet only carried partial information and that packet was lost, the sender and receiver could end up out of sync (desynced) about which slices have been received, which would break the chunk transfer.

      We do need some degree of reliability for acks, but we don’t want to build an ack system for acks, because that would get complicated and painful very quickly.

      Fortunately, the worst-case ack bitfield is only 256 bits (32 bytes), so we simply send the complete ack state in every ack packet. When an ack packet is received, any slice it marks as received that is not yet marked acked locally is immediately marked as acked.

      This bias from not-acked towards acked, like a fuse that burns in only one direction, lets us gracefully handle ack packets arriving out of order.

      Sender Implementation

      Let’s get started with the implementation of the sender.

      The strategy for the sender is:

      • Keep sending slices until all slices are acked
      • Don’t resend slices that have already been acked

      We use the following data structure for the sender:


      class ChunkSender
      {
          bool sending;
          uint16_t chunkId;
          int chunkSize;
          int numSlices;
          int numAckedSlices;
          int currentSliceId;
          bool acked[MaxSlicesPerChunk];
          uint8_t chunkData[MaxChunkSize];
          double timeLastSent[MaxSlicesPerChunk];
      };

      As mentioned before, only one chunk is sent at a time, so there is a ‘sending’ state which is true if we are currently sending a chunk, false if we are in an idle state ready for the user to send a chunk. In this implementation, you can’t send another chunk while the current chunk is still being sent over the network. If you don’t like this, stick a queue in front of the sender.

      Next, we have the id of the chunk we are currently sending, or, if we are not sending a chunk, the id of the next chunk to be sent, followed by the size of the chunk and the number of slices it has been split into. We also track, per-slice, whether that slice has been acked, which lets us count the number of slices that have been acked so far while ignoring redundant acks. A chunk is considered fully received from the sender’s point of view when numAckedSlices == numSlices.
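      As a rough sketch of how that might look (my own code, not the article’s source; it is written against the ChunkSender and AckPacket structures shown in this article, as a free function that assumes the fields are accessible), processing an incoming ack packet on the sender could work like this:

      // Slices only ever go from not-acked to acked, so duplicate and
      // out-of-order ack packets are harmless.
      void ProcessAckPacket( ChunkSender & sender, const AckPacket & ack )
      {
          if ( !sender.sending )
              return;                                    // no chunk in flight

          if ( ack.chunkId != sender.chunkId )
              return;                                    // ack for a different chunk

          if ( ack.numSlices != sender.numSlices )
              return;                                    // sanity check

          for ( int i = 0; i < sender.numSlices; ++i )
          {
              if ( ack.acked[i] && !sender.acked[i] )
              {
                  sender.acked[i] = true;
                  sender.numAckedSlices++;               // counted once per slice
              }
          }

          if ( sender.numAckedSlices == sender.numSlices )
              sender.sending = false;                    // chunk fully acked, sender goes idle
      }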

      We also keep track of the current slice id for the algorithm that determines which slices to send, which works like this. At the start of a chunk send, start at slice id 0 and work from left to right, wrapping back around to 0 again when you go past the last slice. Eventually, you stop iterating because you’ve run out of bandwidth to send slices. At this point, remember the current slice index via current slice id so you can pick up where you left off next time. This last part is important because it distributes sends across all slices, not just the first few.


      Now let’s discuss bandwidth limiting. Obviously you don’t just blast slices out continuously as you’d flood the connection in no time, so how do we limit the sender bandwidth? My implementation works something like this: as you walk across slices and consider each slice you want to send, estimate roughly how many bytes the slice packet will take eg: roughly slice bytes + some overhead for your protocol and UDP/IP header. Then compare the amount of bytes required vs. the available bytes you have to send in your bandwidth budget. If you don’t have enough bytes accumulated, stop. Otherwise, subtract the bytes required to send the slice and repeat the process for the next slice.

      Where does the available bytes in the send budget come from? Each frame before you update the chunk sender, take your target bandwidth (eg. 256kbps), convert it to bytes per-second, and add it multiplied by delta time (dt) to an accumulator.

      A conservative send rate of 256kbps means you can send 32000 bytes per-second, so add 32000 * dt to the accumulator. A middle ground of 512kbit/sec is 64000 bytes per-second. A more aggressive 1mbit is 125000 bytes per-second. This way each update you accumulate a number of bytes you are allowed to send, and when you’ve sent all the slices you can given that budget, any bytes left over stick around for the next time you try to send a slice.

      One subtle point with the chunk sender is that it’s a good idea to implement some minimum resend delay per-slice, otherwise you get situations where, for small chunks or the last few slices of a chunk, the same few slices get spammed over the network.

      For this reason we maintain an array of last send time per-slice. One option for this resend delay is to maintain an estimate of RTT and to only resend a slice if it hasn’t been acked within RTT * 1.25 of its last send time. Or, you could just resend the slice if it hasn’t been sent in the last 100ms. Works for me!
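      Putting the round-robin walk, the bandwidth accumulator and the per-slice resend delay together, one sender update might look roughly like the following sketch (my own code against the ChunkSender structure above, not the article’s implementation; SendSlicePacket, the overhead estimate and the exact numbers are assumptions, and a real implementation would keep the byte budget in the sender rather than a static):

      void UpdateChunkSender( ChunkSender & sender, double time, double dt )
      {
          if ( !sender.sending )
              return;

          const double SendRateBytesPerSecond = 32000.0;   // ~256 kbit/sec
          const double MinimumResendDelay = 0.1;           // 100ms per slice
          const int EstimatedPacketOverhead = 64;          // rough guess: protocol + UDP/IP headers

          static double byteBudget = 0.0;                  // sketch only: store this in the sender
          byteBudget += SendRateBytesPerSecond * dt;

          for ( int i = 0; i < sender.numSlices; ++i )
          {
              const int sliceId = ( sender.currentSliceId + i ) % sender.numSlices;

              if ( sender.acked[sliceId] )
                  continue;                                            // already got through

              if ( time - sender.timeLastSent[sliceId] < MinimumResendDelay )
                  continue;                                            // sent too recently

              const int sliceBytes = ( sliceId == sender.numSlices - 1 )
                  ? sender.chunkSize - ( sender.numSlices - 1 ) * SliceSize
                  : SliceSize;

              if ( byteBudget < sliceBytes + EstimatedPacketOverhead )
              {
                  sender.currentSliceId = sliceId;                     // resume here next update
                  return;
              }

              byteBudget -= sliceBytes + EstimatedPacketOverhead;
              sender.timeLastSent[sliceId] = time;

              SendSlicePacket( sender, sliceId, sliceBytes );          // assumed helper that builds a SlicePacket
          }
      }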


       

      Kicking it up a notch

      Do the math and you’ll notice that it still takes a long time for a 256k chunk to get across:

      • 1mbps = 2 seconds
      • 512kbps = 4 seconds
      • 256kbps = 8 seconds :(

      Which kinda sucks. The whole point here is quickly and reliably. Emphasis on quickly. Wouldn’t it be nice to be able to get the chunk across faster? The typical use case of the chunk system supports this. For example, a large block of data sent down to the client immediately on connect or a block of data that has to get through before the client exits a load screen and starts to play. You want this to be over as quickly as possible and in both cases the user really doesn’t have anything better to do with their bandwidth, so why not use as much of it as possible?


      One thing I’ve tried in the past with excellent results is an initial burst. Assuming your chunk size isn’t so large, and your chunk sends are infrequent, I can see no reason why you can’t just fire across the entire chunk, all slices of it, in separate packets in one glorious burst of bandwidth, wait 100ms, and then resume the regular bandwidth limited slice sending strategy.
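      A sketch of what that burst could look like when a chunk send starts (again my own illustration, not the article’s code; SendSlicePacket is the same assumed helper as above):

      // Fire every slice once in a single burst, then let the regular
      // bandwidth-limited update loop take over. Because each slice records
      // its send time, the ~100ms minimum resend delay naturally provides
      // the pause before any slice is resent.
      void StartChunkSendWithBurst( ChunkSender & sender, double time )
      {
          for ( int sliceId = 0; sliceId < sender.numSlices; ++sliceId )
          {
              const int sliceBytes = ( sliceId == sender.numSlices - 1 )
                  ? sender.chunkSize - ( sender.numSlices - 1 ) * SliceSize
                  : SliceSize;

              SendSlicePacket( sender, sliceId, sliceBytes );
              sender.timeLastSent[sliceId] = time;
          }
      }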

      Why does this work? In the case where the user has a good internet connection (some multiple of 10mbps or greater…), the slices get through very quickly indeed. In the situation where the connection is not so great, the burst gets buffered up and most slices will be delivered as quickly as possible, limited only by the amount of bandwidth available. After this point, switching to the regular strategy at a lower rate picks up any slices that didn’t get through the first time.

      This seems a bit risky so let me explain. In the case where the user can’t quite support this bandwidth what you’re relying on here is that routers on the Internet strongly prefer to buffer packets rather than discard them at almost any cost. It’s a TCP thing. Normally, I hate this because it induces latency in packet delivery and messes up your game packets which you want delivered as quickly as possible, but in this case it’s good behavior because the player really has nothing else to do but wait for your chunk to get through.

      Just don’t go too overboard with the spam or the congestion will persist after your chunk send completes and it will affect your game for the first few seconds. Also, make sure you increase the size of your OS socket buffers on both ends so they are larger than your maximum chunk size (I recommend at least double), otherwise you’ll be dropping slice packets before they even hit the wire.
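      On POSIX platforms the socket buffer sizes can be raised with setsockopt; here is a minimal sketch (the exact sizes and error handling are up to you, and note that Windows expects a char* option value instead):

      #include <sys/socket.h>

      bool IncreaseSocketBuffers( int socketFd )
      {
          // At least double the maximum chunk size (256k), on both ends.
          const int bufferSize = 2 * MaxChunkSize;

          if ( setsockopt( socketFd, SOL_SOCKET, SO_SNDBUF, &bufferSize, sizeof( bufferSize ) ) != 0 )
              return false;

          if ( setsockopt( socketFd, SOL_SOCKET, SO_RCVBUF, &bufferSize, sizeof( bufferSize ) ) != 0 )
              return false;

          return true;
      }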

      Finally, I want to be a responsible network citizen here, so although I recommend sending all slices once in an initial burst, it’s important to mention that I think this is only appropriate, and even then only borderline appropriate, for small chunks in the few-hundred-kilobyte range in 2016, and only when your game isn’t sending anything else that is time-critical.

      Please don’t use this burst strategy if your chunk is really large, eg: megabytes of data, because that’s way too big to be relying on the kindness of strangers, AKA. the buffers in the routers between you and your packet’s destination. For this it’s necessary to implement something much smarter. Something adaptive that tries to send data as quickly as it can, but backs off when it detects too much latency and/or packet loss as a result of flooding the connection. Such a system is outside of the scope of this article.


      Receiver Implementation

      Now that we have the sender all sorted out let’s move on to the receiver.

      As mentioned previously, unlike the packet fragmentation and reassembly system from the previous article, the chunk system only ever has one chunk in flight.

      This makes the receiver side of the chunk system much simpler:


      class ChunkReceiver
      {
          bool receiving;
          bool readyToRead;
          uint16_t chunkId;
          int chunkSize;
          int numSlices;
          int numReceivedSlices;
          bool received[MaxSlicesPerChunk];
          uint8_t chunkData[MaxChunkSize];
      };

      We have a state whether we are currently ‘receiving’ a chunk over the network, plus a ‘readyToRead’ state which indicates that a chunk has received all slices and is ready to be popped off by the user. This is effectively a minimal receive queue of length 1. If you don’t like this, of course you are free to add a queue.

      In this data structure we also keep track of chunk size (although it is not known with complete accuracy until the last slice arrives), num slices and num received slices, as well as a received flag per-slice. This per-slice received flag lets us discard packets containing slices we have already received, and count the number of slices received so far (since we may receive the slice multiple times, we only increase this count the first time we receive a particular slice). It’s also used when generating ack packets. The chunk receive is completed from the receiver’s point of view when numReceivedSlices == numSlices.
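      As a small sketch of that last point (my own code, assuming access to the ChunkReceiver fields and the AckPacket type shown earlier), generating an ack packet from the receiver state is just a copy of the received flags:

      // Every ack packet carries the complete set of received flags, so a
      // lost ack packet never desyncs the sender and receiver.
      bool GenerateAckPacket( const ChunkReceiver & receiver, AckPacket & ack )
      {
          if ( receiver.numSlices == 0 )
              return false;                      // no chunk seen yet, nothing to ack

          ack.chunkId = receiver.chunkId;
          ack.numSlices = receiver.numSlices;
          for ( int i = 0; i < receiver.numSlices; ++i )
              ack.acked[i] = receiver.received[i];
          return true;
      }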


      So what does it look like end-to-end receiving a chunk?


      First, the receiver is set up to start at chunk 0. When a slice packet comes in over the network matching chunk id 0, ‘receiving’ flips from false to true, the data for that first slice is inserted into ‘chunkData’ at the correct position, numSlices is set to the value in that packet, numReceivedSlices is incremented from 0 -> 1, and the received flag in the array entry corresponding to that slice is set to true.

      As the remaining slice packets for the chunk come in, each one is checked against the current chunk id and numSlices of the chunk being received, and ignored if it doesn’t match. Packets are also ignored if they contain a slice that has already been received. Otherwise, the slice data is copied into the correct place in the chunkData array, numReceivedSlices is incremented and the received flag for that slice is set to true.

      This process continues until all slices of the chunk are received, at which point the receiver sets receiving to ‘false’ and ‘readyToRead’ to true. While ‘readyToRead’ is true, incoming slice packets are discarded. At this point, the chunk is typically read right away, on the same frame: the caller checks ‘do I have a chunk to read?’ and processes the chunk data. All chunk receive data is then cleared back to defaults, except the chunk id which is incremented from 0 -> 1, and we are ready to receive the next chunk.
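      Condensing that walkthrough into code, a receiver-side slice handler might look roughly like this (my own sketch against the ChunkReceiver and SlicePacket structures above; it assumes field access and <cstring> for memcpy/memset, and is not the article’s source):

      void ProcessSlicePacket( ChunkReceiver & receiver, const SlicePacket & packet )
      {
          if ( receiver.readyToRead )
              return;                                        // previous chunk not consumed yet

          if ( packet.chunkId != receiver.chunkId )
              return;                                        // stale or future chunk

          if ( !receiver.receiving )
          {
              // First slice of this chunk: latch the slice count and start receiving.
              receiver.receiving = true;
              receiver.numSlices = packet.numSlices;
              receiver.numReceivedSlices = 0;
              memset( receiver.received, 0, sizeof( receiver.received ) );
          }
          else if ( packet.numSlices != receiver.numSlices )
          {
              return;                                        // inconsistent with the chunk in flight
          }

          if ( receiver.received[packet.sliceId] )
              return;                                        // duplicate slice

          memcpy( receiver.chunkData + packet.sliceId * SliceSize, packet.data, packet.sliceBytes );
          receiver.received[packet.sliceId] = true;
          receiver.numReceivedSlices++;

          if ( packet.sliceId == packet.numSlices - 1 )
              receiver.chunkSize = ( packet.numSlices - 1 ) * SliceSize + packet.sliceBytes;

          if ( receiver.numReceivedSlices == receiver.numSlices )
          {
              receiver.receiving = false;
              receiver.readyToRead = true;                   // caller pops the chunk, then chunkId is incremented
          }
      }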


      Conclusion

      The chunk system is simple in concept, but the implementation is certainly not. I encourage you to take a close look at the source code for this article for further details.


       
