Study Notes on “An ns-3 Implementation of a Bursty Traffic Framework for Virtual Reality Sources”
ABSTRACT
Next-generation wireless communication technologies will allow users to obtain unprecedented performance, paving the way to new and immersive applications. A prominent application requiring high data rates and low communication delay is Virtual Reality (VR), whose presence will grow ever stronger in the years to come. To the best of our knowledge, we propose the first traffic model for VR applications based on traffic traces acquired from a commercial VR streaming software, allowing the community to further study and improve the technology to manage this type of traffic. This work implements ns-3 applications able to generate and process large bursts of packets, enabling the analysis of APP-level end-to-end metrics, and makes the source code, as well as the acquired VR traffic traces, publicly available and open source.
INTRODUCTION
The growing demand for high-performance telecommunication networks is driving both industry and academia to push the boundaries of the achievable performance.
The International Telecommunication Union (ITU) proposes requirements for International Mobile Telecommunication-2020 (IMT-2020) Enhanced Mobile BroadBand (eMBB) such as peak Downlink (DL) data rate of 20 Gbps with 4 ms user plane latency [8], for example by exploiting the large bandwidth available in the Millimeter Wave (mmW) spectrum. Similarly, Wireless Local Area Networks (WLANs) are also harvesting the potential of the mmW band with a family of standards known as Wireless Gigabit (WiGig), including IEEE 802.11ad and 802.11ay. While the former, first standardized in 2012 [7] and later revised in 2016 [6], is able to reach bit rates up to 8 Gbps, the latter is close to being officially standardized [14] and promises bit rates up to 100 Gbps.
These specifications for wireless systems enable a new generation of demanding applications such as high-definition wireless monitor, eXtended Reality (XR) headsets and other high-end wearables, data center inter-rack connectivity, wireless backhauling, and office docking, among others [15].
In particular, XR, an umbrella term including technologies such as Virtual Reality (VR) and Augmented Reality (AR), has been targeted as a key application with growing interest in the consumer market [5]. Compact and portable devices with limited battery and computing power should be enabled to wirelessly support this type of demanding application, to provide a fully immersive and realistic user experience.
To reach the limits of human vision, monitors with a resolution of 5073×5707 per eye and a 120 FPS refresh rate will be needed [5]. These specifications suggest that rendering might be offloaded to a separate server, as head-mounted displays should be light, silent, and comfortable enough to be worn for long periods of time. This poses a significant strain on the wireless connection, requiring ∼167 Gbps for an uncompressed video stream. Clearly, real-time 360° video compression techniques can largely reduce these throughput requirements, down to the order of 100–1000 Mbps, at the cost of some processing delay.
While throughput requirements are already very demanding, low latency is the key to the success or failure of XR applications. In fact, many studies showed that users tend to experience what is called motion or cyber sickness when their actions do not correspond to rapid reactions in the virtual world, causing disorientation and dizziness [3–5, 9, 11]. Motion-to-photon latency is thus required to be at most 20 ms, translating into a network latency for video frames of 5–9 ms [5, 15].
In its simplest and most ideal form, raw XR traffic with a fixed frame rate 𝐹 could be modeled by periodic traffic, with period 1/𝐹 and constant frame size 𝑆 proportional to the display resolution. Real traffic, though, is first of all compressed with one of the many existing video codecs, and then properly optimized for real-time low-latency streaming, resulting in encoded video frames of variable size. Furthermore, the complexity of a given scene can also affect the time required to render it as well as the obtainable compression factor. Together, these factors make both the video frame size and the period random, possibly correlated both in time and in the frame-size/period domains.
Given the interest of both industry and consumers in XR applications and the peculiarity of the generated traffic, networks and networking protocols might be optimized to better support this type of traffic, ensuring its strict Quality of Service (QoS) requirements.
To the best of our knowledge, no prior work on XR traffic modeling exists. Cloud gaming [1] was identified as a closely related problem, where a remote server renders and streams a video to a client with limited computational resources, which only feeds basic information to the rendering servers, such as keys pressed and mouse movements. The main difference with the problem under analysis is given by the more restrictive QoS constraints of XR applications, mainly due to the limits imposed by motion sickness. Furthermore, in cloud gaming, client and server are often in different WLANs, making it harder to obtain reliable measurements of packet generation times. In fact, due to the specific constraints and requirements of XR applications, we expect the rendering server to be in a local network rather than being remotely accessed via the Internet.
Most works in the literature focus on network performance and limitations of cloud gaming [16], and we could find only two main contributions addressing traffic analysis and modeling. The authors of [2] provide a simple traffic analysis for three different games played on OnLive, a cloud gaming application that was shut down in 2015. The analysis focuses on packet-level statistics, such as packet size, inter-packet time, and bit rate. They measured the performance of the streaming service under speed-limited networks, showing an evident frame rate variability. In [12], the authors tried to model the traffic generated by two games, also played on the OnLive platform. In particular, they recognized that video frames were split into multiple fragments, and re-aggregated them before studying their statistics. A number of DL and Uplink (UL) data flows were recognized, and characterized in terms of application packet data unit size and packet inter-arrival time. Unfortunately, correlation among successive video frames was not modeled and the analysis referred to a single game played with an average data rate of about 5 Mbps.
The novelty of this paper can be summarized as follows:
(1) To the best of our knowledge, this is the first generative model for APP-level XR traffic based on over 90 minutes of acquired and processed VR traffic, with adaptable data rate and frame rate;
(2) we provide a flexible Network Simulator 3 (ns-3) module for simulating applications with bursty behavior able to characterize both fragment-level and burst-level performance;
(3) we provide an implementation of the proposed XR traffic model, as well as a trace-based model, on the bursty application framework;
(4) as a side contribution, several acquired VR traffic traces are made available, allowing researchers to (i) use real VR traffic in their simulations and (ii) further analyze and improve VR traffic models.
In the remainder of this work, we will describe the traffic acquisition and analysis in Section 2, propose a flexible XR traffic model in Section 3, discuss the ns-3 implementation of the bursty application framework in Section 4, validate the model and show a possible use case in Section 5, and finally draw the conclusions of this work and propose future directions in Section 6.
VR TRAFFIC: ACQUISITION AND ANALYSIS
The traffic model that will be described in Section 3 is based on a set of acquisitions of VR traffic. In the remainder of this section, the acquisition setup and the statistical analysis of the acquired VR traffic traces will be presented.
Acquisition Setup
The setup comprises a desktop PC (equipped with an i7 processor, 32 GB of RAM, and an nVidia GTX 2080Ti graphics card) acting as a rendering server and transmitting the information to a smartphone acting as a passive VR headset. The two nodes are connected via USB tethering to avoid random interference from other surrounding devices and the less stable wireless channel.
To stream the VR traffic, the rendering server runs the application RiftCat 2.0, connected to the VRidge app running on the smartphone.1 This setup allows the user to play VR games on the SteamVR platform for up to 10 minutes, enough to obtain multiple traffic traces that can be analyzed later. The application allows for a number of settings, most notably (i) the display resolution and scan format (kept fixed at the smartphone’s native resolution, i.e., 1920×1080p), (ii) the frame rate, allowing the user to choose between 30 FPS and 60 FPS, and (iii) the target data rate (i.e., the data rate the application will try to consistently stream to the client) which can be set from 1 to 50 Mbps.
To simplify the analysis of the traffic stream in this first work, no VR games were played, acquiring only traces from the SteamVR waiting room with no audio. The waiting room is still rendered and streamed in real time, thus allowing the capture of an actual VR stream. Furthermore, in order for SteamVR to fully start and load the waiting room, traces were trimmed down to approximately 550 s each.
We noticed that the encoder used to stream the video from the rendering server to the phone was able to reduce the stream size when the scene was fairly static, thus preventing us from reaching data rates higher than 20 Mbps with the phone in a fixed position. Small movements from the phone were sufficient to obtain a data rate close to the target one.
Traffic traces were obtained using Wireshark, a popular open-source packet analyzer, running on the rendering server and sniffing the tethered USB connection. The traffic analysis was performed at 30 and 60 FPS for target data rates of {10, 20, 30, 40, 50} Mbps, for a total of over 90 minutes of analyzed VR traffic.
Traffic Analysis
By analyzing the sniffed packet traces, we discovered that VRidge uses UDP sockets over IPv4 and that the UL stream contains several types of packets, such as synchronization, video frame reception information, and frequent small head-tracking information packets. In DL, instead, we found synchronization, acknowledgment, and video frame packet bursts. We also found out that the application stream is based on ENet, a simple and robust network communication layer on top of UDP.
Video traffic is, as expected, the main source of data transmission (Figure 1a). Video frames are easily categorized by their transmission pattern: a single frame is fragmented into multiple smaller 1278 B packets sent back-to-back. By reverse-engineering the bits composing the UDP payload, it was possible to identify 5 ranges of information in what appears to be a 31 B APP-layer header, specifically (i) the frame sequence number, (ii) the number of fragments composing the frame, (iii) the fragment sequence number, (iv) the total frame size, and (v) a checksum. Thanks to this, we were able to reliably gather information on video frames, allowing for robust data processing.
Figure 1b shows that UL traffic only accounts for 110 kbps for 30 FPS acquisitions and 120 kbps for 60 FPS acquisitions, while non-video DL traffic is only about 2.5 kbps for 30 FPS acquisition and 5 kbps for 60 FPS acquisitions, regardless of the target data rate, and for this reason, they were ignored in the proposed analysis.
It follows that, denoting by R the target data rate and F the application frame rate, the average video frame size is expected to be close to the ideal S = R/F, as shown in Figure 1c. Note that the x-axis reports the empirical data rate rather than the target data rate, i.e., the average data rate estimated from the acquired traces, which differs slightly from the target as shown in Figure 1a.
Furthermore, Figure 1d shows that the average Inter-Frame Interarrival (IFI) perfectly matches the expected 1/𝐹 .
Usually, when compressing a video, both intra-frame and inter-frame compression techniques are exploited. Specifically, Intra-coded frames (I-frames) use compression techniques similar to those of a simple static picture. Instead, Predictive-coded frames (P-frames) exploit the temporal correlation of successive frames to greatly reduce the compressed frame size. Similarly, Bipredictive-coded frames (B-frames) also exploit the knowledge from subsequent frames, other than previous ones, to further improve the compression efficiency at the cost of non-real-time transmission. Video compression standards such as H.264 [13] define the pattern of compressed frame types between two consecutive I-frames, which is commonly referred to as Group of Pictures (GoP).
The traffic traces show that GoPs are not deterministic, and tend to be larger at lower target data rates, likely to take advantage of the higher compression generally provided by P-frames. Furthermore, lower target data rates may introduce a small delay to also encode the more efficient B-frames, thus improving the visual quality while decreasing the overall responsiveness. However, the strategy used by the application to map the specified target rate into a certain GoP format is proprietary and undisclosed, so that we could only observe some general trends.
TRAFFIC MODEL
In this section, we will describe how we model frame sizes and inter-frame periodicity, leaving the model validation to Section 5.
Modeling Frame Periodicity
To fully characterize and thus generate a realistic frame period, more information is needed such as (i) the distribution of the frame period, (ii) the parameters of this distribution, (iii) the correlation between successive frame periods, and (iv) the correlation between the current frame size and the frame period.
To simplify the model, in this first analysis we assume the current frame size and the frame period to be independent, and we consider the stochastic process representing the frame period to be uncorrelated. We thus focus only on the distribution of the frame period.
Among all measurements taken at both 30 and 60 FPS, the logistic distribution was often the best-fitting one among all of the tested distributions. For this reason, we choose to model the frame periods as independent and identically Logistic-distributed random variables X ∼ Logistic(μ, s) with Probability Density Function (PDF)

f(x; μ, s) = e^{−(x−μ)/s} / ( s (1 + e^{−(x−μ)/s})² ),

where μ is the location parameter, s > 0 is the scale parameter, and E[X] = μ, std(X) = sπ/√3.
Given the great accordance between the expected frame period and the empirical one (shown in Figure 1d), we can easily set E[𝑋] = 𝜇 = 1/𝐹. Instead, to model the standard deviation of the proposed Logistic random variables, we need to further process the acquired data, although for a powerful enough rendering server we expect the standard deviation to also be inversely proportional to the frame rate.
Figure 2a shows the average standard deviation of the acquired traces at both 30 and 60 FPS. We notice that the average standard deviation at 30 FPS is 2.7855 ms, while at 60 FPS it is 1.3646 ms, a ratio of 2.04, almost exactly the expected factor of 2. This suggests that our rendering server is powerful enough to reliably handle streams at both frame rates, since the standard deviation of the IFI nearly doubles as the frame rate halves, as expected.
While the data is not enough for a robust generalization, it does suggest that a higher frame rate F yields a lower IFI standard deviation d, keeping a roughly constant ratio with the average IFI, following an inverse law: d = c/F. Assuming that this inverse law holds, we compute the average c over the sets of acquisitions at 30 and 60 FPS, obtaining c = 0.0827.
Modeling Frame Size
Following the discussion in Section 2.2, we propose to model the frame size distribution of the video frame with a Gaussian Mixture Model (GMM), i.e., V(S) ∼ GMM(μ(S), σ²(S)), with PDF

f_V(v; S) = w_I(S) f_{V_I}(v) + (1 − w_I(S)) f_{V_P}(v),

which corresponds to V(S) = χ_I(S) V_I(S) + (1 − χ_I(S)) V_P(S), where χ_I(S) is the indicator function for I-frames, which takes the value 1 with probability w_I(S) and 0 otherwise, and V_f(S) ∼ N(μ_f(S), σ_f²(S)), f ∈ {I, P}.
Clearly, the fitted normal variable with the lower mean will be associated with P-frames, while the one with the higher mean will be associated with I-frames.
To generalize the model, the parameters of the GMM should be extended to arbitrary target data rates.
Starting from the GMMs of the acquired traffic traces, the mean frame sizes of I- and P-frames are generalized by fitting linear models, while their standard deviations are better fitted by a power law, both as a function of the expected average frame size 𝑆. Since a target data rate approaching zero would require video frames to also approach zero, we force the linear fit to have no intercept.
i.e., μ_I(S) = s_I S, μ_P(S) = s_P S, σ_I(S) = a_I S^{b_I}, and σ_P(S) = a_P S^{b_P}, as depicted with dashed lines in Figures 2b and 2c.
By setting E[V(S)] = w_I(S) μ_I(S) + (1 − w_I(S)) μ_P(S) equal to S, we get w_P(S) = (s_I − 1)/(s_I − s_P), independent of S.
To make the model more robust, we first fit the GMM 50 times with random initial conditions and pick the best-fitting model, and then weigh the linear fit of the parameters proportionally to the goodness of the GMM fit.
To improve the reliability of our model, when fitting the GMM means and standard deviations we use the empirical data rate as the independent variable instead of the target data rate since the two differ slightly as noticed in Section 2.2. In fact, we aim at modeling the general behavior of the application while trying to generate the amount of traffic requested by the user as closely as possible.
Figure 2b shows that the means of the GMMs are fitted well by a linear model in the S domain, which is used to simultaneously process the data of the acquisitions obtained both at 30 and 60 FPS, yielding the fitted parameters s_I = 1.1764 and s_P = 0.9008.
The fitted slopes result in w_P = 0.64, which fits the 30 FPS acquisitions well, with slightly worse performance at 60 FPS.
On the other hand, the GMM standard deviations show a noisier fit (Figure 2c), yielding parameters a_I = 26.2065, b_I = 0.5730, a_P = 9.0399, and b_P = 0.6251.
This suggests an approximate square-root relationship between the average frame size 𝑆 and the standard deviation of the GMMs. The logarithmic plot helps the visualization by transforming a power law into a simple linear relationship.
Frame sizes are independently drawn from the mixture model instead of simulating a GoP, since different GoPs were found for different target data rates, and always with a non-deterministic nature.
NS-3 IMPLEMENTATION
To properly model and test the performance of VR traffic over a simulated network, a flexible application framework has been implemented in ns-3 and made publicly available. The framework is based on the ns-3.33 release and aims at providing a novel additional traffic model, easily customizable by the final user.
The proposed framework allows the user to send packet bursts fragmented into multiple packets by BurstyApplication, later re-aggregated at the receiver, if possible, by BurstSink. Since the generation of packet bursts is crucial to model a wide range of possibilities, a generic BurstGenerator interface has been defined. Users can implement arbitrary generators by extending this interface, and three examples have been provided and will be described in Section 4.2. Finally, each fragment comprises a novel SeqTsSizeFragHeader, which includes information on both the fragment and the current burst, allowing BurstSink to correctly re-aggregate or discard a burst, yielding information on received fragments, received bursts, and failed bursts.
More details on the implementation and the rationale behind these applications will be given in the following sections.
Bursty Application
Inspired by the acquired traffic traces described in Section 2.1, BurstyApplication periodically sends bursts of data divided into multiple smaller fragments of (at most) a given size. Since burst size and period statistics can be quite general, the generation of the burst statistics is delegated to objects extending the BurstGenerator interface, described in Section 4.2. BurstyHelper is also implemented to simplify the generation and installation of BurstyApplications with given BurstGenerators onto network nodes, and examples are provided.
Each fragment carries a SeqTsSizeFragHeader, an extension of SeqTsSizeHeader which adds the information on the fragment sequence number and the total number of fragments composing the burst, on top of the (burst) sequence number and size as well as the transmission time-stamp. After setting a desired FragmentSize in bytes, the application will compute how many fragments will be generated to send the full burst to the target receiver, although the last two fragments may be smaller due to the size of the burst not being a multiple of the fragment size, and the presence of the extra header.
Traces notify the user when fragments and bursts are sent, while also keeping track of the number of bursts, fragments, and bytes sent, making it easier to quickly compute some simple high-level metrics directly from the main script of the simulation.
Burst Generator Interface
A generic bursty application can show extremely different behaviors. For example, an application could send a given amount of data periodically in a deterministic fashion, the burst size or the period could be random with arbitrary statistics, successive bursts could be correlated (e.g., the concept of GoP for video-coding standards such as H.264), and even the burst size and the time before the next burst might be correlated.
To accommodate for the widest range of possibilities, a BurstGenerator interface has been defined. Classes extending this interface must define two pure virtual functions:
(1) HasNextBurst: to ensure that the burst generator is able to generate a new burst size and the time before the next burst (also called next period in the remainder of this paper);
(2) GenerateBurst: yielding the burst size of the current burst as well as the next period, if it exists.
Three classes extending this interface are proposed and briefly discussed in the remainder of this section, allowing users to generate very diverse statistics without the need to implement their own custom generator in most cases.
Simple Burst Generator. Inspired by OnOffApplication, SimpleBurstGenerator defines the current burst size and the next period as generic RandomVariableStreams. Users are thus able to model arbitrary burst size and next period distributions by: using the distributions already implemented in ns-3; implementing more distributions; or simply defining arbitrary Cumulative Distribution Functions (CDFs) for EmpiricalRandomVariables.
The limitation of this generator lies in the correlation of the generated random variables: burst size and next period are independently drawn, as are successive bursts.
VR Burst Generator. VrBurstGenerator is a direct implementation of the model proposed in Section 3, where bursts model video frames.
Similar to the RiftCat software described in Section 2.1, this generator makes it possible to choose a target data rate and a frame rate.
While traces were taken at specific frame rates and target data rates, the proposed model attempts to generalize them, although the quality of the generalization beyond the boundaries imposed by the streaming software is unknown.
To generate the frame size and the next period, LogisticRandomVariable and MixtureRandomVariable have been implemented in ns-3.
A validation of the proposed model based on this burst generator will be discussed in Section 5.
Trace File Burst Generator. Finally, users might want to reproduce in ns-3 a traffic trace obtained by a real application, generated by a separate traffic generator, or even manually written by a user (e.g., for static debugging/testing purposes). For these reasons, TraceFileBurstGenerator was introduced, taking advantage of CsvReader to parse a csv-like file declaring a (burst size, next period) pair for each row. Once traces are imported, the generator will sequentially yield every burst, returning false as output to TraceFileBurstGenerator::HasNextBurst after the last row of the trace file is yielded, thus stopping the BurstyApplication.
A StartTime can be set as an attribute, allowing the user to control which part of the file trace will be used in the simulation. This can be especially useful when the total simulation duration is shorter than the traffic trace, making it possible to decouple users by setting different start times.
StartTime可以设置为属性,允许用户控制在模拟中使用文件跟踪的哪一部分。当总模拟持续时间比流量跟踪时间短时,这一点尤其有用,从而可以通过设置不同的开始时间来解耦用户。
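The decoupling effect of StartTime can be illustrated with a small helper (hypothetical, for illustration only): given the per-burst periods of a trace, it finds the first burst starting at or after a given offset, so each station can be assigned a different offset into the same trace.

```python
def start_index(periods, start_time):
    """Index of the first burst beginning at or after start_time,
    mimicking the effect of the StartTime attribute: different
    stations consume disjoint parts of one shared trace."""
    t = 0.0
    for i, period in enumerate(periods):
        if t >= start_time:
            return i
        t += period
    return len(periods)
```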
Several VR traffic traces using different frame rates and target data rates are available in the described format for a total of over 90 minutes of processed acquisitions, comprising some relevant metadata as part of the commented header. Interested readers can thus simulate real VR video traffic in their ns-3 simulations, or expand the analysis performed in Sections 2.2 and 3.
以所述格式提供了使用不同帧速率和目标数据速率的多个VR流量迹线,总计超过90分钟的处理后采集数据,并在注释标头中包含一些相关元数据。因此,感兴趣的读者可以在他们的ns-3模拟中模拟真实的VR视频流量,或者扩展第2.2节和第3节中进行的分析。
Burst Sink
An adaptation of the existing PacketSink, called BurstSink, is proposed for the developed bursty framework. This new application expects to receive packets from users equipped with BurstyApplications and tries to re-aggregate fragments into bursts.
针对所开发的突发框架,提出了对现有PacketSink的一种改编,称为BurstSink。这个新应用程序预期从配备BurstyApplication的用户那里接收数据包,并尝试将片段重新聚合成突发。
While the current version of PacketSink is able to assemble byte streams with SeqTsSizeHeader, there are two reasons why BurstSink was created, specifically (i) to stress the dependence of this framework on UDP rather than TCP sockets, as the acquisitions suggested, thus expecting individual fragments sent unreliably rather than a reliable byte stream, and (ii) to trace the reception at both the fragment and the burst level.
虽然当前版本的PacketSink能够使用SeqTsSizeHeader组装字节流,但创建BurstSink有两个原因:(i)强调此框架依赖于UDP而非TCP套接字(正如采集结果所表明的那样),因此预期的是不可靠发送的单个片段,而不是可靠的字节流;(ii)在片段和突发两个级别上跟踪接收情况。
The application implements a simple best-effort aggregation algorithm, assuming that (i) the burst transmission duration is much shorter than the next period, and (ii) all fragments are needed to re-aggregate a burst. Specifically, fragments of a given burst are collected, even if unordered, and, if all fragments are received, the burst is successfully received. If, instead, fragments of subsequent bursts are received before all fragments of the previous one, then the previous burst is discarded. Information on the current fragment and burst can easily be recovered from SeqTsSizeFragHeader,
allowing the application to verify whether a burst has been fully received or not. If needed and suggested by real-world applications, future works might also introduce the concept of APP-level Forward Error Correction (FEC).
应用程序实现了一个简单的尽力而为聚合算法,假设(i)突发传输持续时间远短于下一周期,并且(ii)重新聚合一个突发需要其所有片段。具体地说,给定突发的片段即使乱序也会被收集,如果接收到所有片段,则该突发被成功接收。相反,如果在前一个突发的所有片段到齐之前就收到了后续突发的片段,则丢弃前一个突发。当前片段和突发的信息可以很容易地从SeqTsSizeFragHeader中恢复,从而允许应用程序验证一个突发是否已被完整接收。如果现实应用有此需求,未来的工作还可能引入应用级前向纠错(FEC)的概念。
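The best-effort aggregation algorithm can be sketched as below. This is a simplified Python model, assuming (as suggested by the text) that each fragment header carries a burst sequence number, a fragment index, and a fragment count; the exact field names of SeqTsSizeFragHeader may differ, and duplicate fragments are not handled.

```python
class BurstSink:
    """Best-effort reassembly: fragments of the newest burst are collected
    even if unordered; a burst is delivered once all its fragments arrive,
    and an incomplete burst is discarded as soon as a fragment of a newer
    burst shows up."""

    def __init__(self):
        self._burst_seq = None   # burst currently being reassembled
        self._received = 0       # fragments received for that burst
        self._expected = 0       # fragments it should contain
        self.delivered = []      # successfully re-aggregated bursts
        self.discarded = []      # bursts dropped as incomplete

    def on_fragment(self, burst_seq, frag_index, frag_count):
        if burst_seq != self._burst_seq:
            # A fragment of a newer burst arrived: if the previous burst
            # was still incomplete, it is discarded.
            if self._burst_seq is not None and self._received < self._expected:
                self.discarded.append(self._burst_seq)
            self._burst_seq = burst_seq
            self._received = 0
            self._expected = frag_count
        self._received += 1
        if self._received == self._expected:
            self.delivered.append(burst_seq)
```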
Traces notify the user when fragments are received and when bursts are successfully received or discarded, together with all the related relevant information. Furthermore, similarly to the BurstyApplication, also the BurstSink application keeps track of the number of bursts, fragments, and bytes received.
跟踪会在接收到片段以及成功接收或丢弃突发时通知用户,并附带所有相关信息。此外,与BurstyApplication类似,BurstSink应用程序也会记录接收到的突发、片段和字节数。
MODEL VALIDATION AND POSSIBLE USE CASES
This section will present a comparison between the acquired VR traces and the proposed model, as well as an example use case.
本节将介绍获取的虚拟现实轨迹与拟议模型之间的比较,以及一个示例用例。
For both the comparison and the example, we show the results of full-stack simulations highlighting the importance of accurately modeling a traffic source by using (i) the proposed model and (ii) the acquired traffic traces. For full-stack simulations we consider a simple Wi-Fi network based on IEEE 802.11ac, sending data over a
single stream and using MCS 9 over a 160 MHz channel.
对于比较和示例,我们展示了全堆栈模拟的结果,强调了通过使用(i)提出的模型和(ii)获取的流量跟踪准确建模流量源的重要性。对于全栈模拟,我们考虑基于IEEE 802.11ac的简单Wi-Fi网络,在单个流上发送数据,并在160 MHz信道上使用MCS 9。
Model Validation
A comparison between the modeled distributions and the acquired traffic traces is shown in Figures 3 and 4.
图3和图4显示了模拟分布和获取的交通轨迹之间的比较。
In particular, as expected from Figure 2a, the IFI standard deviation is a loose fit, and thus the CDFs shown in Figure 3 exhibit a fairly large deviation between the model and two examples of acquired traces, namely the highest and lowest target data rates acquired for both available frame rates. This is to be expected given the large variance of the acquired data; moreover, the objective of our model is to generalize the behavior of a real application across both data rates and frame rates, making it extremely hard to represent the acquired data well.
特别是,正如从图2a中所预期的,IFI标准差的拟合较为宽松,因此图3所示的CDF显示,模型与两个采集迹线示例(即在两种可用帧速率下采集的最高和最低目标数据速率)之间存在相当大的偏差。考虑到采集数据的方差很大,这是意料之中的;而且我们模型的目标是对真实应用在不同数据速率和帧速率下的行为进行泛化,这使得很难很好地表示采集数据。
Figure 4 shows the CDFs for frame sizes at different frame rates and target data rates. The 30 FPS data is overall fitted well by the proposed model, and two examples at different target data rates are shown. On the other hand, the 60 FPS data shows a different behavior with respect to the model. A first explanation is that the 60 FPS traces always have a higher empirical data rate than the target one, so the red dashed lines are generally to the right of the blue lines. Furthermore, doubling the frame rate at a constant data rate halves the average video frame size, making it necessary to use more sophisticated encoding techniques to still obtain a good enough video quality by biasing the relative size and frequency of I- and P-frames. Still, our model is able to capture the overall distributions, making it possible to generate novel traffic data with parameters that were never acquired.
图4显示了不同帧速率和目标数据速率下帧大小的CDF。所提出的模型对30 FPS数据的整体拟合较好,图中给出了两个不同目标数据速率下的示例。另一方面,60 FPS数据表现出与模型不同的行为。第一种解释是,60 FPS迹线的经验数据速率始终高于目标数据速率,因此红色虚线通常位于蓝色线的右侧。此外,在恒定数据速率下将帧速率加倍会使平均视频帧大小减半,这就需要使用更复杂的编码技术,通过调整I帧和P帧的相对大小和频率,仍能获得足够好的视频质量。尽管如此,我们的模型仍能捕获总体分布,从而有可能用从未采集过的参数生成新的流量数据。
End-to-end results in Figure 5 show good agreement between models and empirical data, except for the 95th percentile of the delay. Indeed, the proposed model, shown in blue, matches the traffic traces well, suggesting that it is able to emulate the traffic statistics sufficiently well to obtain similar full-stack results.
图5中的端到端结果显示,除延迟的第95百分位外,模型与经验数据之间具有良好的一致性。事实上,以蓝色显示的所提模型与流量迹线表现出良好的一致性,表明它能够足够好地模拟流量统计特性,从而获得相似的全栈结果。
It is also important to notice the difference between fragment-wise and burst-wise statistics. For applications such as VR, where the whole burst needs to be received to acquire the desired information, it is crucial to measure burst-level performance to get a deeper understanding of the system performance. In general, fragment-level metrics will be more optimistic than those at the burst level, which may lead to incorrect conclusions.
还需要注意片段级统计和突发级统计之间的差异。对于VR这类必须接收整个突发才能获取所需信息的应用,测量突发级性能对于深入理解系统性能至关重要。一般来说,片段级指标会比突发级指标更乐观,这可能导致错误的结论。
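The gap between the two metrics is easy to see numerically. A small sketch (illustrative function names): if every burst of 10 fragments loses exactly one fragment, the fragment-level success rate still looks excellent while the burst-level one collapses to zero.

```python
def fragment_success_rate(bursts):
    # bursts: list of (fragments_received, fragments_sent) per burst.
    recv = sum(r for r, s in bursts)
    sent = sum(s for r, s in bursts)
    return recv / sent

def burst_success_rate(bursts):
    # A burst only counts if every one of its fragments arrived.
    ok = sum(1 for r, s in bursts if r == s)
    return ok / len(bursts)
```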
Examples of Use Cases
To exemplify the uses of the proposed model, we discuss a possible scenario of interest where we test how well an IEEE 802.11ac network can support intense VR traffic, for example in a VR arena.
为了举例说明所提模型的用途,我们讨论了一个可能感兴趣的场景,测试IEEE 802.11ac网络能在多大程度上支持高强度的VR流量,例如在一个VR竞技场中。
In Figure 6, we show the simulation results for a scenario with multiple users running VR applications with a target rate of 50 Mbps in a Wi-Fi network. We compare the acquired trace file, where Stations (STAs) import and generate disjoint parts of the trace file, with the proposed model, both at 30 and 60 FPS.
在图6中,我们展示了多个用户在Wi-Fi网络中以50 Mbps的目标速率运行VR应用程序的场景的模拟结果。我们将采集到的跟踪文件(其中站点(STA)导入并生成跟踪文件的不相交部分)与提出的模型进行比较,速度分别为30和60 FPS。
Notice that, for the fixed target data rate 𝑅 = 50 Mbps, the average video frame size 𝑆 and the frame rate 𝐹 are tied by 𝑅 = 𝑆 · 𝐹, resulting in double the frame size for 30 FPS streams with respect to 60 FPS. For a network with fixed channel capacity, this translates to a delay which will also double, assuming that processing, queuing, and other delays independent of the burst size are negligible, hence explaining the different delay performance of the two frame rates in Figure 6.
请注意,对于固定的目标数据速率𝑅 = 50 Mbps,平均视频帧大小𝑆与帧速率𝐹满足𝑅 = 𝑆 · 𝐹,因此30 FPS流的帧大小是60 FPS流的两倍。对于信道容量固定的网络,在处理、排队及其他与突发大小无关的延迟可忽略的假设下,延迟也将相应加倍,这解释了图6中两种帧速率不同的延迟性能。
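The reasoning above amounts to a one-line calculation. A sketch, where the channel capacity value is purely illustrative (not taken from the paper):

```python
def avg_frame_size(rate_bps, fps):
    # S = R / F: average video frame size in bits,
    # for target data rate R and frame rate F.
    return rate_bps / fps

def tx_delay(frame_bits, capacity_bps):
    # First-order transmission delay of a burst over a link of fixed
    # capacity, neglecting processing and queuing delays.
    return frame_bits / capacity_bps

R = 50e6    # 50 Mbps target data rate (from the scenario)
C = 400e6   # assumed channel capacity, illustrative only
d30 = tx_delay(avg_frame_size(R, 30), C)
d60 = tx_delay(avg_frame_size(R, 60), C)
# Halving the frame rate doubles the per-burst delay: d30 == 2 * d60
```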
From Figure 6a, it is possible to see that the average burst delay remains below the maximum tolerable delay of 5-9 ms for up to 8 users at both 30 and 60 FPS, although, as expected, the delays of the 30 FPS streams are higher than those of the 60 FPS streams.
从图6a可以看出,在30和60 FPS下,对于最多8个用户,平均突发延迟均低于5-9 ms的最大可容忍延迟,不过正如预期的那样,30 FPS流的延迟高于60 FPS流。
Instead, if a good overall quality of experience is to be guaranteed, applying the same bound to the 95th percentile of the delay would only allow up to 5 users in the system for 30 FPS streams, or 7 users for 60 FPS streams, as shown in Figure 6b, thus trading arena capacity for increased reliability and overall user experience.
相反,若要保证良好的整体体验质量,将相同的界限应用于延迟的第95百分位,则系统中30 FPS流最多只能容纳5个用户,60 FPS流最多7个用户,如图6b所示,从而以大幅降低竞技场容量为代价换取更高的可靠性和整体用户体验。
CONCLUSIONS
In this paper, we presented a simple VR traffic model based on over 90 minutes of acquired traffic traces. While being simple, ignoring second-order statistics, and being based on an ideal setting, this model marks a starting point for network analysis and optimization tailored for this novel and peculiar type of traffic, introducing a more realistic traffic model into ns-3.
在本文中,我们基于超过90分钟采集的流量迹线,提出了一个简单的VR流量模型。该模型虽然简单、忽略了二阶统计特性,并且基于理想设置,但它标志着针对这种新颖而独特的流量类型进行网络分析和优化的起点,并为ns-3引入了更真实的流量模型。
The proposed ns-3 framework for bursty applications is publicly available and open source [10], together with the implementation of the proposed traffic model and the actual traffic traces experimentally obtained. We also attempted to generalize the model to arbitrary target data rates and frame rates, allowing users to experiment with arbitrary application-level settings that suit their specific research.
所提出的面向突发应用的ns-3框架是公开且开源的[10],同时发布的还有所提流量模型的实现以及实验获得的真实流量迹线。我们还尝试将该模型泛化到任意目标数据速率和帧速率,使用户能够试验适合其特定研究的任意应用级设置。
The model has been built upon a framework to simulate bursty applications in ns-3, where burst size and period can be customized with little additional code, and traces for burst-level metrics collections allow the user to better analyze a complex application QoS.
该模型建立在一个在ns-3中模拟突发应用的框架之上,其中突发大小和周期只需少量额外代码即可定制,而针对突发级指标收集的跟踪使用户能够更好地分析复杂应用的QoS。
Future works will focus on improving the quality and generality of this approach. For example, second-order statistics will be taken into account, trying to better characterize the statistics of GoPs. More acquisitions will be taken, possibly longer, with different streaming and video encoding settings, on several VR applications. Finally, more complex settings will be considered, e.g., adding head movements, in order to analyze possible correlations between them and the generated traffic.
未来的工作将侧重于提高该方法的质量和通用性。例如,将考虑二阶统计特性,尝试更好地刻画GoP的统计特征。还将在多个VR应用上、以不同的流媒体和视频编码设置进行更多(可能更长)的采集。最后,将考虑更复杂的设置,例如加入头部运动,以分析它们与所生成流量之间可能的相关性。