<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.35 (Ruby 3.4.9) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-calabria-bmwg-ai-fabric-terminology-01" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.33.0 -->
  <front>
    <title abbrev="AI Fabric Benchmarking Terminology">Benchmarking Terminology for AI Network Fabrics</title>
    <seriesInfo name="Internet-Draft" value="draft-calabria-bmwg-ai-fabric-terminology-01"/>
    <author initials="F." surname="Calabria" fullname="Fernando Calabria">
      <organization>Cisco</organization>
      <address>
        <email>fcalabri@cisco.com</email>
      </address>
    </author>
    <author initials="C." surname="Pignataro" fullname="Carlos Pignataro">
      <organization>Blue Fern Consulting</organization>
      <address>
        <email>carlos@bluefern.consulting</email>
      </address>
    </author>
    <author initials="Q." surname="Wu" fullname="Qin Wu">
      <organization>Huawei</organization>
      <address>
        <email>bill.wu@huawei.com</email>
      </address>
    </author>
    <author initials="G." surname="Fioccola" fullname="Giuseppe Fioccola">
      <organization>Huawei</organization>
      <address>
        <email>giuseppe.fioccola@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="April" day="21"/>
    <area>Operations and Management</area>
    <workgroup>Benchmarking Methodology Working Group</workgroup>
    <keyword>benchmarking</keyword>
    <keyword>terminology</keyword>
    <keyword>AI training</keyword>
    <keyword>AI inference</keyword>
    <keyword>network fabric</keyword>
    <keyword>RDMA</keyword>
    <keyword>RoCEv2</keyword>
    <keyword>UET</keyword>
    <keyword>collective communication</keyword>
    <keyword>AllReduce</keyword>
    <keyword>JCT</keyword>
    <keyword>TTFT</keyword>
    <keyword>KV cache</keyword>
    <abstract>
      <?line 117?>

<t>This document defines benchmarking terminology for evaluating
Ethernet-based network fabrics used in distributed Artificial
Intelligence (AI) training and inference workloads. It provides a
unified vocabulary consolidating and extending terms from RFC 1242,
RFC 8238, and the companion AI fabric methodology documents,
establishing precise, vendor-neutral definitions for collective
communication primitives, RDMA transport mechanisms (RoCEv2 and Ultra
Ethernet Transport), congestion control behaviors, AI-specific Key
Performance Indicators (KPIs), and fabric topology concepts.</t>
      <t>This document is a companion to draft-calabria-bmwg-ai-fabric-training-bench
and draft-calabria-bmwg-ai-fabric-inference-bench. Those methodology
documents should not be applied without first consulting the terminology
defined herein. Where definitions herein overlap with RFC 1242 or RFC
8238, the AI fabric context definition in this document takes
precedence.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://fcalabri.github.io/bmwg-ai-fabric-terminology/draft-calabria-bmwg-ai-fabric-terminology.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-calabria-bmwg-ai-fabric-terminology/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/fcalabri/bmwg-ai-fabric-terminology"/>.</t>
    </note>
  </front>
  <middle>
    <?line 136?>

<section anchor="introduction">
      <name>Introduction</name>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
      <section anchor="scope-and-purpose">
        <name>Scope and Purpose</name>
        <t>This document defines terminology specifically for benchmarking
Ethernet-based AI network fabrics in controlled laboratory
environments. The defined terms cover:
distributed AI training collective communication patterns, LLM
inference serving architectures, RDMA transport semantics (RoCEv2
and UET), congestion control mechanisms, fabric topology
characteristics, and performance metric definitions.</t>
        <t>This document does not define acceptance criteria, performance
requirements, or configuration recommendations. It does not address
benchmarking of live operational networks, intra-node (NVLink/PCIe)
interconnects, or storage networking.</t>
      </section>
      <section anchor="relationship-to-existing-bmwg-work">
        <name>Relationship to Existing BMWG Work</name>
        <t>This document extends the foundational BMWG terminology established
in <xref target="RFC1242"/> (network interconnect benchmarking terminology) and
<xref target="RFC8238"/> (data center benchmarking terminology). Where terms are
defined in those RFCs, this document provides AI fabric context
extensions; the core definitions remain as established. This document
also extends the test methodology framework of <xref target="RFC2544"/> and
<xref target="RFC8239"/> as applied in the companion AI fabric methodology
documents.</t>
      </section>
      <section anchor="relationship-to-companion-documents">
        <name>Relationship to Companion Documents</name>
        <t>This document is one of three companion Internet-Drafts addressing AI
fabric benchmarking:</t>
        <ul spacing="normal">
          <li>
            <t><xref target="I-D.calabria-bmwg-ai-fabric-terminology"/> (this document): Terminology
definitions.</t>
          </li>
          <li>
            <t><xref target="I-D.calabria-bmwg-ai-fabric-training-bench"/>: Benchmarking methodology for AI training
workloads.</t>
          </li>
          <li>
            <t><xref target="I-D.calabria-bmwg-ai-fabric-inference-bench"/>: Benchmarking methodology for AI inference
serving workloads.</t>
          </li>
        </ul>
        <t>Implementers and evaluators <bcp14>SHOULD</bcp14> read this terminology document
before applying the companion methodology documents. Terms defined
here are used normatively in those documents and are not redefined
there unless the specific workload context introduces a substantive
difference, which is noted explicitly.</t>
      </section>
    </section>
    <section anchor="general-benchmarking-terms">
      <name>General Benchmarking Terms</name>
      <t>The following terms establish the general measurement framework
applicable to all AI fabric benchmarking activities.</t>
      <table anchor="tab-gen-bench">
        <name>General Benchmarking Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>AI Fabric</strong></td>
            <td align="left">The dedicated Ethernet backend network interconnecting accelerators (GPUs/XPUs) for distributed AI training and inference workloads. Typically implemented as a non-blocking Clos (fat-tree) topology running RoCEv2 or UET transport. Distinct from the front-end (management/storage) network.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DUT</strong></td>
            <td align="left">Device Under Test. The network element(s) whose performance characteristics are being measured. In AI fabric benchmarking the DUT is one or more fabric elements: leaf switches, spine switches, NICs, or the complete fabric assembly.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>SUT</strong></td>
            <td align="left">System Under Test. The complete AI compute system including accelerators, NICs, the fabric DUT, and serving/training software, when end-to-end metrics are the measurement objective.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RT</strong></td>
            <td align="left">Router Tester / Traffic Generator. Test equipment capable of generating and receiving network traffic at specified rates with nanosecond-resolution timestamping sufficient for the measurements defined in the companion methodology documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>JFI</strong></td>
            <td align="left">Jain's Fairness Index. A scalar measure of flow-level throughput fairness across n flows: <tt>JFI = (Σxᵢ)² / (n · Σxᵢ²)</tt> where xᵢ is the throughput of flow i. A value of 1.0 indicates perfect fairness; lower values indicate disparity. <strong><bcp14>SHOULD</bcp14></strong> be computed per <xref target="RFC1242"/> reporting conventions.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Offered Load</strong></td>
            <td align="left">The total traffic rate presented to the DUT from test equipment, expressed as a fraction of line rate (0–100%) or as absolute bit/s. Offered load is controlled independently of DUT absorption, enabling characterization of saturation behavior.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Trial Duration</strong></td>
            <td align="left">The time interval over which a single measurement is conducted. For AI fabric tests, the <strong><bcp14>RECOMMENDED</bcp14></strong> minimum is 60 seconds for throughput tests and 300 seconds for soak/stability tests, per the methodology in <xref target="RFC2544"/> as extended herein.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Warmup Period</strong></td>
            <td align="left">A mandatory pre-measurement interval during which traffic is sent but results are not recorded. Ensures adaptive routing tables, PFC watermarks, and DCQCN/UET congestion controllers reach steady state before measurement begins. <strong><bcp14>RECOMMENDED</bcp14></strong> minimum: 10 seconds.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Binary Search</strong></td>
            <td align="left">An iterative test procedure for determining the maximum offered load at which a DUT meets a specified acceptance criterion (e.g., zero packet loss). The search halves the candidate load range at each iteration, converging to a resolution of 0.1% offered load within 10 iterations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Percentile Latency</strong></td>
            <td align="left">A latency statistic expressing that the specified fraction of all measured latency samples fall at or below the reported value. Denoted Pxx (e.g., P50, P95, P99, P99.9). Tail latency (P99 and above) is especially relevant for AI fabric benchmarking because SLO violations are determined by worst-case, not median, performance.</td>
          </tr>
        </tbody>
      </table>
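      <t>Two of the computational definitions above, JFI and Percentile
Latency, translate directly into executable formulas. The following is an
illustrative sketch (function names and sample values are ours, not part
of the terminology); the percentile uses the nearest-rank method, one of
several valid conventions, which a test report should state explicitly.</t>
      <sourcecode type="python"><![CDATA[
import math

def jfi(throughputs):
    """Jain's Fairness Index: (sum of x)^2 / (n * sum of x^2)."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))

def percentile(samples, pxx):
    """Nearest-rank Pxx: smallest sample value such that at least
    pxx percent of all samples fall at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(pxx / 100.0 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

print(jfi([10, 10, 10, 10]))          # 1.0  (four equal flows: perfect fairness)
print(round(jfi([8, 8, 8, 4]), 4))    # 0.9423  (one slow flow lowers the index)
print(percentile(range(1, 101), 99))  # 99  (P99 of the samples 1..100)
]]></sourcecode>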
    </section>
    <section anchor="collective-communication-terms">
      <name>Collective Communication Terms</name>
      <t>The following terms define the collective communication operations that
are the primary traffic sources in distributed AI workloads.</t>
      <table anchor="tab-collect-comm">
        <name>Collective Communication Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Collective Operation</strong></td>
            <td align="left">A coordinated communication pattern executed simultaneously across all accelerators in a training or inference group. Core collectives: AllReduce (gradient aggregation), AllGather (parameter distribution), ReduceScatter (partial reduction + scatter), and AllToAll (expert dispatch in MoE models).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>AllReduce</strong></td>
            <td align="left">A collective in which each participant contributes a tensor and all participants receive the element-wise sum (or other reduction) of all contributions. The dominant communication primitive in data-parallel and tensor-parallel training. BusBW is the primary KPI.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>AllGather</strong></td>
            <td align="left">A collective in which each participant contributes a shard of a tensor and all participants receive the concatenation of all shards. Used in tensor-parallel (Megatron-style) layers to reconstruct distributed activations or parameters.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ReduceScatter</strong></td>
            <td align="left">A collective combining an element-wise reduction with a scatter, so each participant receives a distinct slice of the reduced result. Used in ZeRO-stage optimizer strategies and as the first half of a ring-AllReduce.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>AllToAll</strong></td>
            <td align="left">A collective in which each participant sends a distinct payload to every other participant and receives a distinct payload from every other participant. The critical collective for Mixture-of-Experts token dispatch. Generates N×(N−1) independent point-to-point flows for N participants (each participant exchanges with each of the other N−1).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Ring Algorithm</strong></td>
            <td align="left">An AllReduce (or AllGather/ReduceScatter) algorithm structured as a logical ring of participants. Each participant sends to its right neighbor and receives from its left neighbor in 2(N−1) steps. The corresponding BusBW algo_factor is 2(N−1)/N; the algorithm's bandwidth utilization approaches the optimum for large N. Standard baseline for BusBW calculation.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>BusBW</strong></td>
            <td align="left">The effective data throughput per accelerator during a collective operation, computed as BusBW = (data_size × algo_factor) / time, where algo_factor normalizes for the collective type and algorithm:<br/><br/>AllReduce, ring / recursive doubling: 2 × (n−1) / n<br/>AllReduce, binary / double-binary tree: 2 × log₂(n) / n<br/>AllGather, ring: (n−1) / n<br/>ReduceScatter, ring: (n−1) / n<br/>AllToAll, direct: (n−1) / n<br/><br/>n = number of participating accelerators. Ring AllReduce is the conventional comparison baseline.<br/><br/>Note: collective libraries commonly select the algorithm dynamically based on message size (e.g., tree-based for small messages, ring for large messages); algo_factor therefore varies with message size and <bcp14>MUST</bcp14> be reported per message-size bucket when dynamic selection is active. Reports <bcp14>MUST</bcp14> state: collective type, algorithm, algo_factor value, collective library name and version, and n. Units: Gbps per accelerator.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>CCL</strong></td>
            <td align="left">Collective Communication Library. A software library providing optimized implementations of collective operations (AllReduce, AllGather, etc.) over a specific transport. The CCL implementation <strong><bcp14>MUST</bcp14></strong> be documented in the test report.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>SPMD</strong></td>
            <td align="left">Single Program Multiple Data. The execution model underlying bulk-synchronous distributed training, in which all accelerators execute identical computation on distinct data partitions, synchronizing at collective barriers between steps.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Bulk Synchronous Parallel (BSP)</strong></td>
            <td align="left">A distributed computation model structured as alternating compute and communicate phases with a global synchronization barrier between phases. Standard training workloads follow BSP: forward pass → backward pass → AllReduce gradient sync → optimizer step.</td>
          </tr>
        </tbody>
      </table>
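      <t>The BusBW definition above can be checked numerically. The sketch
below applies the algo_factor normalization tabulated in the BusBW entry;
the helper names and the example figures (a 1 GB AllReduce across 8
accelerators completing in 100 ms) are illustrative, not prescribed.</t>
      <sourcecode type="python"><![CDATA[
# algo_factor per collective/algorithm, as tabulated in the BusBW entry.
FACTORS = {
    ("allreduce", "ring"): lambda n: 2 * (n - 1) / n,
    ("allgather", "ring"): lambda n: (n - 1) / n,
    ("reducescatter", "ring"): lambda n: (n - 1) / n,
    ("alltoall", "direct"): lambda n: (n - 1) / n,
}

def busbw_gbps(data_bytes, time_s, algo_factor):
    """BusBW = (data_size x algo_factor) / time, reported in Gbps."""
    return data_bytes * algo_factor * 8 / time_s / 1e9

f = FACTORS[("allreduce", "ring")](8)   # 2 * 7 / 8 = 1.75
print(busbw_gbps(1e9, 0.1, f))          # 140.0 Gbps per accelerator
]]></sourcecode>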
    </section>
    <section anchor="distributed-parallelism-strategy-terms">
      <name>Distributed Parallelism Strategy Terms</name>
      <t>The following terms define the parallelism strategies used in
distributed AI model training and inference, which determine traffic
patterns and fabric requirements.</t>
      <table anchor="tab-distri-parallel">
        <name>Distributed Parallelism Strategy Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Data Parallelism (DP)</strong></td>
            <td align="left">A distributed training strategy replicating the full model on each accelerator, partitioning the training dataset across replicas. Gradient synchronization after each backward pass requires an AllReduce across all DP ranks. Memory-efficient for small models; communication overhead scales with parameter count.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Tensor Parallelism (TP)</strong></td>
            <td align="left">A distributed training and inference strategy partitioning individual weight matrices across multiple accelerators. Each rank computes a partial result; AllGather or ReduceScatter collectives are required within each layer to aggregate results. Dominant parallelism within a node (intra-node).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Pipeline Parallelism (PP)</strong></td>
            <td align="left">A distributed strategy assigning contiguous groups of transformer layers to distinct stages (accelerators or nodes). Each stage processes one microbatch and forwards activations to the next stage. Generates point-to-point inter-stage traffic across the fabric (activations and gradients).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Expert Parallelism (EP)</strong></td>
            <td align="left">A parallelism strategy for Mixture-of-Experts models distributing expert sub-networks across accelerators. Each token is routed to its designated experts (typically top-K of E total experts), requiring AllToAll communication for dispatch. Wide EP (e.g., 96-way) generates dense inter-node AllToAll at every MoE layer.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>MoE</strong></td>
            <td align="left">Mixture of Experts. A transformer architecture replacing dense feed-forward layers with a set of E expert sub-networks, of which only top-K experts (typically K=2 or K=4) are activated per token via a learned router. MoE enables large model capacity with sub-linear compute, but introduces AllToAll communication requirements proportional to E and sequence length.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DP Attention</strong></td>
            <td align="left">Data Parallelism applied to the attention computation, where the KV cache is partitioned across data-parallel ranks. Each rank holds 1/DP_SIZE of the KV cache; AllToAll communication exchanges attention outputs. Used in inference to reduce per-accelerator memory footprint for long contexts.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ZeRO</strong></td>
            <td align="left">Zero Redundancy Optimizer. A memory optimization strategy for data-parallel training that shards model states (parameters, gradients, optimizer states) across DP ranks instead of replicating them. Stage 1 shards optimizer states; Stage 2 adds gradient sharding; Stage 3 adds parameter sharding. Each stage increases AllGather/ReduceScatter communication.</td>
          </tr>
        </tbody>
      </table>
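      <t>The memory/communication trade-off of the ZeRO stages can be made
concrete with the mixed-precision Adam accounting used in the original
ZeRO work: 2 bytes of fp16 parameters, 2 bytes of fp16 gradients, and 12
bytes of fp32 optimizer state per model parameter. The sketch below is a
back-of-envelope model-state estimate under those assumptions, not a
sizing tool; all names are illustrative.</t>
      <sourcecode type="python"><![CDATA[
def zero_model_state_bytes(params, dp_ranks, stage):
    """Per-rank model-state memory (bytes) under ZeRO sharding.
    Assumes mixed-precision Adam: 2 (fp16 params) + 2 (fp16 grads)
    + 12 (fp32 optimizer states) bytes per parameter."""
    p_bytes, g_bytes, o_bytes = 2.0, 2.0, 12.0
    if stage >= 1:
        o_bytes /= dp_ranks   # Stage 1: shard optimizer states
    if stage >= 2:
        g_bytes /= dp_ranks   # Stage 2: also shard gradients
    if stage >= 3:
        p_bytes /= dp_ranks   # Stage 3: also shard parameters
    return params * (p_bytes + g_bytes + o_bytes)

# 1B-parameter model across 64 DP ranks, stages 0 (plain DP) through 3:
for s in range(4):
    print(s, zero_model_state_bytes(1e9, 64, s) / 1e9, "GB per rank")
]]></sourcecode>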
    </section>
    <section anchor="network-transport-terms">
      <name>Network Transport Terms</name>
      <section anchor="rocev2-and-rdma-terms">
        <name>RoCEv2 and RDMA Terms</name>
        <t>The following terms define RDMA and RoCEv2 transport semantics as
used in AI fabric benchmarking. UET, PDC, and ROD are included here
for direct comparison with their RoCEv2 counterparts; the full set of
UET-specific terms is defined in
<xref target="ultra-ethernet-transport-uet-terms"/>.</t>
        <table anchor="tab-rocev2">
          <name>RoCEv2 and RDMA Terms</name>
          <thead>
            <tr>
              <th align="left">Term</th>
              <th align="left">Definition</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>RDMA</strong></td>
              <td align="left">Remote Direct Memory Access. A transport mechanism enabling direct memory-to-memory data transfer between hosts without involving the destination CPU, providing zero-copy semantics and kernel bypass. Implementations include InfiniBand Verbs (native IB), iWARP (RDMA over TCP), and RoCEv2 (RDMA over Converged Ethernet v2).</td>
            </tr>
            <tr>
              <td align="left">
                <strong>RoCEv2</strong></td>
              <td align="left">RDMA over Converged Ethernet version 2. An RDMA transport encapsulating the InfiniBand transport layer (BTH) over UDP/IP, enabling RDMA semantics on standard Ethernet infrastructure. Requires lossless fabric operation (PFC or equivalent) for correctness. Standardized in IBTA Volume 1 Annex A17; transported over UDP destination port 4791.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>QP</strong></td>
              <td align="left">Queue Pair. The fundamental RDMA communication endpoint comprising a Send Queue (SQ) and Receive Queue (RQ). QPs are connection-oriented in Reliable Connected (RC) mode. Multiple QPs per source-destination pair are used to increase ECMP entropy in fabric load balancing.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Reliable Connected (RC)</strong></td>
              <td align="left">An RDMA QP transport service type providing reliable, in-order delivery between exactly two endpoints. The primary QP type for AI collective operations via RoCEv2. Requires connection setup before data transfer and maintains per-QP state for retransmission.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>RDMA Verb</strong></td>
              <td align="left">An operation primitive of the RDMA programming model. Key verbs: SEND/RECV (two-sided, receiver must post a buffer), WRITE (one-sided, target memory written directly), READ (one-sided, remote memory read), and Atomic (compare-and-swap, fetch-and-add). AI collectives predominantly use WRITE and SEND.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>UET</strong></td>
              <td align="left">Ultra Ethernet Transport. A transport protocol defined by the Ultra Ethernet Consortium (UEC) Specification 1.0 as a next-generation AI/HPC fabric transport. UET is connectionless, supports native packet spraying (RUD), and integrates multipath load balancing and congestion control. Transported over UDP destination port 4793 (pending IANA verification).</td>
            </tr>
            <tr>
              <td align="left">
                <strong>PDC</strong></td>
              <td align="left">Packet Delivery Context. The ephemeral, lightweight transport endpoint in UET, analogous to but distinct from an RDMA Queue Pair. PDCs are connectionless (no setup handshake), enabling low-latency initiation and reduced per-flow state in the NIC and switch.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>ROD</strong></td>
              <td align="left">Reliable Ordered Delivery. A UET transport service providing reliable, in-order packet delivery, semantically equivalent to RoCEv2 RC mode. Suitable for legacy RDMA applications requiring strict ordering guarantees.</td>
            </tr>
          </tbody>
        </table>
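        <t>The note under QP that multiple Queue Pairs per source-destination
pair increase ECMP entropy can be illustrated with a toy path selector.
RoCEv2 implementations typically vary the UDP source port per QP, so each
QP presents a distinct 5-tuple to the fabric. The CRC-based hash,
addresses, and port values below are illustrative and do not represent
any vendor's actual ECMP function.</t>
        <sourcecode type="python"><![CDATA[
import struct
import zlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, n_paths):
    """Toy ECMP selector: hash of the (partial) 5-tuple modulo path count."""
    key = struct.pack("!IIHH", src_ip, dst_ip, src_port, dst_port)
    return zlib.crc32(key) % n_paths

ROCE_DPORT = 4791  # RoCEv2 UDP destination port

# One QP: a single 5-tuple, so every packet takes the same path.
one_qp = {ecmp_path(0x0A000001, 0x0A000002, 49152, ROCE_DPORT, 8)}

# Sixteen QPs: sixteen distinct source ports, typically spread over paths.
many_qp = {ecmp_path(0x0A000001, 0x0A000002, 49152 + qp, ROCE_DPORT, 8)
           for qp in range(16)}

print(len(one_qp), sorted(many_qp))
]]></sourcecode>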
      </section>
      <section anchor="ultra-ethernet-transport-uet-terms">
        <name>Ultra Ethernet Transport (UET) Terms</name>
        <t>The following terms define UET-specific concepts introduced by the
Ultra Ethernet Consortium (UEC) Specification 1.0
<xref target="UEC-SPEC-1.0"/>.</t>
        <table anchor="tab-uet">
          <name>Ultra Ethernet Transport (UET) Terms</name>
          <thead>
            <tr>
              <th align="left">Term</th>
              <th align="left">Definition</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>RUD</strong></td>
              <td align="left">Reliable Unordered Delivery. A UET transport service providing reliable delivery without maintaining packet order across paths. Enables native packet spraying across ECMP paths without reorder-buffer overhead at the receiver NIC. The preferred UET service class for AI training collectives.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>RUDI</strong></td>
              <td align="left">Reliable Unordered Delivery for Idempotent operations. A UET transport service optimized for operations safe to execute more than once (e.g., RDMA Writes to non-accumulating targets), allowing simplified retransmission logic with reduced state overhead.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>UUD</strong></td>
              <td align="left">Unreliable Unordered Delivery. A UET transport service providing best-effort, unordered packet delivery with minimal overhead. Suitable for telemetry, speculative operations, or workloads with application-layer loss tolerance.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>UEC Profile</strong></td>
              <td align="left">A defined subset of UET features targeting a specific use case: AI Base (core AI training/inference, mandatory feature set), AI Full (AI Base plus deferred send, exact-match tagging, extended atomics), or HPC (latency-optimized for traditional HPC workloads with fine-grained synchronization).</td>
            </tr>
            <tr>
              <td align="left">
                <strong>LLR</strong></td>
              <td align="left">Link Layer Retry. An optional UEC link-layer enhancement providing fast per-hop error recovery at the Ethernet link layer. LLR detects symbol errors at the FEC level and retransmits the affected frame before it is dropped, reducing the frequency of transport-layer retransmission and improving tail latency.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Packet Trimming</strong></td>
              <td align="left">An optional UEC link-layer behavior in which a congested switch, rather than dropping the full packet, transmits only the packet header (trimmed packet) to the receiver. Trimming enables the receiver to detect loss and initiate selective retransmission more rapidly, reducing bandwidth waste versus silent drop.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>CBFC</strong></td>
              <td align="left">Credit-Based Flow Control. An optional UEC link-layer buffer management mechanism using explicit credit grants from downstream to upstream devices. CBFC provides backpressure without transmitting PFC PAUSE frames, eliminating the head-of-line blocking and storm propagation risks associated with PFC.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Entropy Value</strong></td>
              <td align="left">A per-packet field in the UET header used to distribute packets of a single message across available ECMP paths, providing explicit spray entropy independent of the IP 5-tuple. Enables hardware-assisted packet spraying without requiring transport-layer state in the switch.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>GIN</strong></td>
              <td align="left">GPU-Initiated Networking. A communication paradigm in which GPU threads directly initiate network RDMA operations (sends, one-sided writes/reads) to the NIC hardware without CPU involvement, eliminating the CPU-GPU synchronization round-trip. Reduces effective latency by several microseconds for fine-grained operations.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>KVCXL</strong></td>
              <td align="left">KV Cache Transfer Library. A software library providing standardized point-to-point data transfer primitives (register, transfer, notify) for inference engines, abstracting underlying transport mechanisms (intra-node interconnect, RDMA, PCIe, storage interfaces). Enables transport-agnostic KV cache migration in disaggregated serving architectures.</td>
            </tr>
          </tbody>
        </table>
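        <t>Packet Trimming as defined above can be sketched as a toy queue
model: when an arriving packet would overflow the buffer, the switch
forwards only its header instead of silently discarding the packet, so
the receiver learns of the loss from the trimmed header rather than from
a sender-side timeout. The buffer and 64-byte trimmed-header sizes below
are illustrative; a real implementation would also carry trimmed headers
on a separate high-priority queue.</t>
        <sourcecode type="python"><![CDATA[
HDR_BYTES = 64  # illustrative size of a header-only (trimmed) packet

def enqueue(queue, pkt_bytes, capacity_bytes):
    """Admit a packet to a trimming switch queue.
    Returns ("forward", size) if the packet is buffered, or
    ("trim", HDR_BYTES) if only the header is propagated."""
    if sum(queue) + pkt_bytes <= capacity_bytes:
        queue.append(pkt_bytes)
        return ("forward", pkt_bytes)
    # Congested: propagate the header only; the receiver can request
    # selective retransmission of the trimmed payload immediately.
    return ("trim", HDR_BYTES)

q = []
results = [enqueue(q, 4096, 10000) for _ in range(4)]
print(results)  # two packets fit the 10000-byte buffer; the next two are trimmed
]]></sourcecode>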
        <section anchor="uet-transport-services-comparison">
          <name>UET Transport Services Comparison</name>
          <table anchor="tab-uet-compare">
            <name>UET Transport Services Comparison</name>
            <thead>
              <tr>
                <th align="left">Service</th>
                <th align="left">Ordered</th>
                <th align="left">Reliable</th>
                <th align="left">Retransmission Complexity</th>
                <th align="left">Primary Use Case</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>ROD</strong></td>
                <td align="left">Yes</td>
                <td align="left">Yes</td>
                <td align="left">Full per-QP state</td>
                <td align="left">Legacy RDMA / ordered AI ops</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>RUD</strong></td>
                <td align="left">No</td>
                <td align="left">Yes</td>
                <td align="left">Reduced (unordered)</td>
                <td align="left">AI training collectives with spray</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>RUDI</strong></td>
                <td align="left">No</td>
                <td align="left">Yes</td>
                <td align="left">Minimal (idempotent)</td>
                <td align="left">RDMA Writes; simple retransmit</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>UUD</strong></td>
                <td align="left">No</td>
                <td align="left">No</td>
                <td align="left">None</td>
                <td align="left">Telemetry, speculative ops</td>
              </tr>
            </tbody>
          </table>
        </section>
      </section>
    </section>
    <section anchor="congestion-control-and-fabric-behavior-terms">
      <name>Congestion Control and Fabric Behavior Terms</name>
      <t>The following terms define congestion management mechanisms and
associated fabric behaviors critical to AI workload performance.</t>
      <table anchor="tab-congest-control">
        <name>Congestion Control and Fabric Behavior Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>PFC</strong></td>
            <td align="left">Priority Flow Control (IEEE 802.1Qbb). A lossless Ethernet mechanism in which a receiver transmits a PAUSE frame to its upstream neighbor on a specific priority class when its ingress buffer approaches a configured threshold, temporarily halting transmission of that priority. Required for lossless RoCEv2 operation. PFC operates hop-by-hop and can propagate congestion upstream (PFC storm risk).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>PFC Storm</strong></td>
            <td align="left">A pathological condition in which PFC PAUSE frames propagate across multiple hops, causing widespread throughput degradation or deadlock unrelated to the original congestion source. Detection and mitigation <strong><bcp14>SHOULD</bcp14></strong> be part of soak test evaluation per the companion methodology documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>PFC Deadlock</strong></td>
            <td align="left">A circular PFC dependency in which sets of flows mutually pause each other indefinitely, resulting in zero progress for affected traffic classes. Deadlock risk is elevated in non-tree topologies and <strong><bcp14>MUST</bcp14></strong> be evaluated in fabric-level soak tests.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ECN</strong></td>
            <td align="left">Explicit Congestion Notification (<xref target="RFC3168"/>). An IP-layer mechanism in which a congested router marks packets with the Congestion Experienced (CE) codepoint in the IP ECN field instead of dropping them. The receiver echoes congestion feedback to the sender via the transport protocol, triggering rate reduction. Used with RoCEv2 as part of DCQCN.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DCQCN</strong></td>
            <td align="left">Data Center Quantized Congestion Notification. An end-to-end congestion control algorithm for RoCEv2 flows, combining ECN marking at congested switches with rate-based sender reduction using an AIMD scheme. Note: PFC serves as a separate, orthogonal backstop to prevent packet loss during DCQCN convergence; PFC is <strong>not</strong> a component of the DCQCN algorithm itself.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ECN Marking Ratio</strong></td>
            <td align="left">The fraction of packets (expressed as a percentage) that are marked with the CE codepoint in the IP ECN field over a measurement interval. A high ECN Marking Ratio indicates persistent congestion and is a primary Fabric Health Indicator.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Incast</strong></td>
            <td align="left">A traffic pattern in which multiple sources simultaneously send to a single destination, potentially overwhelming the destination's NIC receive buffer and the switch's egress port buffer. Incast is a dominant congestion mechanism in AllReduce and collective operations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Incast Ratio</strong></td>
            <td align="left">The ratio of concurrent senders to receivers in an incast communication pattern (N:1). The incast ratio determines the oversubscription factor at the destination port and is a primary test parameter for congestion characterization.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Packet Spray</strong></td>
            <td align="left">A load balancing strategy distributing individual packets of a single RDMA message across all available ECMP paths, maximizing link utilization at the cost of potential out-of-order delivery at the receiver. Native in UET (RUD mode); requires NIC reorder buffering for RoCEv2 RC mode.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DLB / Flowlet</strong></td>
            <td align="left">Dynamic Load Balancing using flowlet detection. A per-flow rerouting mechanism that reassigns a flow to a new ECMP path when the flow has been idle longer than the flowlet gap threshold (typically 500 ns–2 µs), reducing out-of-order packet risk compared to packet spray while improving utilization over static per-flow ECMP.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ECMP</strong></td>
            <td align="left">Equal-Cost Multi-Path routing. A forwarding mechanism distributing traffic across multiple equal-cost paths, typically via hash of the IP 5-tuple (or entropy field in UET). ECMP imbalance (MMR &gt; 1.0) is a primary fabric efficiency metric for AI traffic.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>MMR</strong></td>
            <td align="left">Max-Mean Ratio. The ratio of the flow count (or traffic load) on the most heavily utilized link to the average flow count per link across all fabric links. MMR = 1.0 indicates perfect ECMP balance; MMR &gt; 1.0 quantifies imbalance that degrades effective fabric bandwidth.</td>
          </tr>
        </tbody>
      </table>
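      <t>For illustration, the following Python sketch computes the MMR
defined above from per-link loads. The link loads shown are
hypothetical example values, not measured data.</t>
      <sourcecode type="python"><![CDATA[
def max_mean_ratio(link_loads):
    """MMR = load on the most utilized link / mean load per link."""
    mean_load = sum(link_loads) / len(link_loads)
    return max(link_loads) / mean_load

# Perfect ECMP balance: every link carries the same load -> MMR 1.0
print(max_mean_ratio([100, 100, 100, 100]))  # 1.0

# Hash polarization: one elephant-flow link carries twice the load.
# mean = 125, max = 200 -> MMR = 1.6 (degraded effective bandwidth)
print(max_mean_ratio([200, 100, 100, 100]))  # 1.6
]]></sourcecode>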
      <section anchor="load-balancing-strategy-comparison">
        <name>Load Balancing Strategy Comparison</name>
        <table anchor="tab-load-balance">
          <name>Load Balancing Strategy Comparison</name>
          <thead>
            <tr>
              <th align="left">Strategy</th>
              <th align="left">Granularity</th>
              <th align="left">Reorder Risk</th>
              <th align="left">Utilization</th>
              <th align="left">Complexity</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>ECMP (5-tuple hash)</strong></td>
              <td align="left">Per-flow</td>
              <td align="left">None</td>
              <td align="left">Low (elephant flow bias)</td>
              <td align="left">Low</td>
            </tr>
            <tr>
              <td align="left">
                <strong>DLB / Flowlet</strong></td>
              <td align="left">Per-flowlet</td>
              <td align="left">Low</td>
              <td align="left">Medium</td>
              <td align="left">Medium</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Packet Spray (RoCEv2)</strong></td>
              <td align="left">Per-packet</td>
              <td align="left">High</td>
              <td align="left">High</td>
              <td align="left">High (NIC reorder buffer)</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Packet Spray (UET RUD)</strong></td>
              <td align="left">Per-packet</td>
              <td align="left">None (transport tolerates OOO)</td>
              <td align="left">High</td>
              <td align="left">Low</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="fabric-topology-and-infrastructure-terms">
      <name>Fabric Topology and Infrastructure Terms</name>
      <t>The following terms define fabric topology architectures and
infrastructure components referenced in the companion methodology
documents.</t>
      <table anchor="tab-fabric-topo">
        <name>Fabric Topology and Infrastructure Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Fabric DUT Boundary</strong></td>
            <td align="left">The precise measurement boundary for BMWG AI fabric benchmarks. Defined as the NIC Ethernet port (transmit side at source, receive side at destination). All benchmarked metrics (throughput, latency, loss, congestion) are measured at or between NIC Ethernet ports. Intra-node segments (NVLink, PCIe Gen4/5, CXL) are outside the DUT boundary and <bcp14>MUST NOT</bcp14> be included in fabric benchmark results without explicit labelling as a separate measurement component.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Intra-Node Transfer Overhead</strong></td>
            <td align="left">The latency and bandwidth consumed by data movement within a single server node: specifically, the GPU-to-NIC path via PCIe or CXL, and GPU-to-GPU communication via NVLink. Intra-node transfer overhead is a contextual measurement reported alongside fabric benchmarks in end-to-end decomposition tests but is not itself the benchmarked entity in any test in this document.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Clos / Fat-Tree Topology</strong></td>
            <td align="left">A multi-stage switch topology providing non-blocking or oversubscribed connectivity between all leaf-to-leaf pairs. In AI fabric deployments, a two-tier (leaf-spine) or three-tier (leaf-spine-superspine) Clos is standard. Full bisection bandwidth (1:1) is the target for training fabrics; 2:1 or 4:1 oversubscription may be acceptable for inference fabrics.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Rail-Optimized Topology</strong></td>
            <td align="left">A topology in which the NIC ports of each server are distributed across multiple ToR switches (one NIC port per switch), such that collective traffic between adjacent servers traverses different physical paths. Minimizes switch-to-switch traffic during ring AllReduce, maximizing effective BusBW. Requires ECMP-aware collective placement.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Bisection Bandwidth</strong></td>
            <td align="left">The aggregate bandwidth across the minimum cut that divides the fabric into two equal halves. Non-blocking fabrics provide bisection bandwidth equal to half the total edge (server-facing) bandwidth. Limits worst-case all-to-all communication throughput.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Oversubscription Ratio</strong></td>
            <td align="left">The ratio of server-facing (downlink) bandwidth to fabric-facing (uplink) bandwidth at a switch tier of a Clos fabric, typically the leaf layer. A 1:1 ratio is non-blocking; higher ratios (e.g., 2:1, 4:1) reduce fabric cost but may bottleneck all-to-all and AllReduce patterns when all server ports are active simultaneously.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ToR Switch</strong></td>
            <td align="left">Top-of-Rack switch. The first-hop aggregation switch connecting accelerator servers in a rack to the spine layer of the fabric. In rail-optimized topologies, multiple ToR switches serve a single rack, with each server's NICs distributed across ToRs.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Spine / Superspine</strong></td>
            <td align="left">Intermediate and top-layer switches in a multi-tier Clos fabric, providing inter-rack and inter-pod connectivity respectively. Spine switches aggregate multiple ToR switches; superspine switches aggregate multiple spine pods.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>NIC</strong></td>
            <td align="left">Network Interface Controller. The hardware device providing network connectivity for an accelerator host. AI fabric NICs support RDMA (RoCEv2 or UET), hardware offload for collective operations, and, optionally, GPU-Initiated Networking (GIN). NIC model and firmware version <strong><bcp14>MUST</bcp14></strong> be documented in all benchmark reports.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Buffer Occupancy</strong></td>
            <td align="left">The instantaneous or time-averaged fill level of a switch port's packet buffer, expressed in bytes or as a fraction of total buffer capacity. Elevated sustained buffer occupancy indicates congestion. P99 buffer occupancy is a Fabric Health Indicator in the companion methodology documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Zero-Impact Failover</strong></td>
            <td align="left">Sub-microsecond automatic path convergence upon a link or switch failure, resulting in no measurable increase in JCT or TTFT. Requires pre-programmed alternate paths and hardware-level fast reroute (FRR) with sub-microsecond detection, not relying on routing protocol convergence.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Link Utilization</strong></td>
            <td align="left">The fraction of the nominal link capacity actually used for data transmission over a measurement interval, expressed as a percentage. Reported as mean, P95, and P99 per link. High asymmetric link utilization (low average but high peak) is characteristic of bursty AI inference traffic.</td>
          </tr>
        </tbody>
      </table>
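      <t>The oversubscription relationship above can be sketched
numerically, measuring the ratio at the leaf tier as downlink to
uplink capacity. The port counts and speeds below are hypothetical
examples, not requirements of this document.</t>
      <sourcecode type="python"><![CDATA[
def oversubscription_ratio(downlink_gbps, uplink_gbps):
    """Server-facing bandwidth / fabric-facing bandwidth at a
    switch tier (typically the leaf). 1.0 = non-blocking."""
    return downlink_gbps / uplink_gbps

# Non-blocking leaf: 32 x 400G server ports, 16 x 800G uplinks.
print(oversubscription_ratio(32 * 400, 16 * 800))  # 1.0

# Cost-reduced inference leaf: 48 x 400G down, 12 x 800G up.
print(oversubscription_ratio(48 * 400, 12 * 800))  # 2.0
]]></sourcecode>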
    </section>
    <section anchor="training-specific-terms">
      <name>Training-Specific Terms</name>
      <t>The following terms are specific to AI training workload benchmarking
and are used normatively in <xref target="I-D.calabria-bmwg-ai-fabric-training-bench"/>.</t>
      <table anchor="tab-training-specific">
        <name>Training-Specific Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>JCT</strong></td>
            <td align="left">Job Completion Time. The wall-clock elapsed time from the start of a training job (or benchmark iteration) until all participating accelerators complete their work, inclusive of all forward pass, backward pass, and collective communication phases. JCT is the primary end-to-end training efficiency KPI.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Roofline JCT</strong></td>
            <td align="left">The theoretical minimum JCT assuming perfect (zero-contention, zero-queuing) network behavior: <tt>Roofline JCT = computation_time + serialization_delay</tt>, where <tt>serialization_delay = message_size / link_rate</tt>. Provides a reference for evaluating fabric overhead.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>JCT Ratio</strong></td>
            <td align="left">The ratio of measured JCT to Roofline JCT. A value of 1.0 indicates no network-induced overhead. Values &gt; 1.0 quantify fabric inefficiency: <tt>JCT Ratio = JCT_measured / JCT_roofline</tt>. The JCT Ratio is the primary comparative metric for AI training fabric benchmarking.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Gradient Synchronization</strong></td>
            <td align="left">The AllReduce collective operation performed after the backward pass of each training step to sum the locally computed gradients across all data-parallel replicas. The dominant communication event in data-parallel training, occurring once per training step per layer.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Step Time</strong></td>
            <td align="left">The wall-clock duration of a single training iteration (forward pass + backward pass + gradient synchronization + optimizer step). Step time = computation time + communication time, where the communication time is dominated by the AllReduce collective.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Soak Test</strong></td>
            <td align="left">A sustained-load test run for an extended period (minimum 24 hours for stability evaluation) at a defined offered load fraction (e.g., 70% or 90% of maximum throughput). Soak tests detect buffer leaks, ECMP imbalance drift, PFC storm initiation, and long-tail error accumulation not visible in short-duration tests.</td>
          </tr>
        </tbody>
      </table>
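      <t>The Roofline JCT and JCT Ratio formulas above can be sketched
as follows; the compute time, message size, and link rate are
illustrative values, not measured results.</t>
      <sourcecode type="python"><![CDATA[
def roofline_jct(computation_time_s, message_size_bytes,
                 link_rate_bps):
    """Roofline JCT = computation_time + serialization_delay,
    where serialization_delay = message_size / link_rate."""
    serialization_delay_s = (message_size_bytes * 8) / link_rate_bps
    return computation_time_s + serialization_delay_s

def jct_ratio(jct_measured_s, jct_roofline_s):
    """1.0 = no network-induced overhead; > 1.0 quantifies
    fabric inefficiency."""
    return jct_measured_s / jct_roofline_s

# Example: 1.0 s of compute plus a 5 GB gradient exchange over a
# 400 Gb/s link -> serialization delay 0.1 s, roofline JCT 1.1 s.
roofline = roofline_jct(1.0, 5e9, 400e9)
# A measured JCT of 1.32 s gives a JCT Ratio of approx. 1.2,
# i.e. about 20% fabric-induced overhead.
print(jct_ratio(1.32, roofline))
]]></sourcecode>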
    </section>
    <section anchor="inference-specific-terms">
      <name>Inference-Specific Terms</name>
      <t>The following terms are specific to AI inference serving workload
benchmarking and are used normatively in
<xref target="I-D.calabria-bmwg-ai-fabric-inference-bench"/>.</t>
      <table anchor="tab-infer-specific">
        <name>Inference-Specific Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>TTFT</strong></td>
            <td align="left">Time to First Token. The elapsed time from receipt of an inference request by the serving system to emission of the first output token. Encompasses prompt processing (prefill), KV cache generation, optional KV cache transfer (in disaggregated architectures), and the initial decode step. Interactive serving target: TTFT &lt; 500 ms at P99.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ITL</strong></td>
            <td align="left">Inter-Token Latency. The elapsed time between successive output tokens during the autoregressive decode phase. Measured at P50, P95, P99, and P99.9 to characterize tail latency behavior. Interactive serving target: ITL &lt; 50 ms at P99.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>TPS</strong></td>
            <td align="left">Tokens Per Second. Aggregate throughput of the inference serving system, measured as the total number of output tokens generated per second across all concurrent requests. Reported separately for input-side (prefill) TPS and output-side (decode) TPS.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>KV Cache</strong></td>
            <td align="left">Key-Value Cache. The intermediate attention state (key and value projection matrices from multi-head attention layers) computed during the prefill phase and reused during each decode step to avoid redundant recomputation. KV cache size scales with: <tt>layers × attention_heads × head_dim × sequence_length × precision</tt>. The attention head configuration <strong><bcp14>MUST</bcp14></strong> be reported in all benchmark results.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Prefill Phase</strong></td>
            <td align="left">The compute-bound phase of LLM inference in which the entire input prompt is processed in parallel to generate the KV cache and the first output token. Characterized by high arithmetic intensity (200–400 ops/byte), high accelerator utilization (90–95%), and large activation tensors. Prefill latency dominates TTFT for long prompts.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Decode Phase</strong></td>
            <td align="left">The memory-bandwidth-bound phase of LLM inference in which output tokens are generated autoregressively, one token per forward pass, by reading the KV cache. Characterized by low arithmetic intensity (60–80 ops/byte), lower accelerator utilization (20–40%), and memory-bandwidth-limited KV cache reads. Decode throughput limits TPS.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Disaggregated Serving</strong></td>
            <td align="left">An inference serving architecture in which the prefill phase and decode phase are executed on physically separate groups of accelerators (workers), connected by a network fabric. Allows independent scaling of prefill and decode resources (xPyD) but introduces KV cache transfer as a fabric-critical data movement.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>xPyD Ratio</strong></td>
            <td align="left">The allocation ratio of x prefill workers to y decode workers in a disaggregated serving cluster. Example: 3P9D denotes 3 prefill nodes and 9 decode nodes. The optimal xPyD ratio depends on model size, prompt/output length distributions, and TTFT/ITL SLO targets.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Continuous Batching</strong></td>
            <td align="left">A dynamic inference scheduling technique that inserts new requests into an active decode batch as slots become available (without waiting for the current batch to complete), improving accelerator utilization compared to static batching. Generates variable batch sizes that affect fabric traffic burstiness.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>PagedAttention</strong></td>
            <td align="left">A KV cache memory management technique storing attention keys and values in fixed-size, non-contiguous virtual pages (typically 16–64 KB), inspired by OS virtual memory management. Reduces memory fragmentation and enables efficient KV cache sharing across requests with common prefixes.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Prefix Caching</strong></td>
            <td align="left">Reuse of previously computed KV cache segments for inference requests sharing a common prompt prefix (e.g., a fixed system prompt), eliminating redundant prefill computation. Prefix cache hit rate is a secondary KPI for inference serving efficiency.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Normal Dispatch</strong></td>
            <td align="left">An AllToAll MoE dispatch communication mode optimized for the prefill phase. Payload sizes are variable (depending on token-to-expert routing), generating dynamic tensor shapes incompatible with static graph capture. Maximizes throughput for large batches at the cost of higher per-dispatch latency.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Low-Latency Dispatch</strong></td>
            <td align="left">An AllToAll MoE dispatch communication mode optimized for the decode phase. Payload sizes are padded to fixed maximum dimensions (compatible with static graph capture), enabling lower kernel-launch overhead at the cost of slight bandwidth inefficiency. Target: &lt; 200 µs per dispatch round trip.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Expert Choice Routing</strong></td>
            <td align="left">A token routing strategy in which experts select which tokens to process, rather than tokens selecting experts. Each expert accepts its top-C tokens by affinity score, producing perfect load balance but non-uniform AllToAll message sizes across EP ranks.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Auxiliary Loss Top-k</strong></td>
            <td align="left">A top-k routing variant that adds a load-balancing auxiliary loss during training to encourage uniform token distribution across experts. Produces near-uniform AllToAll traffic in inference and reduces hot-spot risk on the fabric.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Top-k with Token Drop</strong></td>
            <td align="left">A top-k routing variant in which tokens destined for overloaded experts are dropped or redirected to a fallback. Reduces worst-case dispatch traffic volume at the cost of model output quality under load.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>T_dispatch</strong></td>
            <td align="left">The dispatch payload per accelerator per MoE layer, computed as <tt>T_dispatch = (B × k × H_model × P_bytes) / N</tt>, where B = batch size (tokens), k = top-k routing count, H_model = hidden dimension, P_bytes = bytes per element (BF16 = 2, FP8 = 1), and N = EP group size. Used as the canonical traffic volume parameter in the MoE test matrix (see Section 7.1 of the companion inference benchmarking draft).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>SLO</strong></td>
            <td align="left">Service Level Objective. A quantitative target for an inference serving KPI. AI inference SLOs typically specify maximum TTFT (e.g., &lt; 500 ms P99) and maximum ITL (e.g., &lt; 50 ms P99) under a specified request arrival rate.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Speculative Decoding</strong></td>
            <td align="left">An inference acceleration technique using a small draft model to generate candidate token sequences verified in parallel by the target model. Reduces effective ITL but generates bursty, variable-length KV cache traffic; noted as a future benchmarking area not fully specified in the current companion documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>S_KV</strong></td>
            <td align="left">The total size in bytes of the KV cache state generated by a single inference request across all transformer layers and all context tokens, computed as <tt>S_KV = 2 × L × H_kv × D × C × P_bytes</tt>, where: L = number of transformer layers; H_kv = number of KV attention heads per layer (H_kv &lt;= H_total for GQA/MQA); D = per-head key/value dimension (head_dim), typically model_dim / H_total; C = context length in tokens (prompt + generated tokens); P_bytes = precision in bytes per element (FP16/BF16 = 2, FP8/INT8 = 1). The factor of 2 accounts for both K and V tensors, each of shape [H_kv, D] per layer per token.</td>
          </tr>
        </tbody>
      </table>
      <t>See Section 7.1 of <xref target="I-D.calabria-bmwg-ai-fabric-inference-bench"/> for the MoE test matrix referenced by T_dispatch above.</t>
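      <t>The S_KV and T_dispatch formulas above can be applied
numerically as follows. The model and MoE parameters are
illustrative values for a hypothetical GQA model and expert-parallel
layer, not figures taken from this document.</t>
      <sourcecode type="python"><![CDATA[
def s_kv_bytes(L, H_kv, D, C, P_bytes):
    """S_KV = 2 * L * H_kv * D * C * P_bytes
    (the factor 2 covers both K and V tensors)."""
    return 2 * L * H_kv * D * C * P_bytes

def t_dispatch_bytes(B, k, H_model, P_bytes, N):
    """Per-accelerator MoE dispatch payload per layer."""
    return (B * k * H_model * P_bytes) / N

# Hypothetical 32-layer GQA model: 8 KV heads, head_dim 128, BF16,
# 4096-token context -> 512 MiB of KV cache state per request.
print(s_kv_bytes(L=32, H_kv=8, D=128, C=4096, P_bytes=2) / 2**20)

# Hypothetical MoE layer: 128-token batch, top-8 routing,
# hidden dim 7168, FP8 payloads, EP group of 64
# -> 112 KiB dispatch payload per accelerator.
print(t_dispatch_bytes(B=128, k=8, H_model=7168, P_bytes=1,
                       N=64) / 2**10)
]]></sourcecode>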
      <section anchor="inference-phase-characteristics">
        <name>Inference Phase Characteristics</name>
        <table anchor="tab-infer-character">
          <name>Inference Phase Characteristics</name>
          <thead>
            <tr>
              <th align="left">Phase</th>
              <th align="left">Compute Bound</th>
              <th align="left">Arithmetic Intensity</th>
              <th align="left">Accelerator Util.</th>
              <th align="left">Primary KPI</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>Prefill</strong></td>
              <td align="left">Yes</td>
              <td align="left">200–400 ops/byte</td>
              <td align="left">90–95%</td>
              <td align="left">TTFT</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Decode</strong></td>
              <td align="left">No (memory BW bound)</td>
              <td align="left">60–80 ops/byte</td>
              <td align="left">20–40%</td>
              <td align="left">ITL, TPS</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="kpi-classification-terms">
      <name>KPI Classification Terms</name>
      <t>The following terms define the three-tier KPI taxonomy used across both
companion methodology documents.</t>
      <table anchor="tab-kpi-class">
        <name>KPI Classification Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Primary KPI</strong></td>
            <td align="left">A top-level performance indicator directly representing end-user experience or training efficiency. In training: JCT Ratio and BusBW. In inference: TTFT and ITL. Primary KPIs are the principal reporting metric and the basis for comparative benchmarking across DUT implementations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Secondary KPI</strong></td>
            <td align="left">A fabric-level performance indicator providing mechanistic explanation for primary KPI values. Examples: collective operation throughput (BusBW), KV cache transfer goodput, AllToAll dispatch latency, ECMP imbalance (MMR), and link utilization. Secondary KPIs enable root-cause analysis of Primary KPI deviations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Fabric Health Indicator (FHI)</strong></td>
            <td align="left">An operational metric characterizing fabric stability and anomaly conditions rather than peak performance. FHIs include: PFC event rate, PFC storm occurrence, ECN marking ratio, packet loss rate, buffer occupancy (P99), and retransmission rate. FHIs <strong><bcp14>SHOULD</bcp14></strong> be continuously monitored and reported throughout all test categories.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Goodput</strong></td>
            <td align="left">The application-useful data delivered per unit time, excluding retransmitted packets and protocol overhead. Benchmark reports <bcp14>MUST</bcp14> use the qualified term to avoid ambiguity. <br/><strong>Fabric_Goodput:</strong> RDMA message payload bytes successfully delivered per unit time at the DUT boundary, excluding transport headers, framing overhead, padding, and retransmitted bytes. This is the numerator quantity in KV_xfer_bandwidth and EP_alltoall_bandwidth. Units: GB/s or Gbps; reports <bcp14>MUST</bcp14> state which.<br/><strong>Inference_Goodput:</strong> Output tokens successfully delivered per unit time, counting only requests that complete without preemption, eviction, or error. Corresponds to TPS_output over successfully completed requests only. Units: tokens/second.<br/>The two planes <bcp14>MUST NOT</bcp14> be conflated. KV_BW measures Fabric_Goodput; it does not measure Inference_Goodput.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Zero Packet Loss</strong></td>
            <td align="left">A test acceptance criterion requiring that no packets are dropped by the DUT during the measurement interval. For RoCEv2 and UET transports, zero packet loss is the target operating condition. The binary search procedure in the companion methodology documents determines the maximum offered load satisfying this criterion.</td>
          </tr>
        </tbody>
      </table>
      <section anchor="kpi-tier-summary">
        <name>KPI Tier Summary</name>
        <table anchor="tab-kpi-tier">
          <name>KPI Tier Summary</name>
          <thead>
            <tr>
              <th align="left">Tier</th>
              <th align="left">Training Examples</th>
              <th align="left">Inference Examples</th>
              <th align="left">Purpose</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>Primary KPI</strong></td>
              <td align="left">JCT Ratio, BusBW</td>
              <td align="left">TTFT, ITL, TPS</td>
              <td align="left">Direct end-user experience / business impact</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Secondary KPI</strong></td>
              <td align="left">AllReduce BusBW, MMR, Link Utilization</td>
              <td align="left">AllToAll dispatch latency, KV transfer goodput</td>
              <td align="left">Root cause analysis of Primary KPI deviations</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Fabric Health Indicator (FHI)</strong></td>
              <td align="left">PFC events, ECN ratio, packet loss, buffer P99, retx rate</td>
              <td align="left">PFC events, ECN ratio, packet loss, buffer P99</td>
              <td align="left">Ongoing fabric stability and anomaly detection</td>
            </tr>
          </tbody>
        </table>
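        <t>The Fabric Health Indicator row above implies a continuous threshold check rather than a one-shot measurement. The sketch below shows one way such a check could be structured; the threshold values and counter names are hypothetical examples only, and actual acceptance thresholds are specified in the companion methodology documents.</t>

```python
# Minimal sketch of continuous FHI evaluation during a trial.
# The FHI set mirrors the tier table above; the limits are
# hypothetical, not normative values from this document.

FHI_THRESHOLDS = {                # hypothetical example limits
    "pfc_events_per_s": 10.0,
    "ecn_marking_ratio": 0.01,    # ECN-marked / forwarded packets
    "packet_loss_rate": 0.0,      # zero-loss target for RoCEv2/UET
    "buffer_occupancy_p99": 0.8,  # fraction of buffer capacity
}

def evaluate_fhis(counters, interval_s):
    """Return per-FHI values and the set of violated indicators."""
    values = {
        "pfc_events_per_s": counters["pfc_events"] / interval_s,
        "ecn_marking_ratio":
            counters["ecn_marked_pkts"] / counters["fwd_pkts"],
        "packet_loss_rate":
            counters["dropped_pkts"] / counters["offered_pkts"],
        "buffer_occupancy_p99": counters["buffer_p99_fraction"],
    }
    violated = {name for name, v in values.items()
                if v > FHI_THRESHOLDS[name]}
    return values, violated

counters = {"pfc_events": 25, "ecn_marked_pkts": 50_000,
            "fwd_pkts": 10_000_000, "dropped_pkts": 0,
            "offered_pkts": 10_000_000, "buffer_p99_fraction": 0.65}
values, violated = evaluate_fhis(counters, 60.0)
print(sorted(violated))  # empty: no indicator exceeded here
```

        <t>A benchmark harness would run such a check on every reporting interval across all test categories, flagging trials whose FHIs are violated even when the Primary KPIs look healthy.</t>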
      </section>
    </section>
    <section anchor="referenced-standards-abbreviations">
      <name>Referenced Standards Abbreviations</name>
      <t>The following abbreviations refer to normative and informative IETF
documents referenced throughout this document and the companion
methodology documents.</t>
      <table anchor="reference-standard">
        <name>Referenced Standards Abbreviations</name>
        <thead>
          <tr>
            <th align="left">Reference</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>RFC 1242</strong></td>
            <td align="left">"Benchmarking Terminology for Network Interconnection Devices" (Bradner, 1991). Defines foundational benchmarking terms (throughput, latency, frame loss rate, back-to-back frames). The baseline terminology reference for BMWG work. Where terms in this document overlap with RFC 1242 definitions, the AI fabric context definitions herein take precedence.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 2544</strong></td>
            <td align="left">"Benchmarking Methodology for Network Interconnect Devices" (Bradner &amp; McQuaid, 1999). Defines test methodologies for throughput, latency, frame loss rate, and back-to-back measurements. The AI fabric methodology documents extend RFC 2544 procedures for AI-specific traffic patterns and test durations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 8238</strong></td>
            <td align="left">"Data Center Benchmarking Terminology" (Avramov &amp; Rapp, 2017). Extends RFC 1242 with data center-relevant terms including forwarding table scaling, congestion, and VM/SDN. Incast, ECN, and buffer occupancy concepts in this document align with RFC 8238 definitions.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 8239</strong></td>
            <td align="left">"Data Center Benchmarking Methodology" (Avramov &amp; Rapp, 2017). Defines test methodologies for data center network functions including incast, ECN marking, and lossless behavior. The AI fabric companion methodology documents extend RFC 8239 for distributed AI collective traffic patterns.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 2119 / RFC 8174</strong></td>
            <td align="left">"Key words for use in RFCs to Indicate Requirement Levels" (Bradner, 1997; Leiba, 2017). Define the normative requirement language: <bcp14>MUST</bcp14>, <bcp14>MUST NOT</bcp14>, <bcp14>REQUIRED</bcp14>, <bcp14>SHALL</bcp14>, <bcp14>SHALL NOT</bcp14>, <bcp14>SHOULD</bcp14>, <bcp14>SHOULD NOT</bcp14>, <bcp14>RECOMMENDED</bcp14>, <bcp14>MAY</bcp14>, and <bcp14>OPTIONAL</bcp14>. RFC 8174 clarifies that these terms are normative only when in uppercase; lowercase uses are not normative.</td>
          </tr>
        </tbody>
      </table>
    </section>
    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>This work has benefited from the discussions at the joint IPPM/BMWG meeting and on the BMWG mailing list. Thanks to Carsten Rossenhoevel, Mohamed Boucadair, and Sowjanya Reddy for their valuable reviews and comments.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC2544">
          <front>
            <title>Benchmarking Methodology for Network Interconnect Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
            <date month="March" year="1999"/>
            <abstract>
              <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2544"/>
          <seriesInfo name="DOI" value="10.17487/RFC2544"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="IBTA-ROCE" target="https://www.infinibandta.org">
          <front>
            <title>InfiniBand Architecture Specification, Annex 16: RoCE</title>
            <author>
              <organization>InfiniBand Trade Association</organization>
            </author>
            <date year="2010"/>
          </front>
        </reference>
        <reference anchor="UEC-SPEC-1.0" target="https://ultraethernet.org">
          <front>
            <title>Ultra Ethernet Specification 1.0</title>
            <author>
              <organization>Ultra Ethernet Consortium</organization>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="I-D.calabria-bmwg-ai-fabric-terminology">
          <front>
            <title>Benchmarking Terminology for AI Network Fabrics</title>
            <author fullname="Fernando Calabria" initials="F." surname="Calabria">
              <organization>Cisco</organization>
            </author>
            <author fullname="Carlos Pignataro" initials="C." surname="Pignataro">
              <organization>Blue Fern Consulting</organization>
            </author>
            <author fullname="Qin Wu" initials="Q." surname="Wu">
              <organization>Huawei</organization>
            </author>
            <author fullname="Giuseppe Fioccola" initials="G." surname="Fioccola">
              <organization>Huawei</organization>
            </author>
            <date day="26" month="February" year="2026"/>
            <abstract>
              <t>   This document defines benchmarking terminology for evaluating
   Ethernet-based network fabrics used in distributed Artificial
   Intelligence (AI) training and inference workloads.  It provides a
   unified vocabulary consolidating and extending terms from RFC 1242,
   RFC 8238, and the companion AI fabric methodology documents,
   establishing precise, vendor-neutral definitions for collective
   communication primitives, RDMA transport mechanisms (RoCEv2 and Ultra
   Ethernet Transport), congestion control behaviors, AI-specific Key
   Performance Indicators (KPIs), and fabric topology concepts.

   This document is a companion to draft-bmwg-ai-fabric-training-
   bench-00 and draft-bmwg-ai-fabric-inference-bench-00.  Those
   documents SHOULD NOT be applied without first consulting the
   terminology defined herein.  Where definitions herein overlap with
   RFC 1242 or RFC 8238, the AI fabric context definition in this
   document takes precedence.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-calabria-bmwg-ai-fabric-terminology-00"/>
        </reference>
        <reference anchor="I-D.calabria-bmwg-ai-fabric-training-bench">
          <front>
            <title>Benchmarking Methodology for AI Training Network Fabrics</title>
            <author fullname="Fernando Calabria" initials="F." surname="Calabria">
              <organization>Cisco</organization>
            </author>
            <author fullname="Carlos Pignataro" initials="C." surname="Pignataro">
              <organization>Blue Fern Consulting</organization>
            </author>
            <author fullname="Qin Wu" initials="Q." surname="Wu">
              <organization>Huawei</organization>
            </author>
            <author fullname="Giuseppe Fioccola" initials="G." surname="Fioccola">
              <organization>Huawei</organization>
            </author>
            <date day="26" month="February" year="2026"/>
            <abstract>
              <t>   This document defines benchmarking terminology, methodologies, and
   Key Performance Indicators (KPIs) for evaluating Ethernet-based AI
   training network fabrics.

   As large-scale distributed AI/ML training clusters grow to tens of
   thousands of accelerators (GPUs/XPUs), the backend network fabric
   becomes the critical bottleneck determining job completion time
   (JCT), training throughput, and accelerator utilization.

   This document establishes vendor-independent, reproducible test
   procedures for benchmarking fabric-level performance under realistic
   AI training workloads, covering RDMA/RoCEv2 transport, the Ultra
   Ethernet Transport (UET) protocol defined by the UEC Specification
   1.0 [UEC-1.0], congestion management (PFC, ECN, DCQCN, CBFC), load
   balancing strategies (ECMP, DLB, packet spraying), collective
   communication patterns (AllReduce, AlltoAll, AllGather), and scale/
   soak testing.

   The methodology enables apples-to-apples comparison across different
   switch ASICs, vendor implementations, NIC transport stacks (RoCEv2
   vs. UET), and fabric architectures (2-tier Clos, 3-tier Clos, rail-
   optimized).

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-calabria-bmwg-ai-fabric-training-bench-00"/>
        </reference>
        <reference anchor="I-D.calabria-bmwg-ai-fabric-inference-bench">
          <front>
            <title>Benchmarking Methodology for AI Inference Serving Network Fabrics</title>
            <author fullname="Fernando Calabria" initials="F." surname="Calabria">
              <organization>Cisco</organization>
            </author>
            <author fullname="Carlos Pignataro" initials="C." surname="Pignataro">
              <organization>Blue Fern Consulting</organization>
            </author>
            <author fullname="Qin Wu" initials="Q." surname="Wu">
              <organization>Huawei</organization>
            </author>
            <author fullname="Giuseppe Fioccola" initials="G." surname="Fioccola">
              <organization>Huawei</organization>
            </author>
            <date day="26" month="February" year="2026"/>
            <abstract>
              <t>   This document defines benchmarking terminology, methodologies, and
   Key Performance Indicators (KPIs) for evaluating Ethernet-based AI
   inference serving network fabrics.  As Large Language Model (LLM)
   inference deployments scale to disaggregated prefill/decode
   architectures spanning hundreds or thousands of accelerators (GPUs/
   XPUs), the interconnect fabric becomes the critical bottleneck
   determining Time to First Token (TTFT), Inter-Token Latency (ITL),
   and aggregate throughput in tokens per second (TPS).  This document
   establishes vendor-independent, reproducible test procedures for
   benchmarking fabric-level performance under realistic AI inference
   workloads.

   Coverage includes RDMA-based KV cache transfer between disaggregated
   prefill and decode workers, Mixture-of-Experts (MoE) expert
   parallelism AllToAll communication, request routing and load
   balancing for inference serving, congestion management under bursty
   inference traffic patterns, and scale/soak testing.  The methodology
   enables apples-to-apples comparison across implementations, NIC
   transport stacks (RoCEv2, UET), and fabric architectures.

   This document is a companion to [TRAINING-BENCH], which addresses
   training workloads.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-calabria-bmwg-ai-fabric-inference-bench-00"/>
        </reference>
        <reference anchor="RFC1242">
          <front>
            <title>Benchmarking Terminology for Network Interconnection Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="July" year="1991"/>
            <abstract>
              <t>This memo discusses and defines a number of terms that are used in describing performance benchmarking tests and the results of such tests. This memo provides information for the Internet community. It does not specify an Internet standard.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="1242"/>
          <seriesInfo name="DOI" value="10.17487/RFC1242"/>
        </reference>
        <reference anchor="RFC8238">
          <front>
            <title>Data Center Benchmarking Terminology</title>
            <author fullname="L. Avramov" initials="L." surname="Avramov"/>
            <author fullname="J. Rapp" initials="J." surname="Rapp"/>
            <date month="August" year="2017"/>
            <abstract>
              <t>The purposes of this informational document are to establish definitions and describe measurement techniques for data center benchmarking, as well as to introduce new terminology applicable to performance evaluations of data center network equipment. This document establishes the important concepts for benchmarking network switches and routers in the data center and is a prerequisite for the test methodology document (RFC 8239). Many of these terms and methods may be applicable to network equipment beyond the scope of this document as the technologies originally applied in the data center are deployed elsewhere.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8238"/>
          <seriesInfo name="DOI" value="10.17487/RFC8238"/>
        </reference>
        <reference anchor="RFC8239">
          <front>
            <title>Data Center Benchmarking Methodology</title>
            <author fullname="L. Avramov" initials="L." surname="Avramov"/>
            <author fullname="J. Rapp" initials="J." surname="Rapp"/>
            <date month="August" year="2017"/>
            <abstract>
              <t>The purpose of this informational document is to establish test and evaluation methodology and measurement techniques for physical network equipment in the data center. RFC 8238 is a prerequisite for this document, as it contains terminology that is considered normative. Many of these terms and methods may be applicable beyond the scope of this document as the technologies originally applied in the data center are deployed elsewhere.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8239"/>
          <seriesInfo name="DOI" value="10.17487/RFC8239"/>
        </reference>
        <reference anchor="RFC3168">
          <front>
            <title>The Addition of Explicit Congestion Notification (ECN) to IP</title>
            <author fullname="K. Ramakrishnan" initials="K." surname="Ramakrishnan"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <author fullname="D. Black" initials="D." surname="Black"/>
            <date month="September" year="2001"/>
            <abstract>
              <t>This memo specifies the incorporation of ECN (Explicit Congestion Notification) to TCP and IP, including ECN's use of two bits in the IP header. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="3168"/>
          <seriesInfo name="DOI" value="10.17487/RFC3168"/>
        </reference>
      </references>
    </references>

<section numbered="false" anchor="appendix-a-term-cross-reference-to-companion-documents">
      <name>Appendix A: Term Cross-Reference to Companion Documents</name>
      <t>The following table identifies which terms from this document are used
in each companion methodology document.</t>
      <table anchor="tab-cross-ref">
        <name>Term Cross-Reference to Companion Documents</name>
        <thead>
          <tr>
            <th align="left">Term Category</th>
            <th align="left">Used in Training Bench</th>
            <th align="left">Used in Inference Bench</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">General Benchmarking Terms (§2)</td>
            <td align="left">All terms</td>
            <td align="left">All terms</td>
          </tr>
          <tr>
            <td align="left">Collective Communication (§3)</td>
            <td align="left">AllReduce, AllGather, ReduceScatter, AllToAll, BusBW, CCL, Ring Algorithm, BSP, SPMD</td>
            <td align="left">AllToAll, BusBW</td>
          </tr>
          <tr>
            <td align="left">Parallelism Strategies (§4)</td>
            <td align="left">DP, TP, PP, EP, MoE, ZeRO</td>
            <td align="left">EP, MoE, DP Attention</td>
          </tr>
          <tr>
            <td align="left">RDMA / RoCEv2 (§5.1)</td>
            <td align="left">RDMA, RoCEv2, QP, RC mode, RDMA Verb</td>
            <td align="left">RDMA, RoCEv2, QP, RC mode, GIN, KVCXL</td>
          </tr>
          <tr>
            <td align="left">UET Terms (§5.2)</td>
            <td align="left">UET, PDC, ROD, RUD, RUDI, UUD, LLR, Packet Trimming, CBFC, UEC Profile, Entropy Value</td>
            <td align="left">UET, RUD, GIN</td>
          </tr>
          <tr>
            <td align="left">Congestion Control (§6)</td>
            <td align="left">PFC, PFC Storm, PFC Deadlock, ECN, DCQCN, ECN Marking Ratio, Incast, Incast Ratio, Packet Spray, DLB/Flowlet, ECMP, MMR</td>
            <td align="left">PFC, ECN, DCQCN, Incast, Packet Spray, ECMP</td>
          </tr>
          <tr>
            <td align="left">Fabric Topology (§7)</td>
            <td align="left">Clos, Rail-Optimized, Bisection BW, Oversubscription, ToR, Spine, NIC, Buffer Occupancy, Zero-Impact Failover, Link Utilization</td>
            <td align="left">Clos, Bisection BW, ToR, NIC, Buffer Occupancy, Link Utilization</td>
          </tr>
          <tr>
            <td align="left">Training-Specific (§8)</td>
            <td align="left">JCT, Roofline JCT, JCT Ratio, Gradient Sync, Step Time, Soak Test</td>
            <td align="left">Soak Test</td>
          </tr>
          <tr>
            <td align="left">Inference-Specific (§9)</td>
            <td align="left">—</td>
            <td align="left">TTFT, ITL, TPS, KV Cache, Prefill, Decode, Disaggregated Serving, xPyD, Continuous Batching, PagedAttention, Prefix Caching, Normal/Low-Latency Dispatch, SLO</td>
          </tr>
          <tr>
            <td align="left">KPI Classification (§10)</td>
            <td align="left">Primary KPI (JCT Ratio, BusBW), Secondary KPI, FHI, Goodput, Zero Packet Loss</td>
            <td align="left">Primary KPI (TTFT, ITL), Secondary KPI, FHI, Goodput, Zero Packet Loss</td>
          </tr>
        </tbody>
      </table>
    </section>
    <section numbered="false" anchor="appendix-b-term-taxonomy-summary">
      <name>Appendix B: Term Taxonomy Summary</name>
      <t>The following table provides a concise summary of all defined terms
organized by category, with the section reference for the full
definition.</t>
      <table anchor="tab-taxo">
        <name>Complete Term Taxonomy</name>
        <thead>
          <tr>
            <th align="left">Section</th>
            <th align="left">Term(s)</th>
            <th align="left">Category</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">2</td>
            <td align="left">DUT, SUT, RT, JFI, Offered Load, Trial Duration, Warmup Period, Binary Search, Percentile Latency, AI Fabric</td>
            <td align="left">General Benchmarking</td>
          </tr>
          <tr>
            <td align="left">3</td>
            <td align="left">Collective Operation, AllReduce, AllGather, ReduceScatter, AllToAll, Ring Algorithm, BusBW, CCL, SPMD, BSP</td>
            <td align="left">Collective Communication</td>
          </tr>
          <tr>
            <td align="left">4</td>
            <td align="left">Data Parallelism, Tensor Parallelism, Pipeline Parallelism, Expert Parallelism, MoE, DP Attention, ZeRO</td>
            <td align="left">Parallelism Strategies</td>
          </tr>
          <tr>
            <td align="left">5.1</td>
            <td align="left">RDMA, RoCEv2, QP, Reliable Connected (RC), RDMA Verb, UET, PDC, ROD</td>
            <td align="left">Transport — RDMA / RoCEv2</td>
          </tr>
          <tr>
            <td align="left">5.2</td>
            <td align="left">RUD, RUDI, UUD, UEC Profile, LLR, Packet Trimming, CBFC, Entropy Value, GIN, KVCXL</td>
            <td align="left">Transport — UET</td>
          </tr>
          <tr>
            <td align="left">6</td>
            <td align="left">PFC, PFC Storm, PFC Deadlock, ECN, DCQCN, ECN Marking Ratio, Incast, Incast Ratio, Packet Spray, DLB/Flowlet, ECMP, MMR</td>
            <td align="left">Congestion Control</td>
          </tr>
          <tr>
            <td align="left">7</td>
            <td align="left">Clos/Fat-Tree, Rail-Optimized, Bisection Bandwidth, Oversubscription Ratio, ToR Switch, Spine/Superspine, NIC, Buffer Occupancy, Zero-Impact Failover, Link Utilization</td>
            <td align="left">Fabric Topology</td>
          </tr>
          <tr>
            <td align="left">8</td>
            <td align="left">JCT, Roofline JCT, JCT Ratio, Gradient Synchronization, Step Time, Soak Test</td>
            <td align="left">Training-Specific</td>
          </tr>
          <tr>
            <td align="left">9</td>
            <td align="left">TTFT, ITL, TPS, KV Cache, Prefill Phase, Decode Phase, Disaggregated Serving, xPyD Ratio, Continuous Batching, PagedAttention, Prefix Caching, Normal Dispatch, Low-Latency Dispatch, SLO, Speculative Decoding</td>
            <td align="left">Inference-Specific</td>
          </tr>
          <tr>
            <td align="left">10</td>
            <td align="left">Primary KPI, Secondary KPI, Fabric Health Indicator, Goodput, Zero Packet Loss</td>
            <td align="left">KPI Classification</td>
          </tr>
          <tr>
            <td align="left">11</td>
            <td align="left">RFC 1242, RFC 2544, RFC 8238, RFC 8239, RFC 2119/8174</td>
            <td align="left">Referenced Standards</td>
          </tr>
        </tbody>
      </table>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA919S3MbWZbeHr/iWhU9A1QnQFGleohq9QwFUip2kSJEUlU9
D4c6ASTBbCUy0ZkJUpyuipiYhcPrGUd467AXs5+ICe/Le/8H9y/x+c4595UA
JVa3J+ywIiSRQOZ9nHverzscDntt3hbZnnnwPCtnV8u0fpeXC3OR1cu8rIpq
cWsuq9rsH5lXWXtT1e/Mi3Ra57PmQS+dTuvsmt6kL+VDc9cYD3qztM0WVX27
Z/Lysur15tWsTJc077xOL9vhLC0wQjqcLm8WwzQfXvKAw9aPMXy422vW02Xe
NHlVtrcrevno8OJFr1wvp1m915vTFHu9WVU2Wdmsmz3T1uusRwv8rJfWWUoL
PV1lddrS241Jy7k5Sct0kS2zsn3Qw9YWdbVeARIn370031Wyi5f48EHvXXZL
j8z3emZopsEu8XuwSPxK4GjrNC/1W/qVtpzV9FKG30uFo+wQn5wdnOzz/9X4
8PoRfnpzeIH/ZlVRZLM2v87ox+VyXeYzXj4PWxRn2XwtY/5qzM9fXLzg/7/5
1szS2VXW611n5ZqAYswib6/WU9rcpYJ6525IP6DnCwJm09LzV227avZ2dux7
IxlplFcfGGHn3qc6umqXxYNeL123V1UN8NLkxlyui0Lw48GLrC7ptCoz1tEe
8BME3zZPiwZPjOSjZl3rO/GjVb1Iy/zvGHR7Zpw3s4o/z5ZpXgQg+csZvhoR
qB9sWcc4rYuqMZN8UaZtWlcbyxhvLKPzbLyO58U6M9icGRNCrouW8SVY1own
/MspPUfoU45m7rFty3udl+a79caiXm8syj4Ur+brdXqT5dH807woRjfrv7zi
r+4Cy8t83WSrFW0lr2aEsJvH83JjCfGjH13IQqcYXep70ZJ6ZVUvUxDJXq8H
5uJ+M+bo+cX+8Ox0fLjHA1pWd1Re0vqegwfs17OrvCUiW9eZOV9ls/xSiSwx
+2WZvTe7X+wxZcpi6TQXGdGFJYubm5tRzsNNabg2HdFu+EGH0PxniF0Sv/IT
X9TpPDP7TVPNciVq+sM8zDx6uPuwR7+/ORwPzyf0z+7oYbyDNwWxGHPYXhFi
ZG28cENPb1/sGm9l+tKHV9qZACha1W2+XkbLfPQYyzwaHozuQerxDn6yuNm2
IVpHSuucvcvqUZ61l9jTDomW+/Ofna0gyEtC3Bcjx3L0C2MEgTdYUvzieOS5
ROfNLhOJX3w9IhLuvCF0HT/3cuSorfP0BjXGx/XF8OEjocWszrMG1GI3fVS2
fNTDA4DuJ8jlj2KAysIhi80PIMEJYWY1j5DgQt/9t8WGaIH/LyDEv8F5RXv8
2JE5feWPObMj+7I5z+rrf+vD6yz1/8/T62yy1+sNh0OTThsArO31Lq7yxhCs
1tBlzTwjIZM1kZoa6qh8UNl1WqxT1jkskx9O0yabd/TTxqzxITGheU7T5dN1
m0FqthA3JOB72ElR5As+8f7+0cApv6xju6UbDFpU6bwZmaPWrOrqOp/TKtMe
qbWXOQ16Xc3S6bpI61sDVacq8jkvkMfJ3rdZObc7acxlXS3N2Yux2X30+FHS
w09fPfrsq4Qfpg1BYV6RWkHikHBS9mKWAbZacDVJjxTddFrkzRWGX9UkSpss
MaQ6z6t6WGZr2lAhUM3FeAAAvW7ei3RzGiBf5vi8SVizBzzKZkXSk+afXdGa
Glp/X5R9Xi7LWncMYHry/CABIBa0PIxLP7Z1VdCxXqXXeVXT8PtHw0Ylv/km
u+1NsprVH4D7iKBFK6LnTP+byVEzENAoJNpqJWCgUWfZqm1GXSyin9MAiG1l
fv/7v7g/n//hhx6m+8g7HcT+4YeRubiqmsyfTu/869M3xwfm1ekF7dykq1UB
XLkhI6Rat+Yyr5vWeM2Yjz7EdSGGeY9gm+UlyVf8EB2mfGOq66wu0hWP7BCL
dCHjUKuHsT0y4TwIK4OxQCVtBMQ2fUcoDpTK5tjmSEl3mc/nBZlnn4AR1BWZ
cawB9j75xJxlv1vnNVuljTlOy8WabFQcTmbIBgUVzRvz4OTN+cWDRP4HbPDz
2eHrN0dnhwf4+fzr/eNj90NPnxBY+p/8m+PTk5PDVwfyMmAdfdR7cLL/Vw8E
gR6cTi6OTl/tHz/Y3C6Z2UAUOqcc/I32DWaRNj0i9BnxDmEkz8eTH//L7mPC
jX9HsH20u/vkhx/0l692v3xMv9xcZaXMVpXFrf5K0L/t0flnaY1R0qIgI3eV
t2Rk0LONaa6qm5JPk6D86d8AMv9+z/xiOlvtPv6lfoANRx9amEUfMsw2P9l4
WYC45aMt0zhoRp93IB2vd/+vot8t3IMPf/EXBSG3Ge5+9Re/7DH2nM8qUv4A
ucm6XhEl3SUcQhqxPIRAKsIhcnB0xAMRQFdC5I45FfQAUXpVg/Hc9rLyOq+r
knEZlJ1ZclQmPgPR7fUiyeKdJ3e6P8wqbSE+6dyPj096XsQ0qnCkgVG3yYUb
sivLFgtXJsyc6s3hxXZ+65l20uWePfoGEpjEe4MBBWdXARcmiYMXAn6zwWjn
FR1HWdmjMekMDJnfJprB2GkSjtmrAw6RGBZFZFYu1uLaMsRsCFgkvMTTxdLW
zZHO5wSSphfpBtWlKQDkyrrHSNzpGdMERMp1Oiwrslb7r749zst3O5PxUTbo
MY3T3CUBWhbS0KkTt7Iv09gj5WmFLOYqX4FBHL4HvGhm52nrAkXEfcPs/LJa
62ZoXfxGiLxOeBOTJ0QkgUN8BLyb+EjfYmq41Dv1ogFOrycDgOFjAGilZpbh
9bvfs1JFkJqYYM+iOXNIyDMaskk63NKpQBsypcfbh5uzearKTEdo1fCNlGB7
wf5BYsEEPeKMVQRJuPQiJeiyJjWWIUQ4oPz488dgwREowKBpKit8eVcf1bB6
ToZvR4Kxe/3ACftNHaQiiqC1tVd1Fs4Y68+NxWuczP5RTxcTHtgeCd6PajD+
THH20WkN9kInBWnxEUl/fOiOcrQXu8qXm2aUdyEHmvPHJ9rQqD4+U+Cddhw0
mLJ3tFwVzG2yWvzmajtAtVR5V2fpXLA70r0sIk6zSyAw8OfWKmn+LLdq5SMG
dxPpb6xfsDniXH4ksByNuXd5kXgWHK/O7BAtj7EuC0IUXoLTne1unU6Xq1YG
+8Q0azK0IDFI1Z/nlwqshNSSfHYFHKVpMlgoRB0zMpJvoeV9Yl5mZQbDYcPR
1Yg2d0nSrbrxFo0jZF7bQt9eZmmzFmbvqbXHlEjWUsH6FlQhT4IRm0ohPglL
Mxzk9zy9+d4ceJX1+973pJDyX/r+009dLOfTT+lBEdlsSNAOnYUyhZleelMx
ZK8y6ywrslqtj5eTN83Or+mfAWPcXdL+Tmvx4naluknuMHHO/IggXw6nRTXj
vY5hivcv05aoLcsG3sap1yXPoCYXrYGEvVcIRuaAxdGsFauSZQ6pLe0Qe+wv
XZRoR+XbwG58ZARoB28uGFwH2XVOa39TzklcXNCBitJjwZTJ4vsEiBvG2FBT
6KgSjL/TTEiWUYDY+1F51zljzbQKxzFrswTF6aM6cbNniiy9NA3ZOLMraEbN
ChqH//3V0VgkuaXQgjR4O0rakOY0JfTWXZ/rrs9vmzZbbuzavU5Lxs904KaR
RwnWxXrexRQ7PcNfpqQtiU6lfGnHYUtTXbY3BKOErQNDJzVsKz4wUbkEgBgq
pKBq+ltRKO0ezmQLZ2RL6uLpvx2Y4JdgDELDtLgRf2egea14JDI9mPxINgmp
Ok8FjL2cmag991ZHS1vLcgh/awTYxNos05KwgehnPiQhVhVrpsw2X4IlLFe8
3TVGyJkL6PEE+3JsclM038FcZfe/enHE2/8VAfXPGyL8nMibmOMRHeX7kdk3
DcRMbafCZi+JZw2LjDgvZHK1XlytYIbbN9NZXdF/JT9H+PYbmsI8M/3/+d/e
/69//a+DH/+FgNsvzY//3cgnP/7L4Dc4QRocvwJ/WVHxQ+ucJsd6IHh4Gbuj
h7RXYUwN0xE0O7uMp4beoIPkxxv3IFjPKiWdmlCY0JdFF+1/mln8ZN09ViHr
DCxCrJHymoAnOrXA75SFwdwcE6Ny/LKtyCR1Z45jhgOgEa5F3NqSqvCaCKsS
CBHoMZa/XYIl4BhZRSdS5eH6D//w9/+0+/DhzwYgVTw4ZawhhpETkxoZuy6W
aXkTWmcEi2xFZEKzEUOlYbEUvF+vJOiVlZBC2K9jSBKaw8NN2lozw7qhLCwu
SBkpzIF+7aFBWCzygQ6DnSwqN0mu0ixFTJ+yVrhDwO5eiH5ijS4ClXIHoltv
NNNUpHHky/USr3/x0AgpNUooDpH4fabQzx7GDzVV+m4H0jcvCDfsREAFoTNP
Qc7AsFpyo/o1QdZ6mAQc36X1cr0yEwJfJbixb4jRz9kwBkIMo31bAM3XNatf
DCKLQ7QtoI8hmUn4CE9XE6g3ZBvMAa3DEsNBFU5XbDPTzsUlBkZFG5q8GJub
FMpGyoYdQHEwfj1+tQNxuGn5FtD3SLOjlRBbTOe39B/QT7W5cP3TbJGDLO44
mD2z6yBuAfQ8L+HpPc9grAuASgNzl/U6IQyykGbZHJyHFYdMlEsr75bpez71
KsR24rEWv4DZyyxrWYdzfHfTvKYd97PRYpSYv8vqyqyg3LQ0WtMMRJI1vEZz
lRbXmfCnGcEO3ulMJiVNgqxempqBpZsAMTHPqBe8YtLTTMDeiZoejnZ/Fq8e
4oBwjKDlBnEAI1SCIZoTyRzTzOXsVtGqkN/4dFh5sFxEAEXLCpRdmilkKlAd
rX7hB0ohuYk28C29zh4h8GCMIwwR7nowV9KdMlF/J+/fWzBOPn9I/zz5HP88
4X9GTwDKNC/cHH36VPT0KTGFAXA84yWyoleTUnCdqrC7Q+WZZrOUjAFzfnxq
iBEVNrGHLWXBFFrWlF2mDWIrcOqDZJak0KZl5FMBjH+/Zz4hUhmSOBfbSSJe
zx7crcg/+AGK/ti7qcaRm+oDyr76ekRW3+Hlqny2Ek6xZxUaBBhAOpY9NNW6
nrGY6+rWoRV3H+U/2IlLlVIkm1XEZohkMfJWZxwhXTbjiRuiyoIoLKvWDZ2l
qgSMS6FdAPeF1/6rOlD+ORFrRNCsQ/CQOuGSnkx/Uadz1ofSxaLOFryWQYIn
XqawU0yfRD3ZS1DoHFjkGRnifMYL5+eQqgJLUfzw5udQfPClBk1o0IuK/iEE
f09waUWRaEHrpTmpDknZnmcF2IUaUXaZDngOrvSC8CdmFTz1LF8B1Znr8tmB
YcH/A+EOCqGJgwcb1TAFGVS1H97kRAoNccM+vVUxANx+BpbS3RTCV9i6q4hO
ZPqtQSzGqrRNh4Am7aKQABuvzn9mj3Fknq+b599ZNc5i6jeTowA0cj5/PGga
UkzmvKV7QwlRLjCeNOR7PA6B4Y0GOLub6p8ArcgQHDbtbUE8qkhvIRKJk0Pq
loRTBN+I5tjcVpqldTkMdEw8Qr1NENAhTK01HB+tR042GVKLoWTDVZsA070D
WnNr2zYFbFP2pelwsEJYnfAw+Ovs7JS2CyduRVrEMieRaBBmbknCZ+pZUbcs
h95IKF7KWUBxGTrMD86bSeenHHfDDstg6av0lsUjQZ4sD0IoQfDwHW95ZVtf
ZW37jpfVXiVtAH6GcI0QPyf5e8QRhtXl8JCpHxjwLisdExhZM5FmfvXjv/zh
P/7jbqhmm1VF6h2sU/5BLCMe+VWEsA5F2IlZLCpaz9XSakYB54NItFS0EyHU
gPBa3zOCnSzX2ZQg/ZV3V6vHP576cPsZEMBzkFK+uGrJnKX/pkpuDtYMWDxU
ZJfBM3S6j/qvAIsBtMdVw6zBIEnuJp8TBmdq0JIu8Mw9uvMqgZewrmg5WCeM
HAZVgWwR82pkzlso0UT/iEaxSYSvhevQ9mZr0QOclokvnDFCc+rBsl8/MA6g
7AfSyerhaYAMv5jWO780xovlxJuNabPHX9tn6I8s6ZmEEN42REjmf/xnPp+3
l8QmKjqsHbaOohfFFA6eEldnQa83zvIPEBRp0Mr+9ODdQszOLwN5Ln8cWpkt
f4JZ5XW7KI97+odRdAc4sK4bBme1FqOR/jzCPvulHP2OKe8cRS2AHXk7G07l
dzjvdBTC2T/8wz/0y+44KuHD1dz1Z8tCYvH/Rwzg1AH75yCv4X643wB2lJKQ
Q1LXI3LccKCO/FvKGiwcVcp6xwQzryVcHA0sdKWQkfEjvKqQmxQgUJFP6xTZ
SawAcLid3sJuMLRnJ/PbMl2qG1YCwexcahqICsZuVf5xfBoqZtt6KSYGP0hG
J5OVp2j7xeBphPTsqmcr81oWx0Ivmo4T+BHSnwY2CchYnxryU9M1G3PsI9Qt
6P44X6MRiW1BdMbjNDIuG7t7XWJLPEySaMlsDiWboL3lVDFeLkmfhvkGfiEW
9YbUcGRzTldNlwFZ/jUeHzP3utPGOJZJ2FmnLlE3sQQYmd+rLJ97F7rVUi7D
JQcmR9/hWaBUJyZrZ6OB+HFSH0AJfOngs7TqzkS0FQBVvG3WD+n9lWztyyk6
7/Lk5EDcy+ImmtQVqfxLc4JEHxraHBBflenE8mBvJ/Rws4YnWiJN03Xxbtjc
kuFGehzZI5G2ZrXWxOsiG1aKWjUmhyhX9QBMXzXJ0msaLFKYjBmApJjptPnf
MUm3IaCnaU14XSNJr73JCDlVRlqpVbwz58GqJ04nfX4+GaguFW4lXJQAoSP/
C9hoqToyxRkPLPRaP2nrV0S1jdUvF0U1pd36TajbTxbu1i0vBXLZWXTO+FTj
19DS90D6N3hslZJN+If/8I8cS4o/8QzO2XhYBX8XKqXZKjTbFbhD7Mha7h+2
zMV6PwigaMGcN0vaECu9t/e04lfBq4G+rMmT3RQXOaLtoS8bWHRODGvo92za
S5jIFyaD3NPKB+FEW+0fbMUpH2uxoCAKLRiE6oFDGYjuheZhPT4gnsRTg8vM
s0OCWBpizOoc0IEJj16GRx4iXnoJac1zxCijEABUAtQJvA4HEzjo3tHgJ9my
qm+HWRRLUQnFFvzTrguG+NwVYtsIhFja8H6FWbUuHcO6EFs0AuzFhwEbhzwd
mCOwIXhBXHxNtHiTsR6+TBHgylywZWk5YqQ1iEqPjVuChxXgfR2w+54GyhQy
HSPFKHC8sFNN4eyclHwUbBKzb1O9MJn1UI/MgXUthLShLyN0i4Qin1zknCeT
fCWKfQTJyVZIOpARIuQLTRojPr1Yg2uyG4lFHMsnOPuyOrDivWkMg5dEXsT5
WfeeZ/DqHIoPHLoHu6Qb8EnEWUmdqIlNwhPENCnMrYn8ABryKZFXwGOE9mLH
NuQ4gNrfLmoopxxERfvh8JjX8knvgRJLNQbhoQPhFmZ1e5etK4QRuNAIxuoF
a9bToc0TcwS3iYNiLZOuhYCExMByDlk2nDYvqRM8V791of62Wg2/wdEdakBN
nxkkioiqCosiHlOtZhmocf4diW5zOLH66ZMvhjfp7cBGbTMspGw0RiVpbm5Y
ePTZZQAfH+ONhS99wLBUgPFCZYHQxEJ0C3MRmc+lM2aAPOllls2HViQqYlr/
TtbK9rcAO8E3IiVYYxdgbYHiN8843eGbZ48HTMOKOKopy8Fc5yncA1law1/O
Z0TbxI45FEgAUlWd+Tzi3jNEyXiZWBRoNa0tk0k4ShXkz9xxRqHcAlFxlJUt
GOQHasz/d2tmjEVWLtorl2wxMfvEoErnnN6QZzZPTSkvtU+HWlKipjYesOWy
wFHHe9mbxygduz9VlHjuelUVRPG7OweTt+dHf31oPWx20Kd3QSB7j7xS8B2/
QAI+LTDwSXrpwF5HFm10dMPQV7FkqUZYX7WrOlepVlTKC4ntOL0S3j0G2V8j
0AV2TzobPDCnVq0C9up4qmrJYiMuEQPESTMONYlT1emgTGDeE0+Y63hVEilz
eHBgIW4lNupvEHkETDu6x5JVTkLLXTtld7Sn+sAjJAc2gS6Jx2kU+/1n8r2X
6vaBiO+TnKgz1o/v8L3FpxtqpsI6A4CJcno/xVOUVFtB5UpDrFaKvEpfSMKJ
zh/XV/kxfl5e3ZYanTY9W/izPf42QhJVYiYHY7Fnz04PmMVIbo/Gw3vCitk7
EngmmHcQCPPaLoE1qawG9dHBsV5Jw7vqlp5LrQ0zXc7VjP989Oieqi92Lgk/
hOOkq6jjRvRCsz+DaHcMPC7Z8YkRuh8hE8huJRjxKTLrDwykqwppB7ZeJS+v
q+LaqsNzBNw1KDGevEkCix2haLJlVrfhmRCY3yEDrzDTW+i+I3PUMecV+mGh
8bdZPSUaLCWufvScJGj+3f4ZCUTGAzblL8YTDXXpeQTfjSWGHeb/XT9yioY8
LzD94Cvi+zCPRvBmdzLyCa3SVcOuW9p6sHb/iCia/ecXX6v34c3BZOdoEuSr
8JgeWMy01CZ1qyB2WqfOMh7ZYpuGo/2cF6qY7hwhpHm+GEOG4slrMgLKdqD1
XzXQoGSMsdavuFhKrjt35eNP/TbgNtPFR4fPW3z85ZNdC9bXEwbp63W2hiKc
1+LpuATD5uMuZL8dkVLORY8ErRGpiQv7HFlxMlL//PVAjlmDY/rx2Ws60NcT
oTCbyFmVw6rOnZ/mjNgTp7yN5Xv6tH82HjCrH3m3DEaBciFR6WG0S9qHz+GF
GqhM1RyOTya0elIaVpxko6fAkZtpWpCMYo5jY2hbF2IjJQyX15OIq9Wcl8nO
ck9itY4D988QSTRIMUEhBNGypd7sPalM0LBuKgddjZza4CamwsCaqrDdmwYl
S0glQDoPaGh765XNrIkZCY4Lyf4t/WXQDmlGycTBlHXGT2pbFAcjAAGUb6Hi
EdqHdlVT4WdX4l5bcsIphPcIFYUg22mzZ84PXx3snB2OvyXl8qYaNqRSzxMb
AyINZI1kHWJ0hG3TNRJaiJt8d3Z0cWj6pEnZ56Xk1+oXN3UOvUfZaXGLwPzh
/kH0Ri1sWt9AlrmNyLcVXLl9ESrZkD4bNjfpKiGVmrR+/p3EOmF1dCRci2dD
3nSsyB+RdWJQ7NICkKQPg67TB+HCOzpDBkbQayuaxkmn6S2D9s4uCqb/5pBo
Z6Nvg6Y1k9I2tGmlXF+x8/Vk7BLh/BqQuJWHiAQelpBavhJHtjJ9TWlqVnXK
XtH+2ZsDBSTMnoXYQeJHINWmQ3fqKOymh408MD7G1j4jFVCrd4/2X+0Dq9yu
vd1/MGaIT2SxB5YUx6LEqqt3RaofsnESU8AZoi6RUJTMrSktCkpKNkW1gDsA
lYnr1lv9HLlMLcMIOC2tpMsJWTb0y0oJlVSCOamJ77JBIIA4M1bTm6T9iTiu
OFoq4XZQL2ezCv2q9/vV0VgMHs7DdiR8eqCqirK7U7AoGsRCBigYJbI7RvdB
HqfIYFld4mQmW4xezAFeqg2cjZXLn69zziYUEyNbpLRV0SZXqp1LcZI10KHh
zpBARhPj98WaFF06zqwJFWS4VKCGil68VZ9lNfiTO6kR5HQxuIfqGyqVrvbZ
m6uWcHs/mXB7v/992K3lhx/uq5G+6Zzym7L6E87ZSzCrcVrRwbXtcvSCB2pr
geBhz6qtfwfH0IdZTvMbbvw64+GGwve9x1TTDp2MICS3chPtjLBBbMpuZVbA
l9upfQo5tyOLNwdHH4MYj3M0z5arquW0/5VPprwLlj5EhpcD0d2kl2x920gQ
F1aQrYvoz8yFPRlVv0NCKTMaFKaQjb5eWqVWZB9X4VvEbBAi02KASIhLroZY
SZZzCMOw0HVSSrHnTVn/afgzJa4Nvzh9m5i1G6LDKzQMi6ReTeaWxURsoeW8
pZY5C1EJAyBShrjAxEeHxN/l+cdQVP2CnZ4VnBylTyciGkMk8DIvbG6dlbio
0hKXGfZ6maVc/KtwF0XYET7EPvJB94Bsz6GA9rnCMkC9nSAe4/O2dVhIAeQa
HpkXsFT7dpRVsWZOI+iNFJpENMjhkn3EbbpYcLzRJY2nrMcALQgmkPF9lSDD
GB9beC/UQYbHOuADCIYLLB3zxpETJ2CPj88YZijhNccM5DOc00j0Qx0dECZx
9k6PISuvAP+gYJUD+GnDOTPDq2plaLesh3I1960lfMc7MZh1ntISOK41I6bb
3C6npDHx24196wVm5+oSEZtKFq34v1PO35EU5qVLRM+5bmBOtsNKdEaiGBeh
qsWJeOuiACAA3VuH6lgdWvImmWB9trJTUYQcLkiDhqLsdevtsLMlEkF42epR
mZX2CUo6rrjUgDgKbyKKrgkFJsYDQjy+V45HgwRhGrdYlaPZgfV+WgY8cst2
ft2IPyMewicjlCe6ISsxmc2YQE1BDDLmhXW6yufFbQB5n+V1Q4iSsfFPlNHk
rFdgky6/4fkL0frG9HLeDp9z4sgLqEhjq2Z+CMIidHyJXuCvWTcaquCqTDPj
CeAGhMeZdb95dYMkzixdYvfrlf485yo+EhVYnK/SRtSR0+rBAKz0s8fCHAae
gsn+m/NDwU9idMQ3l3npA6Y4KwRXOL7lChdZ9yP+smRHeLpQD3neIKyiDeI0
7oY5XIxH7eVvkXliIzsZElgZL0iwFC7BAjxREcUa3z6OpijTSB6nK8uRZBsb
17kmemAe73WA0F/l4MwqQ2DL+zxINTePJubzISnRRea1Dnhcb9iOI7xi6ugq
IF7dsLpll5ojlTpWpV8evWL4vJy8GR4pUs+tU5U9C11PCpy183yx9KRL73IV
OniuNVk9hdhSP/GBBVk0nEeZGGfTstmbNTs8kCNSWAAWBm6rY5pR/IWZFod1
sIkeGGJZ3VB5jYYJQzrc1Uhjuk2Q+mhNlClci9dSZIwwZlgRFcmTUHsSeH7z
7fjXkpf0zbdmzKGTC+uuuF9CUhM6zDpB0Nj94dspmX6dLYAddeK+5oqO/PJW
fHI+WpKVC7Q5SVyTLEwa5AVt78kUtLoIS5pFu0sM2l4krskFP3GZziRCbFmq
Q8p0UVZcjOOiS8t8ofa8VGq4kPl8e+OS0ERaEy2ofXQfG0jMpU+Y7P0D56L2
NdJ6gV3xMFL0YzpNa2EGyvX3rCMELH/MZb3vEQIkW119YG9I+RlDAwrMm/Bv
ZND+Fa3A/sv6U+TTIv0ksCt3jFVFSceqVo2JraZXlRvpTBXlvtNeB+CI240J
jV4yq+qYFcGQJ6ro9nNnSgysn1tU/aeixAdyse1o5jyc/lNidxd3qcdNfNxD
dW25Y//YUdpiJOenUQHK4sX1Z1aN5OOWcuDw2SZeWUPoBdLJhYi0O5nPpyce
F9QiRWVX97ORJ6ojELYh8/I20g9M/+jw8NB89fDRaPf1dAqPn/fmOyrxakGg
i3ndxylXaSjAbaaC0wxcdju0RW9OrOy6xIjlZNOc3QoL6AtWS7GJ7Zl0VpOe
PZDFJFUaRJGJrQHPkI9LsuUq1Y5mIfWxDE1bN6XzJs816qsbt60OLOsesXIi
v0LeVqvh9Ja1d3bupaVTPaKTdzvnKIjoKNBLvNOOPj7Hxy6zhGSXLTWAPHGN
0QToXRUpmLabzUSLI/6NGj+R/6SDrbTPiEvbn8N3OddcLbjv0zmUKuL0NbrN
+BwAAhYJBFmT3ZxEKFDB2KoXnl3ttOKFzVkNa8QRm+Qa6Cp9p3Xb2kMR+kLm
eybco/AeYDjQxdrKmLwGM6gZRFZpYk+igq5RDU3qR5brds0OuxUXQXIylhS2
QOViUspEJ7ed8WggqW+Fpz9TZ4uzqGy2EaMwZI9dHp83l2eiJFNjQXBvcJ6+
9tiwpUFhlq9CR17Q7jRi2jkIOngcjkVBO7RaZMDGXlWt97T1pfz6s90vvvrh
hwHbBkcTVQC30ri3tySzxXDxs9N4bSQ6nJATeVCZAmkyPhzQGHQe1qmsKiyt
2GnYLkchNN+W4u1yPIYWV0nMx86D5B8YFRZHoSqiaUGe2lzJTnQBak++WIgr
tZZcO60K04wRaV6oDtTGoSxXebv8GfziE2fG0tzq9Ro+YGhjd4CeYR202djS
Kc1XCwC1dBmMrklQ2QbIufY07YZBbIUz9qd1BAoYXwInLAH5nkcnB6aZISYw
MlLbwIyKpCMwkqsFM+jySEwiUF5VCzYjAXbiZtyHipjKNXs3fN23Lf9hUNki
biDEUx6eiOHTT0nvJCBKi0yS7N6+kZc8LEgSZMVlgOnmRHd/BsC64qSwLNui
Z7/TD2IlNeDcg4blABQEANOePaPy4UcwVrP3tzUggPS8IiFnNtYZt9tgK61s
QyxgjwEvUpVC1Tm+zkiQXfl+pBYSR+UsbVrlfpb92GJiR8FOHNgq5051MXBD
auvVbg3iT2SfstImNeXYNYnmYrkl++LPGzbBbMmoFdjaSlYQk57JhHEyUcoz
6MmDbcjOg2JarzyFXCnIT2YS2hIqjsHTwRF+Rmo2ytm6rjOt1PNlqcxspLga
UOQxttdr91/t7WqDA31ORnd55+Igqth5M0X/zpWwLal3UYfdRrhvAw2kk4PL
rpLcCc87Ol1GOq62c6jots9BHJV0KWlRamqQLr3Nq8G6e9e1gUzPre4N7jAh
5RvsxaQpCpeP3qrAb5jyHaYhhw8unk5CQSccQtwqtVWw0OwRkeUY2+CpT2gX
lJSBBN1s8VQ3Lqec/fg5GUxQjotMCOtAS57QoMY8d6AT/nkpD6rbT1i8j1DW
me0f4jGYWQ4yNpC5y/1puCtExSHrGw86UYHZh4kHrlLUuUApnhcZZyZaf6d9
BMtYpCuvCodJrJ8/fGjK5g9//0+PzI//KgnA6meMQK38m9UVtZ2YM4SeJDAV
WoL38oYnymyRm2jMPBiwJ8+7TyQt5/B3hF/DMY6e816GE2xa4QUoakpvDL0I
TzvZ3Y7NZTw0Y5UioQcENAOC5dWmJ42Lgq3PzTn+4A8YyankSyEcevLk5Mz8
EiHLQUyntleYr8vV5qU+KIevXAb0iUQTTtL3w5MMYXRAcRSzKYcCnN/Hq7Qb
BzUPYEhxKxfs9yojyxFJGXwm6EcCmrNJvPBULaLRoHPzIwEd28ShXOo9aKvP
7ugVxWBRoDw1Dijmd6wIXUKj9UBjxBdjI/KlWaPX+rvjgiRmckOrHbmapPsb
6Br17lCvSxDtuHHsx9+jiKaELSFOmjNlIWegjO/NmwDlv4/cOR/w3TC0+hbb
gISSbzWxdOL8G8f0S59shdVVqsXuZpqnzUC/uotT2YHACfRJc5LNEXH3P2xI
BtvB1y9Gqf178zW0mPi//iZDHWwdFBwZKTKbo/Im+141lwAl8Or09HTg55Ot
WlQArg8dLgkefPxQxaGjiHFhmxkCWY6iNMZ7OHO6Xd8jPyN7cuLMSK/VIptD
naof7i4XtXu9j1fnhWvxZ55zh9/61uk52oQ/7jGlD0nBP9oAb0lMZsNVAsLa
pgJn7txA4id1rjp45LkrH+uWLqHNfR4oN7A1i8JPlPlGg33vkEisdz1hOyLs
Jy0FGK7Xkm2rJGmGG4tEaq93RjfZQgoltAGzuKJRRvR45/PEjH99LKOTAOKl
2xZzDmSuWFrb6LskbZ9r6XbmWozZOIQL7ZCChKseYHqFllV0Sg5xvB6LbbzC
NlyM4FQzB9x525gE1umDh9zWfylJORwUWFYSCvFVZKrWscEn9Vp7UTtz6ReH
uA+ZrYAyqycQpAxBOgICnmTC6UMIqMQKM54WuEeH4iIULuVFr0zgdLV1p3er
K1FPof/wKW3gLk4jsLDnGQOzEfKR9nVcXCOtvMWu5P2FWAkttL0VA0CV740+
/Rp7Rb9U4sJpO7yAL8eyGNutjjUbqX4QC8jzDx/IiTqwVnVoLky5Llky6K6x
JovtENRoRIqNckNSJAY3nfamZMAW1a02OU+RgDtsUXjc5ze5celAOpWi4UD3
q2GzhpUqT/FG0URPg04jCTtMicPMtKjZolx/l4wi14VS8lU1/0K8F9rz/ql5
tLeL6R/jv66JtExv+bIKaTVnE2N8dEoHcSlNZHsMT12+R+ccHNCdTWzZmuR4
kpolPfqEBrhGImpJFCuYF9WZ97Qg19YNJWnb/NUAOaQ8U1y2bpU3d5Lz36Yz
MUJrtjvpAfyPqjrtlEzDXt027A/WVDMOq3BDE5kMaGARTMdX/4st8rN9CAJ7
zOtg3GglyKqGqjJMb9Kod5hB3R1Tou8/aE//uT19x458MavHjKAM07abnCEB
gDXDXFIEghrNvITiiqxxKPTaOxA+qoBc7P0JmmKwFSHldRqLGy21rr9oNiey
7AvYh5dcUzgI9dDjnOMZvvcdqA6QTjcK0bz4cn1Nuwh9hxfiPkthBZ6f27Y9
ZuJMnwIMGE9EgjoFczoPsKfsmoInEN82NvWOKDEBHQ5sgZxrpt9Iz0wmx6ol
tYt40bsQENrcTX0yrr6ezVduESY0JYTmCiizjhPKVYATaZ0zIguoqhWs0zO4
eG0+woVtnCWxF9+6zrLY7V20HX0xvOrQacwdnMUBbg0uhSSx0xqcxWeSeW99
cgdD4Hm8XMVMiXgWAx4j3rJmG5uhwRxbO+el7Zhzx4oZLNy6n3swag8KVK9q
CoddBm9TBBBz9gBDwqQTqdllcNhcdtLVq47YqbmxpHSLH5nzqOV1QOpbAfLU
eEHywZfkCZrbbZ9AJBFgTQ05svkC1u4rMq3qcZkfknkUSld9N9oPx23KCD1Q
ZDYKhCcfj1YBiM+rHzU/J/7uJq0uL6VDWnSpVZSvmSKR0eZgQam6K5HG9F8e
vSJVGSJFSkC5LD6vlzyVLQK7uxlMGurYqjQFbVHYLXs6m61Xqe1/Kv5LbtAv
9MhKQb7MhuoywPyscCD0JL5AITWM/ec2FKT2YNiBmZYzvYVpp02WIw+9sjRZ
kS2KHplDGyRr1k0r6TM2MdquOvBFeOtghC6pWx7FtHf40X9ir2+U/A6P6NFZ
i0bfBZQW6a+zng6DBCBcnlctxQmWihJuox9mveLANztdKkuu6LpdSHF7EGos
K1V/Wf1xhV7EtX41vsDLuKs4kNroiGxrkFhLln41mWabA41ckpgcJeegipeS
xM+Ls7OBr0kP9+Pcm4k2TJYMIMmT4uW66p1gry5pFnsNXCZbwzXc34F9/4UA
x1XJ0zMSpl3bhlg+u8nF9O8Ox2w0BPcBIMDOmhQNXi615y5f/kTIZN1jI/FJ
pM3tUh16G77sPrw01sMGickhoFWWvmNdOL6aABuerkmE3UaXh4T+Qev3sJef
kNixbo/7ujPE+WHv/xzaeosP+TrAYXxHqirK/XHZJ9HVVvaekG13ivzEC13u
6fQg5JeO+9VUXW/SjyhHABObuoF2MuPAe1akK87VRAt1dzMFsRUJ6gbNc39L
o/XDe7t8++iBWaNndNwedfOSDndZg1RlA1yJeAkarRRk72rQvCmJ2/Ak3YhW
J+ikbaJA/J3msIHF6zYUOKGD3rFnVXXJmbMWihd8VUBW1ZlkGlnFHLPQmtYc
6rMO376WVJfabEHafQ9/t87WrLFaWWszmPbMb8IJzbOwfcRbPpSf882eqaWj
tyTw0tvf2NYSv9nyJY2i0SdpCbnDxPgWbpTfjFDZoJdieq9b57ZOV6DcqQTB
Cu/Q0p3PCc9wUZff1QcuVSAOriAZ5qXk1vlJv5WbFSKX+a23ffz54QoIuzTa
PP381q1nh3+tdTm/EQLwT3fwRGI6EjXbiEyExnmnTYGmANv2D+dxsqwDlzcD
tilBNmsN3JYbUbHbJepDZe3woG1WxgkG6MqMx4moWRS4rqGuFUYYweh0G3GN
sS7ubtYs+QsbjZp9ezsoFLU0fi2lgUhnlSvbF8np7vgUXMnBJ2BMc3sHRBhZ
dQM61mP6UbO3n3fg9fO4vVuYwPzzTp+3ASrsAUwQXUSHRumwY9HSh2GDl81v
uWKEodn6at1tKOAAgpwl3AGjXhmn3w2lMzE3MFyXVjl3BT4rvgDC9C1revSY
lHWSntJ6zN054VPJBnANp66sKboewOkcavl++fBnUKWe4L9Ldx+Ct+cBN5dr
Zcs7VMksSLoT1+5EB+d1ftnKVRGS7ecLWYXDw3s55NoYKfvxRW5VyfrVdd7k
ovLhRs66HTpscRlfVj1wMtRLbVES7hD7ohO4+6X/WKVg875Iqx3E1yJ+QDvo
/eR7be+nHkApForLJf30BTfZvkCjJi2A3lAKOGixEpUgbBjElU/wgNxqdpns
Va9gQkFjlFRqG3pLCyLpDYW8dma73PCM1OTlqrUd0NjaQy0nGVdkTrosd1+2
7k1G/61zmvc3cuCjmNTAX+QsGFiwOxyxEO77yMa0dcbozuyN4gCi+QWnDSy5
sAwXUNhoxMWxd0IMGa72No0t8HXNOdfcGYYlQgAelyfGAWoym2pJEOKOyLJa
1nvQeNCHfTq3Y6i6PnqCIwmyYbKoAi24budDW6ft8c43N34xOVenFC98Qkdw
zvYRaQDOmxFfuyTA79KK4E8SRLKawC3pGxrHgLLd1qTxmLU0vdQL8pkUb5vA
wLGRJr2eNi9pZC6n8RhoaINyYzDPq9/KKfCXvnZFKlakfCW7HbIqI5/ZbKjQ
QeV6c0l5Qh+XMHMzX36NiOG36tR0bRmZKsV5pZXQdgjp8DbwGkCAP7oRwRit
fmTOo8+wbhHQAKfdXFe5NBdAHy+5DMiJxpGnOlY1gxaWpJRpszm0JLfLe3vF
xU30EX54O8+X+Nl2YXsrXdjwkURn6Q1V2fwGeb/xdbShr8cFwbZ4erRppIbj
FRgTAMOpIAq2IYc1FVCEacfHJwGeRlESLIt7UnGPd+FfeeOaOPJCvL5UOTSN
28JZTrSNQY4DimVFgk3nlFM/YZUwNpUNZHz/0UNc3PWY+FK1anbgW4Ifjp8P
fHmRYf4Ebzz5/GfKD6UTn2//qHdWNDAdBGKWX1jlphF26FrCCRAcoA8EoWI4
a2cr56W/J8Bjiofk9FQfc0e4EBF5kgaEK8kKDC1L6fViScMexBZos+NiK7C/
AOS+ikAtl8LdCetHcjoW1htQQPUd9uLwgmv4kHDAMAyYZyHBl4DrHESi7lx4
qbv16oMXWMcYvckmQknDUHcX8bDdLbE3zpTVcL1vixrf0wktCOwpse5mAXHa
ue+bEyFQghDWdoK5sH1x6VYYrA3XXkn2bv/95PZg0O0RuakeiL9VlClXSBTl
AFjIYsCO5YsOC7bNpDWC37t16TZB7rd2gfYzjjtsL82DK4Q7Yx6+5+ux9sxn
kycH6ONZgco+c+Nz01je/RM7PH8kvJING9oLL9sm3K74sg3fupuwO1FK3VGq
Uu4b3mSkPhfQ9w4EPy7C0jYTrrwazXBLbob7HB0IHNK5ZvgB7tEBzNeFqM6z
qzInvi/hzbxsuLEocjytdJYAJ4ch2kDf0V64jWmKCukKEEhZkF3bt0klN2ne
2mxWNtFU+ssAUIXUI4V2dS5d8y7aDfM9NYNzqvsNu+3iMgFehszScBBa8ugv
9QJJ22FJwtxwc+bS4s3miS2yedx+dD+o8JQ+VUG9nIckzCkpebDCkjSJxqsS
jHyX+XuyKOX8EfkMmhlf53UrWc3cqthnhe5+QWzri8fmG+7sVzYrrgYjwj09
dy9tLMwXBtvuoXW68N36+ZZnLWn13bK9PkFsOOgK43CC/e9yj4SQw/ssFunv
Wc+yWHgGBUdZBum2nM/vVCM/mU1+inMo3KRuMX5mtVN4QjWVU4GtNX/kmUFc
U+0VKUvLkTqlO5BVEXuWEpxcUqGg0OpVV52FWgbifWIuQshXu6ADPbcqDq76
kX6x6MHrbhqLfRhgFJ2WNRvSgVaslx8Joqd6nYYQonAdjYSwIGYvrDQb1sgI
wSe43NayDL10i+C+Yqxl4mvZ6JcAjBDgok5XV4iESKvFE8ndYILz98a6i0CY
IrOmm1avAX+kYztIdJpyHFc3QzXh/k+CMjbgNiG5SudzYTiCWNb7QmozVBAu
/r8PZDodxGiz0uJzWKRrXELYbahkIdNw/7MolSLEsAu1CH9hSOtE3jyrWQ4E
3CLASIuAqGP4+KpCCPpMEMClIEFPs+EyV3jhb/DSztN6c4xqK6IHcoUVq9tx
qxP9WluLlLapuG2trIgoKVQNl9MiT2Bs34Nicsl+FNJs0LyHBabWBViXf1Ax
InEt8FQ6eXhyPW6E18o4V+yhuzRA7zFbvydxAwo/ljyH1fCdT9AavnPQYQor
NTGIOwunJsj/ZU7lxgqLzXwfZbSanJHChEXZ1borx5z4tyt1cJtYdQr9uN02
/T7ddbKhxum706EqmOzmVaU1FJqab5U+m+KCvTIyi/PkoK5WH4SD117VZcI5
tUpojN2ATuabwHMihDTzMdxXSLptZFrmhUtJ4Uj2EizIcXL4bfd6XRXrZdal
Hb2yQjQrJFgBi7g7BB+V2+zbechP2ANvJ7D3ynUvDsPvrlt8fEOY8ePhXrDn
5lPzjv5+/VaW86mZvOV8A1wV9Uq918/pSa+skNhnIBLPeEdfxADnoojEDfeM
eCdxqNIzpMROgDFvtRbC3jJI63mx+8WzR4l5Mfnq2S7N8IoeIypga4Fn1xrT
1F2BW5VS3x/D2pd6aYICwMH+cfaQvEeqWOZ6N3852rW+Jp/I4NEzcsjOaaLW
lZ6Tuiv5C9q+4pizAk79De/7GphqJWYUZHOm22wuDjRG7mGaoQnKb8SNfOs4
PRvWql44d+PkyZOBtmyVp6CZBw+5ZwTdwpuJrcMWN+zgLmhwWZ9L5VtFsL25
1Xx0iCiuAat6arms3nTCULQ30AQuD3+nsfAa6/hptF1nx1uibmXb01Vaxm52
m8H2wXn9dQuSO5A4VWSopk1oBAKfnhq5WFhMwTWbwrF/nuxvDjmgUdZtAEib
GKMmhcerjaSY87fffNu5uJ3pzKf+xM381Qno/RpsH2sEbNP1Hjg4t9xAYm8t
1YxxZZAdnoEV4oJEMmCP6e/Xb99d038H9HdMf5WgR+Y7cIs9eiS8U25zzqcy
QPgQDR/77xofDTR9fvwXz+g1gQ6o5+Xr/Z2T1/uDp7SMZ9IFDvoJmTM74hR1
HMf0rSdxENaxMbKwf3HHDvyUtvPMQUIxIndaQl91+p8HoFdW+DTgas4x6Q8w
YnEvJrtf7IDPAaLM6XaOXl18hRoxGueFFLY+AhmBl4rFMa2Am3xY31p3W6L9
Fi5FCTZ/+zeAU2IO/vbfB8BzV2yEMS9Gko2A110xLUS8zje55U+NPjmltsuK
g9oewuRAPvGt3COuPnNrEy9h4INDNlCDwJZ8IcVkyMbieh4wJ++ZO3Keue+5
z76Vl0itGgWdhGA/fagQTf2cQSOhTa8qfWj9piBtsOnQ22n78vTV9H3+nRTK
oHyr6zXk8cUtiKjRxXHCgYbuebq4zcaBbgeaRDKx1zH6bfj+Fve7bCyoecAY
bfq+KqulJpkp0wHe9j6WG3jP/jv+bAJVT7Lwgm4+NnfEXfnAF7kjfQ3chfsO
zoe0xFpUPe6wYcLaitB+OSrd53tBTgjIUBP+jwLJp0E/Tia7OB6F2CQKpSaS
lMh+KjQOIdW5nEhiPfzTtMkbzYT12Sax1NErSt5cdO9SdEIldAYowKL+J9th
5rN+bckw6AbFV6nW119WdXiltvqNnEey2dueuhLY232GXRiudR7XRVXNuYLN
2Qtdg3sjWQClxDYw0UkpHJkICo16k0hZraCqr9l3nRa3ADcxtJD6kQQdA/Su
BNj+i6+PXNt/t192d/GpBtHUIDvIJ12wACbCSdnvpL2KmshKRQ5k1LHK0Jzu
mg3pNiLpN9JjxCdOSM6NNJANW57wIpOo24i8upH824eemMSNUCVgL5ohryTu
UTRz/l4Ws0TPFYdoeQiNvSk6wAnLignEATK+0K/Eu+teCjp4n3rQoJdOj1Qu
ccdrgwMN7JLR2Zrn3fxtKT3EmYPG2N5iPQ18zYcx0+U0X6w5i1quH7YH/1bX
skeLiRs4WCNMRL1G6UUb3LIuDumrJRhWRyLJFsfpmm1x1rx0yyRRj0ZV7CZT
R0zCrh/Oq4p71Io+yOoYgSxvbAIb6Voq7NQeYc/JN9++fU/n/TaoMKLRDidv
6VDaiv55G1Tz2Jtin+9wNjpujH0aQ1cUU7a0RxZ8TghFEDyNwnT3AVoitqX4
Cpmpq+tVy8M0ddS69onjZ8uVZICgoEFzQWrJGCLgjHF1CkFZbxgngfpWbXFp
wRAuyY4+97NiDTSKwkT2sSMeWN06a/M3FYq+0MYkrHxFcJo7g9EIdAQk+zWR
oTExtj1FX+F5lUmxpT5kNkAaptfbqwvgIWLCEeB57IpxRfuTJC7/nJa0L/So
lYNgsoh+IYfMXpkmQdEUvizX4SR0mahdBgQPsgu29/p54VuJAPmiJuFNou3C
AkYV10Uqx2XXgzJPiXHpRd5NhjCmeP/mGsi8R8FCtwGNNaSjTLiGJm4upZUn
SM1BKdS1363yobTkU63sLoUL6hj+fCI62QUUq/P1EiKJNST8/r1LRnfylvOI
rJ4XfDhZ16tqezPMrfqUU28SvTVelNYk0DftTVXbVKgdEh0NR6mgkaC+4049
xCU48jwJ2l4kplviIA/epQOQ5tDVGRDKIblu7ivX7y3WnXRtRIZuyk4nNjmX
iujrvURlfuq7aH5aLqqPqgmumKSDZqyLB1gWopAo+2fe1LK3RTVmfzqtHVi6
an8afimmmlwtoJmI9vpY9/vR4cUL34YhNO4CmR9XglvF19Fk725Dwe3gw9bC
GQF+99FjuRPswfNQd75gupaxocxGpXGadEBDc2fTB6Ss1um8hBd198kTtKqS
zg7QzyG4VdeLlHOxlLY3ZJCWnqHCRaiAmBf34JN2lNoPC23nOEO+DRYcZ+Rz
AwosXn0vOvVGqT07uNOVduZT0Ji5A18jHQp86Z71gQSP8FV6GDl9J60xsnlY
ooRRH33++PEWgJ8Eh3l/gJs/Myez1+s0nzPonwSgF+eBGzXPGvUt3Afg0t0h
AHoglTQ9wsNhu2yQvGpjt+yFi707ZBjeSR+2lBN/G6/fZiQ3IQS/evTZVwLB
sC3iXegLYOUkBw3usC5QhPxw90s0W+L1Nf6k+dxZU55lUq3KfTQ5NUAQxioH
Qc8o6RmgOTVhExGB4bcnO+cHr2zrOWZvCtyuCRHccNMl/CJflB4rsfsQ5TqQ
efIRyAR4dhdkPoJBAYh8vtG6nAkBeCjlfs/WpLKJ6dr61mfKXnQI68NqR4Ba
2LG9RTi8tn1LJwSLXREx7u4+IanMI+1+qXSJ29RoV9pgfc03DuMRVoFV+GW2
HpKPiAMaXT745VP6PJ+mHbhqCaKVBMHtukSP5WJN1tIe68GJ04Zx3drrN0dn
hweJOf96//hY/5PvxKq0/9vnx6cnJ4evDvDKyf5fCeBPJxdHp6/2j0duw2gi
W0sLLdZUaW1NFuTj+3WyOSH9ktFsGNWNxHqfSiCcI3rrJrMvtf5Fq+U5njx0
dz3aW6Q+Km5FLO/P3pXoOjWXJBMaVVzj2fzZg8u0aDI8xtYco6S0sisJ5nIN
iRblEZ7M1o1E/XnLav277N2jyeTkz1hm/Jbbci4zvZSm5DQ9/maZ5pIFkDd8
2xnfgDtOa3TaJAWrabLyqgJOEPCrqxS1SM+r9Sydp3ndo5Oqbn6blrcpgjBz
YfdcVMJuF9p1dqMleksr0klkMz9mOKw4HeS92d8Tj+AYbq6hl/mEpWNHQQeW
au6CV+S+lBJg5AkKUmg8mPFBQRixJq256NlL5j9Mud6JORYvBlzM9v5kp7Mz
uwq+8Iq7ftPr6umSNlZsigBSMH7850cDUZN1G9HPPXjCHacYR9km9Opng1AV
T/yFwomJbhT2rrjE6uvjMVHpmXRF0f6y9N35hE5/cnIQKO7OkoB3fvNyYZwC
reQxVnIwgY2RmAn9PZwAtw4Tg5ui0ejQ/h5eus2Dand+e13sj//8+WjXNsdP
9OPEvKb3tUmlXpGFOyk//NjLo1ewMsa/PuaJuO+9hfrnI4a7v3n47JRY0dkb
+ecoMW/w4/ExWTWdu3oSvs0lMcEFUrTd8AYVOy6PRovQc9zo1kfL+GIgBob4
+7j/ufxom2arTOZWwMlmL93Eie6wy6tbM3eho7ePn+9oizxxvbK9ZmcOJ7Cj
xe+ztxab6NZc0w6+xA7QVYP2G3UfIsTxjXEI47qtYBK0x0iki0aCPg/AtLgt
Q2K29RrYamfKCuIZefw7Bt4covf9liJx2uBXAzGsk6jcNQlt7agkNDGu5DEx
rtgP6QX+597324rPaLInmOwPf/+fNgz3xF2QktgM/UTzxROzNSc84czgxGzJ
3MX5htmniYlzKglqnFC4sy0jLuH0YGxhixOEtrD7cNCJwvW7bolBEjv1E3ig
CYw2cNB1gXXHc6D56QP5pposlWjbrlTw/rJKBb4VdM9V0F3Y6Jn1+NxbpK18
uTYUbTQsbGQMWzBvqzhZMPSqekFL0soFdbnfJr5xt6WC2NLkJKx1UfS8ej6S
a1vUEcHb6HOHTS8BN8TZI7D6N9Ds8M8ZSOEFAf1UvWpoR5mAWyIhdW0L+L5L
6+V6hWqxvGLWwI69c3bsJfgYGjs66h5bow934gm7uUOCYi2fmUhCnq5cxeBP
FIsbojAQk5CILBzNB8QxVvPYaCv8QFASKCTLNfpskq/ELxB9qpmT0WcbQtOJ
1DukMRZCInS7aNx+4XUgUZNYJIqzUgMZ4EuxtJa5gBBdyRlJxw+J0UhyxjK7
MzXkNyb84v+qyNwix7GoL1UG7dgmiB8UhzYesykV7Wp8BzAVkTu++dWfLi27
chw7+Oqnibmgyv5OibcpTzHRk/sIN8l2sCLO/Xa3oLOL/BPEXSDi7hR8idmW
QGe2ynNsdvdhLLw2BdZ23/WHpeEWycuTMdWrwyhxvq3EOWbcT0/0293dJzts
aH+/3a0clNiTcPOdpzVQF4k9Em7/G++1U+kGxAAA

-->

</rfc>
