Worth Reading: In-network acceleration for AI/ML workloads

Let's take an example of gradient aggregation across model copies. In a GPU cluster with N_d model copies and N_m GPUs in each model copy, N_m gradient aggregation threads will run in parallel at the end of each training iteration, one per GPU position, each summing that position's gradient shard across all N_d copies.
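The aggregation pattern is easier to see in a short sketch. Below is a minimal Python illustration (mine, not code from the linked article): N_D, N_M, SHARD_SIZE, and aggregate_shard are hypothetical names, and plain threads plus NumPy sums stand in for real GPUs and collective operations.

```python
# Minimal sketch of the aggregation pattern described above (hypothetical
# values and names; threads/NumPy stand in for GPUs and collectives).
# grads[d][m] is the gradient shard held by GPU m of model copy d.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

N_D = 4          # number of model copies (data-parallel replicas)
N_M = 8          # number of GPUs (gradient shards) per model copy
SHARD_SIZE = 16  # parameters per shard

rng = np.random.default_rng(0)
grads = [[rng.normal(size=SHARD_SIZE) for _ in range(N_M)]
         for _ in range(N_D)]

def aggregate_shard(m: int) -> np.ndarray:
    """Sum shard m's gradient across all N_D model copies."""
    return np.sum([grads[d][m] for d in range(N_D)], axis=0)

# N_M aggregation tasks run in parallel, one per shard position,
# mirroring the N_m parallel aggregation threads in the article.
with ThreadPoolExecutor(max_workers=N_M) as pool:
    reduced = list(pool.map(aggregate_shard, range(N_M)))

assert np.allclose(reduced[0], sum(grads[d][0] for d in range(N_D)))
```

In a real cluster each aggregate_shard call would be a collective (e.g. an all-reduce) across the model copies, which is exactly the traffic an in-network accelerator could offload from the GPUs.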