Worth Reading: Greasing The Skids To Move AI From InfiniBand To Ethernet

Just about everybody, including Nvidia, thinks that in the long run, most people running most AI training and inference workloads at any appreciable scale – hundreds to millions of datacenter devices – will want a cheaper alternative for networking AI accelerators than InfiniBand.