I'm caching a large amount of data in memory on a local Redis server. Hundreds of other servers in the same data center will need to connect to this server, as they all need to access the same information hosted by Redis.
The amount of data you store or cache is rather irrelevant. What matters is the speed at which you need to move data to and from storage.
To minimize latency and maximize throughput, I plan on connecting all of the servers to the Redis server over 40 Gigabit Ethernet (40GbE).
A large number of high-bandwidth network ports requires a solid infrastructure design. You not only need huge aggregate bandwidth inside the Redis server - network, storage and processing - but also the means to distribute that bandwidth to all the clients.
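A quick back-of-the-envelope calculation helps here: even a 40GbE uplink divides thinly across hundreds of clients. The client count of 500 and the uniform-demand assumption below are illustrative, not figures from your question:

```python
# Fair-share bandwidth per client if one 40GbE uplink on the Redis
# server is split evenly. 500 clients is an assumed stand-in for
# "hundreds of servers".

LINK_GBPS = 40        # 40GbE uplink on the Redis server
NUM_CLIENTS = 500     # hypothetical client count

per_client_mbps = LINK_GBPS * 1000 / NUM_CLIENTS
print(f"Fair share per client: {per_client_mbps:.0f} Mbit/s")  # 80 Mbit/s
```

In practice clients won't all pull at once, so each can burst well above its fair share, but sustained aggregate demand above 40 Gbit/s will saturate the server's uplink no matter how the switch fabric is built.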
Depending on the exact scale, you'll need either a large chassis switch (up to 800 ports or so) or a hierarchical switch tree. This paper from Cisco should provide a good starting point. A collapsed-core design is likely sufficient at your size.
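Besides the switching fabric, the Redis instance and its host OS must be prepared to accept hundreds of concurrent connections. A sketch of the relevant knobs; the values shown are common defaults or illustrative, not tuned recommendations:

```conf
# redis.conf
maxclients 10000       # Redis's cap on simultaneous client connections
tcp-backlog 511        # listen backlog for bursts of incoming connects
tcp-keepalive 300      # seconds between keepalives; detects dead peers

# /etc/sysctl.conf - the kernel caps the effective listen backlog,
# so tcp-backlog only takes effect up to this value
net.core.somaxconn = 1024
```

With hundreds of clients, also consider connection pooling on the client side so each application server keeps a small, reused set of connections rather than opening one per request.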