Is one 10 gig port the same as ten 1 gig ports?
Solution 1:
Simply put, no, they are different:
- with a 10 GbE interface, you get 10 Gb/s of bandwidth even for a single connection
- with 10x 1 GbE interfaces (bonded using the 802.3ad/LACP protocol), a single connection/session is limited to 1 Gb/s. On the other hand, you can serve 10 concurrent sessions, each with a bandwidth of 1 Gb/s
In other words, bonding generally does not increase the speed of a single connection. The only exception is Linux bonding mode 0 (balance-rr), which sends packets in a round-robin fashion; it can push a single connection past one link's speed, but it has significant drawbacks (chiefly packet reordering) and limited scaling. For a practical example, have a look here.
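As a minimal sketch of what such a bond looks like (assuming a Linux host with iproute2; the NIC names eth0/eth1 and the address are placeholders):

```
# Create a round-robin (mode 0 / balance-rr) bond; NIC names are assumptions.
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0   # example address only
cat /proc/net/bonding/bond0           # verify mode and slave state
```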
Solution 2:
10 Gb/s via 10x 1 Gb/s ports
I am answering only for completeness' sake and to save you some headaches. I have over 20k servers doing something similar to this, and I can tell you it is a bad idea. This method adds a lot of complexity that will cause operational problems later on. We did this with 4x 1 Gb NICs per server. At the time it actually made more sense than going 10 gig: 10 gig everywhere would have cost many times more for very little gain. Recent iterations of our datacenters no longer do this.
An LACP bond (mode 4) with a single LAG partner will give you 10 Gb/s in aggregate, nearly the same as a single 10 Gb/s port. This can be done across more than one switch, but the switches have to support MLAG; otherwise you can only connect to a single switch. If they don't support MLAG, you only get one switch's worth of bandwidth and the other interfaces go into standby (so 5 Gb/s if you have 2 switches).
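For reference, a hedged sketch of the Linux side of such a mode 4 bond with iproute2 (interface names are assumptions, and the switch ports must be configured as a matching LACP LAG):

```
# LACP (802.3ad / mode 4) bond; requires a matching LAG on the switch side.
# layer3+4 hashing spreads flows across links by IP+port tuple.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast \
    xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
cat /proc/net/bonding/bond0   # check LACP negotiation and partner state
```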
A single connection will only utilize one link, but you can split traffic up at layer 7 where required, or you could look into MPTCP; support for that is new in recent kernels, though, and I am not sure it is ready for prime time. You can split up data syncs using LFTP+SFTP and LFTP's mirror subsystem; it can even split a single file into multiple streams. There is also BitTorrent.
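As an illustration (the host, user, and paths here are hypothetical), an LFTP mirror that splits a transfer across parallel streams might look like:

```
# Mirror a remote tree over SFTP with 4 files in parallel,
# each file itself split into 4 segments (pget).
lftp -e 'mirror --parallel=4 --use-pget-n=4 /remote/dir /local/dir; quit' \
     sftp://user@example.com
```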
You will not be able to do DHCP on these ports from a client perspective to PXE boot an OS installer unless you force up eth0 on the server side, which technically breaks the LACP monitoring. It can be done, but it should not be, and forcing an interface up will make troubleshooting problems more difficult.
In your bonding config you will have to generate a unique MAC address that is different from all of your physical interfaces, or you will hit race conditions due to the way PXE/DHCP work (assuming there is DHCP/PXE in your setup). There are many examples online of how to generate a unique bond0 MAC on the fly.
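One such approach, sketched here in bash (the 02: prefix sets the locally administered bit and keeps the address unicast, so it cannot collide with vendor-assigned NIC MACs):

```
# Generate a random locally administered, unicast MAC and assign it to bond0.
mac=$(printf '02:%02x:%02x:%02x:%02x:%02x' \
      $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
      $((RANDOM % 256)) $((RANDOM % 256)))
ip link set dev bond0 address "$mac"
```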
This also requires configuration on the switch side that aligns with each server's bond configuration. You will want the LLDP daemon installed on the servers, and LLDP enabled on your switches, to make troubleshooting less painful.
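For example (Debian-style package commands are an assumption; adjust for your distro):

```
apt-get install lldpd          # install the LLDP daemon
systemctl enable --now lldpd   # start it now and at boot
lldpctl                        # show the switch name/port behind each NIC
```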
If you do this, your cabling and labeling need to be flawless and your switch automation needs to be solid. A one-cable offset that mixes up two servers will cause very fun problems.
Kudos to Jay at IBM for making the bonding code as good as he did and for helping us figure out how to get DHCP working in this configuration.