Results of comparing communication between two VMs on two separate providers connected via VPN.
The following setups are considered:

- Host - direct access to the provider host, without VM and VPN
- Direct - communication between the VMs, where the hosts can create a direct p2p connection with each other
- Relayed - communication between the VMs, where the hosts cannot reach one another and the communication is facilitated by the relay server
Setup
The first packet exchanged between the two VMs triggers connection setup between the two provider nodes, which causes the higher latency of the first ping.
Ping latency between provider A and relay server is ~41ms.
Ping latency between provider B and relay server is ~44ms.
Both providers have a 1 Gbit/s Ethernet connection.
Yagna version is 0.12.3.
Measurements
| Communication form | Host | Direct | Relayed |
| --- | --- | --- | --- |
| first ping | 0.6 ms | 80 ms | 340 ms |
| following pings | 0.5 ms | 4 ms | 87 ms |
| FTP download | 465 Mbit/s | 378 Mbit/s | 9.7 Mbit/s |
| FTP download (interface limited to 100 Mbit/s) | 93 Mbit/s | 79 Mbit/s | N/A |
| FTP download (interface limited to 50 Mbit/s) | 46 Mbit/s | 39 Mbit/s | N/A |
Observations
Direct p2p connection
- packet latency is increased by ~3 ms
- throughput is ~80% of what the host can achieve
These numbers are practical maxima, measured while the providers were not doing much else; they would be lower if either provider were running other workloads.
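The ~80% figure can be sanity-checked against the table above. A minimal sketch, using only the measured values from the table (the labels are just for printing):

```python
# Throughput measured for host vs. direct p2p (Mbit/s), taken from the table above.
measurements = [
    ("unlimited interface", 465, 378),
    ("limited to 100 Mbit/s", 93, 79),
    ("limited to 50 Mbit/s", 46, 39),
]

for label, host_mbps, direct_mbps in measurements:
    ratio = direct_mbps / host_mbps
    print(f"{label}: direct p2p reaches {ratio:.0%} of host throughput")
```

The ratio is ~81% for the unlimited interface and ~85% for the rate-limited cases, consistent with the ~80% observation.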
Relayed connection
- packet latency is the sum of the latencies between each provider and the relay server, plus the ~3 ms overhead observed over a direct connection
- throughput is very low
These numbers are affected by the load on the providers as well as on the relay. A busy relay server could increase latency further and reduce throughput even more.
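The latency relationship can be checked with the measured numbers: the ping latencies to the relay from the setup section plus the ~3 ms overhead seen over a direct connection should roughly match the relayed ping time.

```python
# Latencies from the setup section (ms).
provider_a_to_relay = 41
provider_b_to_relay = 44
direct_overhead = 3  # extra latency observed over a direct p2p connection

expected_relayed = provider_a_to_relay + provider_b_to_relay + direct_overhead
print(f"expected relayed latency: ~{expected_relayed} ms")
```

The expected ~88 ms is close to the measured 87 ms for "following pings" over the relayed connection.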
Remarks
MTU
The default MTU of the VPN interface is 1220. Changing it to 1400 or 1000 made no measurable difference in throughput.
Changing the MTU to 65535 (the largest allowed value) causes the VPN/VM to stop functioning. (On the downloader side the VM was stuck and I had to stop the provider.)