VPN latency and throughput comparison #316

evik42 commented Sep 14, 2023

Description

Results of comparing communication between two VMs on two separate providers connected via VPN.

The following setups are considered:

  • Host - direct access to the provider host, without a VM or VPN
  • Direct - communication between the VMs, where the hosts can establish a direct p2p connection to each other
  • Relayed - communication between the VMs, where the hosts cannot reach one another and the traffic is forwarded by the relay server

Setup

The first packet between the two VMs triggers the connection between the two provider nodes, which is the cause of the higher latency of the first ping.
Ping latency between provider A and the relay server is ~41ms.
Ping latency between provider B and the relay server is ~44ms.
Both providers have a 1Gbit/s Ethernet connection.

Yagna version is 0.12.3
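
The exact ping commands are not included here; the sketch below shows one way the first-ping vs. following-pings latency could be reproduced from inside a VM with plain ICMP pings. The peer address is a hypothetical placeholder for whatever address the VPN assigns to the other VM.

```python
import re
import subprocess

# Hypothetical VPN address of the peer VM; replace with the address
# assigned on your Yagna VPN network.
PEER = "192.168.0.2"
COUNT = 10

# Run a burst of pings and pull the per-packet RTTs out of the output.
out = subprocess.run(
    ["ping", "-c", str(COUNT), PEER],
    capture_output=True, text=True, check=True,
).stdout
rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]

# The first RTT includes the cost of establishing the connection between
# the provider nodes; the remaining RTTs reflect the steady state.
print(f"first ping:      {rtts[0]:.1f} ms")
print(f"following pings: {sum(rtts[1:]) / len(rtts[1:]):.1f} ms")
```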

Measurements

| Measurement | Host | Direct | Relayed |
| --- | --- | --- | --- |
| first ping | 0.6ms | 80ms | 340ms |
| following pings | 0.5ms | 4ms | 87ms |
| FTP download | 465Mbit/s | 378Mbit/s | 9.7Mbit/s |
| FTP download (interface limited to 100Mbit/s) | 93Mbit/s | 79Mbit/s | N/A |
| FTP download (interface limited to 50Mbit/s) | 46Mbit/s | 39Mbit/s | N/A |
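
The exact download commands are not shown either; a minimal sketch of how the FTP throughput figures could be reproduced with Python's ftplib is below. The server address, credentials and test file name are hypothetical placeholders for whatever FTP server is running on the other VM.

```python
import time
from ftplib import FTP

# Hypothetical endpoint and test file on the other VM; adjust as needed.
HOST = "192.168.0.2"
USER, PASSWORD = "test", "test"
FILENAME = "testfile.bin"

received = 0

def count(chunk: bytes) -> None:
    """Accumulate the number of bytes received."""
    global received
    received += len(chunk)

ftp = FTP(HOST)
ftp.login(USER, PASSWORD)

start = time.monotonic()
ftp.retrbinary(f"RETR {FILENAME}", count, blocksize=64 * 1024)
elapsed = time.monotonic() - start
ftp.quit()

# Throughput in Mbit/s, matching the units used in the table above.
print(f"{received * 8 / elapsed / 1e6:.1f} Mbit/s")
```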

Observations

Direct p2p connection

  • packet latency is increased by ~3ms
  • throughput is ~80% of what the host can achieve (378/465 ≈ 81%; with the interface limited, 79/93 ≈ 85% and 39/46 ≈ 85%)

These numbers are practical maximums, measured while the providers were not doing much else; they would be lower if either provider were running other workloads.

Relayed connection

  • packet latency is the sum of the latencies between each provider and the relay server, plus the ~3ms overhead observed over a direct connection (41ms + 44ms + ~3ms ≈ 88ms, which matches the measured 87ms)
  • throughput is very low

These numbers are affected by the load on the providers and on the relay. A busy relay server would further increase latency and reduce throughput.

Remarks

MTU

The default MTU of the VPN interface is 1220. Changing it to 1400 and to 1000 did not make any measurable difference in throughput.
Changing the MTU to 65535 (the largest allowed value) causes the VPN/VM to stop functioning. (On the downloader side I had to stop the provider because the VM was stuck.)
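
The MTU changes themselves are not shown above; one way to apply them, assuming the VPN interface inside the VM is the hypothetical vpn0 and that iproute2 is available, is sketched below. The throughput measurement would be re-run after each change.

```python
import subprocess

# Hypothetical name of the VPN interface inside the VM.
IFACE = "vpn0"

def set_mtu(mtu: int) -> None:
    """Equivalent to: ip link set dev <iface> mtu <mtu>."""
    subprocess.run(
        ["ip", "link", "set", "dev", IFACE, "mtu", str(mtu)],
        check=True,
    )

# One of the MTU values tried in this comparison (1400, 1000, 65535).
set_mtu(1400)
```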
