Low speed between sites over IPsec
The sites are connected by two VPN links from different providers, 1 Gbit/s each.
I created two VTI interfaces on each side and combined them into a VTI trunk.
The IPsec tunnels are built with IKEv2/ESP, AES-128/SHA1.
UTM functions are off.
In tests we get only 70-80 Mbit/s transfer speed between the sites.
What may be the cause of this low speed?
On the 1st USG we have 33 IPsec tunnels, but they are not used at night, and the speed is the same.
All switches are 1 Gbit/s, and the ZyWALLs show a 1 Gbit/s provider connection.
All Replies
-
Hi @alexey
The performance result depends on several conditions:
test PC hardware, protocol, packet size, number of sessions, etc.
Have you tried transmitting the files over multiple sessions (e.g. downloading the files from multiple PCs)?
The total performance should be much better than that.
-
Every night we copy backups from one site to the other with the Windows robocopy utility, from one server to another. It is a small number of big files. The result is less than 500 MByte/min, which is less than 100 Mbit/s. With iperf over UDP I get the same result, 70-80 Mbit/s. Over FTP the result is lower, 40-60 Mbit/s. Where can I see the load of the ZyWALL and its interfaces at that time, the number of sessions, or anything else that can help us solve this slow-speed problem?
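Since single-stream throughput is what this whole thread hinges on, here is a minimal sketch of how to measure one TCP stream without iperf, assuming Python is available on both ends. Both ends run on loopback here, so the printed number only demonstrates the method, not the link speed; on a real test you would run the sink on the far site.

```python
import socket
import threading
import time

def _sink(server, nbytes):
    """Accept one connection and drain nbytes from it."""
    conn, _ = server.accept()
    remaining = nbytes
    while remaining > 0:
        chunk = conn.recv(min(65536, remaining))
        if not chunk:
            break
        remaining -= len(chunk)
    conn.close()

def measure_stream(total=16 * 1024 * 1024):
    """Send `total` bytes over a single TCP connection, return Mbit/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # loopback only; use the remote host IP for a real test
    server.listen(1)
    receiver = threading.Thread(target=_sink, args=(server, total))
    receiver.start()

    client = socket.create_connection(server.getsockname())
    payload = b"\0" * 65536
    sent = 0
    start = time.perf_counter()
    while sent < total:
        n = min(len(payload), total - sent)
        client.sendall(payload[:n])
        sent += n
    client.close()
    receiver.join()
    server.close()

    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6  # bits per second -> Mbit/s

print(f"single stream: {measure_stream():.0f} Mbit/s on loopback")
```

Running several of these in parallel to different ports would mimic the multi-session case the Zyxel reply below describes, since each connection is a separate flow.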
-
I configured daily e-mail logs on both ZyWALL 1100s.
When the backup starts, the CPU load of the central ZyWALL is 60-70% with around 15K sessions.
On the 2nd ZyWALL the CPU load is 50-60% with fewer than 5K sessions.
The VPN interfaces carry 80-90 Mbit/s.
The bandwidth of the VPN interfaces, VTI interfaces, and local network is set to 1048576 kbit.
The MTU in the VTI trunk and on the local interface is set to 1490.
By the specifications, the ZyWALL 1100 can transfer 800 Mbit/s over IPsec.
Where can I look for the reason for the low IPsec speed?
-
VTI MTU 1490 seems quite high to me. For VTI/IPsec (AES128, SHA256, DH14) we normally use MTU 1400.
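The MTU 1400 recommendation can be sanity-checked with simple ESP overhead arithmetic. A sketch of the worst-case tunnel-mode overhead for AES-128-CBC with HMAC-SHA1-96 (the byte counts are standard ESP framing, not figures from this thread):

```python
# Worst-case ESP tunnel-mode overhead per packet, AES-128-CBC + HMAC-SHA1-96
OUTER_IP = 20      # outer IPv4 header added in tunnel mode
ESP_HDR = 8        # SPI + sequence number
IV = 16            # AES-CBC initialisation vector
PAD_MAX = 15       # CBC block alignment padding, worst case
PAD_TRAILER = 2    # pad-length + next-header bytes
ICV = 12           # HMAC-SHA1 digest truncated to 96 bits

overhead = OUTER_IP + ESP_HDR + IV + PAD_MAX + PAD_TRAILER + ICV
inner_mtu = 1500 - overhead  # largest inner packet that never fragments
mss = 1400 - 20 - 20         # TCP MSS for the rounded-down MTU of 1400
                             # (minus inner IP and TCP headers)

print(overhead)   # 73
print(inner_mtu)  # 1427
print(mss)        # 1360
```

So with a 1500-byte WAN MTU the inner packets must stay at or below 1427 bytes; 1400 is that figure rounded down with headroom, and MSS 1360 follows directly from it. An MTU of 1490 forces almost every full-size packet to be fragmented, which costs CPU and throughput.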
-
Hi @alexey
The performance in the datasheet is measured with RFC 2544 (1518-byte UDP frames with multiple sessions).
In your scenario, the VPN traffic appears to be running over a single session (the same source and destination IP counts as one session).
If multiple clients download traffic from the server side, the total VPN performance should be better.
-
@Line2 Thanks for the answer. After I set MTU 1400 and MSS 1360, the weekly backup job gives around 250 Mbit/s with multiple parallel copies.
@Zyxel_Stanley During working hours we have around 250 users. They work with mail and network shares between the sites, and the load on the VTI interfaces is only 120-150 Mbit/s with 80% CPU load and around 25 thousand sessions. How can we tell whether this speed is the result of low performance or of wrong settings?
-
Hi alexey, any progress/update on this? We have noticed a significant drop in transfer rates over certain VTI tunnels during large data transfers on several connected ZyWALL USG appliances.
Outside of VTI, transfers to the same hosts via a direct NAT (no IPsec) are significantly faster (about 4x).
We use MSS 1360 and MTU 1400 on all VTI ends, with these same settings:
VPN gateway, Phase 1:
encryption: 3des, authentication: sha, SA lifetime: 86400, key group: group2
VPN connection, Phase 2:
transform set: 1, encryption: aes128, authentication: sha512, SA lifetime: 28800, PFS: none, adjust MSS: yes, MSS value: 1360
Just curious.
WarwickT
Hong Kong
-
Hi @warwickt .
We have the same results.
Without IPsec, the speed between the sites is around 300-400 Mbit/s.
With IPsec it is 4-5 times less.
But if we run the copy in 4-5 threads from different sources over IPsec, we get the same 300+ Mbit/s.
It is a very strange way for the IPsec tunnel to use the available bandwidth.
We use MSS 1360 and MTU 1400 too.
P1 & P2: aes128/sha1/dh2.
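Since splitting the copy into several threads recovered the throughput, here is a minimal Python sketch of a multi-stream copy job (on Windows, robocopy's /MT switch does the same thing natively; the paths and file names below are illustrative, not from the thread):

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parallel_copy(files, dst_dir, workers=4):
    """Copy each file into dst_dir, `workers` files at a time.

    Each worker thread drives its own file copy, so traffic to a remote
    share arrives over several concurrent flows instead of one.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        jobs = [pool.submit(shutil.copy, str(f), str(dst / Path(f).name))
                for f in files]
        return [Path(j.result()) for j in jobs]

# Demo with throwaway local files; in the real setup `files` would be
# paths on the source share and `dst_dir` a share on the remote site.
src = Path(tempfile.mkdtemp())
out = Path(tempfile.mkdtemp()) / "backup"
for i in range(8):
    (src / f"dump{i}.bin").write_bytes(b"x" * 1024)
copied = parallel_copy(sorted(src.glob("*.bin")), out)
print(len(copied))  # 8
```

Note this only helps when the workload is many files; a single huge file still travels over one session unless the tool can split it into ranges.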
-
Hi alexey, thanks for the follow-up. Yes indeed, it's quite strange. This is common to other well-known brands of routers as well. Most of the credible tech forum threads on this involve Cisco and lesser home/SME routers, so it seems this is not unique to our mates at Zyxel.
One might tolerate a small degradation with IPsec, however in some cases we've seen an 80% drop. Yet over the same period, wan -> upstream -> host@wan:nat-port shows the usual, expected transfer speed. 😕
This seems to affect transfers of medium to large objects, where "large" is quite subjective. Trivial transactions over IPsec are subjectively the same; measured, they are within acceptable service times.
When one asks about it, the usual "ah, you just have a packet fragmentation issue over the IPsec interfaces, just clamp the MSS" mantra comes up... Well, as we both and others contend, this is not always the case.
Further, despite the large chunks being sent over IPsec, we've noticed the CPU only 10% busy even with softirq at 70%, yet the upstream data rates are lousy. The destination router exhibits similar characteristics.
I'm hoping the lads at Zyxel Tech can assist with this.
If you have any clues or observations from your end, please share them.
Frankly, other than setting up control tests and using the limited debug CLI, there's not much to be able to see.
Cheers mate
WarwickT
Hong Kong