Bug #611
[rsyncd] Reenable, properly limit bandwidth
Description
Mirrors use too much bandwidth when they synchronize packages with repo, causing huge latencies for ssh and other services. Per-connection bandwidth limiting is not useful when a single mirror uses several connections at once.
Find a proper solution (use iptables? allow rsync access from only one primary mirror?), implement it, and start rsyncd.socket at repo.
History
Updated by alfplayer over 9 years ago
mtjm | re-enabled rsyncd, now at --bwlimit=128 until someone finds a better solution
128 KB/s seems too low. I would limit web access if it helps increase that number (HTTP mirrors can be used anyway), because the web interface is not essential in my opinion. Maybe this works: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate
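For reference, a minimal sketch of what the linked limit_rate directive would look like, assuming the web interface is served from a server block roughly like this (server name is from the thread; paths and numbers are only illustrative):

```nginx
# /etc/nginx/nginx.conf (fragment) -- hypothetical server block
server {
    listen 80;
    server_name repo.parabola.nu;

    location / {
        limit_rate_after 1m;   # let the first 1 MB go at full speed
        limit_rate 256k;       # then cap each response at 256 KB/s
    }
}
```

Note that limit_rate, like rsync's --bwlimit, applies per connection/response, so it caps individual downloads rather than total web bandwidth.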
Updated by mtjm over 9 years ago
> 128 KB/s seems too low.

768 KiB/s is too high: several connections were used at once. We cannot limit total bandwidth usage with per-connection rsyncd settings.

> I would limit the web access if it helps increase that number (HTTP mirrors can be used anyway) because the web interface is not essential in my opinion.

It's already less than what rsync uses.
We need some automated per-port statistics; all that I wrote here is based on several runs of iftop.
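Since rsyncd's --bwlimit applies per connection, a total cap would have to sit at the network layer. A sketch using Linux tc with an HTB queue on the rsync port (the interface name and rates are assumptions for illustration, not what was deployed; requires root):

```
# Shape traffic from the rsync daemon (source port 873) to ~3 Mbit/s
# (~384 KiB/s) total; everything else stays in the default class.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 3mbit ceil 3mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip sport 873 0xffff flowid 1:10
```

Unlike per-connection limits, this caps the aggregate across all simultaneous rsync clients.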
Updated by alfplayer over 9 years ago
I was thinking of 384 KiB/s. If using multiple rsync connections is not standard, limiting the number of simultaneous connections would solve this issue.
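Capping simultaneous clients can be done in rsyncd itself with the max connections parameter; a minimal rsyncd.conf sketch (module name, path, and the limit are hypothetical):

```
# /etc/rsyncd.conf -- limit concurrent clients daemon-wide
max connections = 5
lock file = /var/run/rsyncd.lock   # needed so rsyncd can count connections

[repo]
    path = /srv/repo
    read only = yes
```

Combined with --bwlimit, this bounds total usage at roughly (max connections) x (per-connection limit).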
Updated by alfplayer over 9 years ago
Additional comments on this thread: https://lists.parabola.nu/pipermail/dev/2014-December/002542.html
Updated by alfplayer about 8 years ago
- Related to Bug #889: [rsyncd] prevent a single mirror from taking all available connections added
Updated by alfplayer about 8 years ago
- Related to Bug #531: Suggestion to mirror hosts: sync from hosts other than repo.parabola.nu to avoid overload added
Updated by lukeshu about 7 years ago
- % Done changed from 0 to 100
- Status changed from open to fixed
The issue was never bandwidth; it was disk wait.
Anyway, it's now on winston and enabled.