Reading and writing are less coupled in the sftp client and server than
they would be over a traditional ReadWriter. This mirrors the full-duplex /
half-close nature of a TCP connection.
In essence, conn.Reader cannot be closed locally; it will only close once
the remote side reacts to conn.WriteCloser.Close(), or on a network error,
which is effectively the same thing.
Make this behaviour super clear by no longer smooshing the type into a
ReadWriteCloser.
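Roughly the shape this leaves the connection type in; a minimal sketch
(the package name and comments are illustrative, not lifted from the
library):

    package example

    import "io"

    // conn keeps the two halves separate instead of a single io.ReadWriteCloser,
    // making it obvious that only the write side can be closed locally.
    type conn struct {
            io.Reader      // read side: only closes when the remote side stops sending
            io.WriteCloser // write side: Close() tells the remote side we are done writing
    }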
sendRequest was not returning a value to the channel if there was
_not_ an error. How did this even work?!?
Also remove server_test.go because it was wrong: it was testing that
sending a properly formed but invalid packet (i.e. one requesting the wrong
extended packet) would cause an error during send, which is not the case.
sendRequest will only return an error if the client connection was
closed.
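As a hypothetical illustration of the bug pattern (not the library's actual
signature), both paths have to deliver something to the channel:

    package example

    // sendOutcome illustrates the class of bug: if the send on the success
    // path is missing, whoever receives from ch blocks forever even though
    // nothing went wrong.
    func sendOutcome(ch chan<- error, send func() error) {
            if err := send(); err != nil {
                    ch <- err
                    return
            }
            ch <- nil // the send on the non-error path is the part that was missing
    }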
Fixes #55
c.sendRequest always does the right thing, but a few fast paths were not
constructing a buffered channel, which would cause c.dispatchRequest to
block on error.
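A hypothetical illustration of why the buffer matters (not pkg/sftp's
actual code): with an unbuffered channel the send below would block until a
receiver shows up, while a one-element buffer lets the error be parked and
the dispatcher move on.

    package main

    import (
            "errors"
            "fmt"
    )

    type result struct {
            data []byte
            err  error
    }

    // dispatch must be able to deliver an error even when the requester is not
    // currently receiving; a buffered channel makes this send non-blocking.
    func dispatch(ch chan<- result) {
            ch <- result{err: errors.New("connection closed")}
    }

    func main() {
            ch := make(chan result, 1) // buffered, as the fast paths now construct it
            dispatch(ch)               // would deadlock here if ch were unbuffered
            fmt.Println((<-ch).err)
    }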
Previously sftp.Close() would only shut down the write side of the
connection and return. The sftp server would then shut down the read side
of the connection, at which point the recv goroutine would receive a
failure and return. When using a local sftp server in the integration
tests we first call sftp.Close() followed by cmd.Wait(). cmd.Wait()
calls close on stdout while the sftp recv goroutine is still reading
from it. The fix is to block in sftp.Close() until the sftp server has
closed the receive end of the connection and the recv goroutine has
subsequently exited.
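A minimal sketch of the resulting shutdown ordering (field and method names
are illustrative, not the library's internals):

    package example

    import "io"

    type client struct {
            w    io.WriteCloser // our write half of the connection
            done chan struct{}  // closed by the recv goroutine when it returns
    }

    // recvLoop runs as a goroutine, reading until the server closes its end.
    func (c *client) recvLoop(r io.Reader) {
            defer close(c.done)
            buf := make([]byte, 32*1024)
            for {
                    if _, err := r.Read(buf); err != nil {
                            return // the server shut down the read side, or the network failed
                    }
                    // ... dispatch the packet to the waiting request ...
            }
    }

    // Close shuts down the write side, then blocks until the server has closed
    // the read side and recvLoop has exited, so a later cmd.Wait() cannot close
    // stdout underneath a goroutine that is still reading from it.
    func (c *client) Close() error {
            err := c.w.Close()
            <-c.done
            return err
    }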
Previously recvPacket would be invoked in several goroutines. This meant
that when multiple concurrent requests were in flight there were N
goroutines each waiting on recvPacket. For optimal throughput the goal
is to send a new request as quickly as possible once a response is
received. The previous mechanism worked counter to this because the
goroutine sending new requests would be competing against N recvPacket
goroutines that may become runnable as data streams in. Having a single
goroutine responsible for recvPacket means that the recv and send
goroutines ping-pong back and forth, optimizing throughput.
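The resulting shape, as a hedged sketch (the types, fields and packet
decoding below are illustrative, not pkg/sftp's actual internals):

    package example

    import (
            "encoding/binary"
            "io"
            "sync"
    )

    type response struct {
            typ  byte
            data []byte
            err  error
    }

    type client struct {
            r        io.Reader
            mu       sync.Mutex
            inflight map[uint32]chan<- response // request id -> waiting sender
    }

    // readPacket reads one response frame: a uint32 length, a type byte and a
    // uint32 request id, followed by the body. (Init/version packets are
    // ignored in this sketch.)
    func (c *client) readPacket() (id uint32, typ byte, data []byte, err error) {
            hdr := make([]byte, 9)
            if _, err = io.ReadFull(c.r, hdr); err != nil {
                    return
            }
            length := binary.BigEndian.Uint32(hdr[0:4]) // counts the type, id and body bytes
            typ = hdr[4]
            id = binary.BigEndian.Uint32(hdr[5:9])
            data = make([]byte, length-5)
            _, err = io.ReadFull(c.r, data)
            return
    }

    // recvLoop is the single goroutine that receives packets. Each response is
    // routed to the goroutine waiting on its id, which can then fire its next
    // request immediately: recv and send ping-pong instead of the sender
    // competing with N receiving goroutines.
    func (c *client) recvLoop() {
            for {
                    id, typ, data, err := c.readPacket()
                    if err != nil {
                            c.mu.Lock()
                            for _, ch := range c.inflight { // unblock every outstanding request
                                    ch <- response{err: err}
                            }
                            c.mu.Unlock()
                            return
                    }
                    c.mu.Lock()
                    ch := c.inflight[id]
                    delete(c.inflight, id)
                    c.mu.Unlock()
                    ch <- response{typ: typ, data: data}
            }
    }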
This change shows a ~10-25% increase in throughput in the *Delay*
benchmark tests.
$ go test -bench=. -integration
PASS
BenchmarkRead1k 2 840068631 ns/op 12.48 MB/s
BenchmarkRead16k 20 72968548 ns/op 143.70 MB/s
BenchmarkRead32k 30 56871347 ns/op 184.38 MB/s
BenchmarkRead128k 100 34150953 ns/op 307.05 MB/s
BenchmarkRead512k 100 15730685 ns/op 666.59 MB/s
BenchmarkRead1MiB 200 10462421 ns/op 1002.24 MB/s
BenchmarkRead4MiB 200 7325236 ns/op 1431.47 MB/s
BenchmarkRead4MiBDelay10Msec 10 186893765 ns/op 56.11 MB/s
BenchmarkRead4MiBDelay50Msec 2 907127114 ns/op 11.56 MB/s
BenchmarkRead4MiBDelay150Msec 1 2708025060 ns/op 3.87 MB/s
BenchmarkWrite1k 1 1623940932 ns/op 6.46 MB/s
BenchmarkWrite16k 10 174293843 ns/op 60.16 MB/s
BenchmarkWrite32k 10 120377272 ns/op 87.11 MB/s
BenchmarkWrite128k 20 54592205 ns/op 192.08 MB/s
BenchmarkWrite512k 50 66449591 ns/op 157.80 MB/s
BenchmarkWrite1MiB 50 70965660 ns/op 147.76 MB/s
BenchmarkWrite4MiB 50 69234861 ns/op 151.45 MB/s
BenchmarkWrite4MiBDelay10Msec 5 276624260 ns/op 37.91 MB/s
BenchmarkWrite4MiBDelay50Msec 1 1318396552 ns/op 7.95 MB/s
BenchmarkWrite4MiBDelay150Msec 1 3918416658 ns/op 2.68 MB/s
BenchmarkCopyDown10MiBDelay10Msec 10 152240808 ns/op 68.88 MB/s
BenchmarkCopyDown10MiBDelay50Msec 2 715003188 ns/op 14.67 MB/s
BenchmarkCopyDown10MiBDelay150Msec 1 2116878801 ns/op 4.95 MB/s
BenchmarkCopyUp10MiBDelay10Msec 10 192748258 ns/op 54.40 MB/s
BenchmarkCopyUp10MiBDelay50Msec 2 691486538 ns/op 15.16 MB/s
BenchmarkCopyUp10MiBDelay150Msec 1 1997162991 ns/op 5.25 MB/s
BenchmarkMarshalInit 2000000 644 ns/op
BenchmarkMarshalOpen 3000000 562 ns/op
BenchmarkMarshalWriteWorstCase 20000 75166 ns/op
BenchmarkMarshalWrite1k 500000 3862 ns/op
ok github.com/pkg/sftp 71.174s
This is an attempt to permit overlapping requests.
For each request sent, we spin off a goroutine to send the request for us
and then handle _a_ response. That goroutine will try to dispatch the
response it receives to the goroutine that sent the matching request. As
every request is initiated from the client, there will always be a matching
number of replies from the server, so eventually every worker goroutine
will process one response and exit.
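A hedged sketch of that dispatch scheme (the types and the send/recv hooks
are illustrative stand-ins, not the library's actual code):

    package example

    import "sync"

    type request struct {
            id uint32
            // ... packet payload ...
    }

    type response struct {
            data []byte
            err  error
    }

    type client struct {
            mu       sync.Mutex
            inflight map[uint32]chan<- response // who is waiting on each request id
            send     func(request) error        // writes one request to the server
            recv     func() (uint32, response)  // reads one response, any request's
    }

    // do sends a request and waits for its reply. The worker goroutine it
    // spawns handles _a_ response, not necessarily its own, and hands it to
    // whichever goroutine issued the matching request. Every request
    // originates here, so the counts match and each worker eventually
    // processes one response and exits.
    func (c *client) do(req request) response {
            ch := make(chan response, 1)
            c.mu.Lock()
            c.inflight[req.id] = ch
            c.mu.Unlock()

            go func() {
                    if err := c.send(req); err != nil {
                            c.deliver(req.id, response{err: err})
                            return
                    }
                    id, resp := c.recv() // may belong to an overlapping request
                    c.deliver(id, resp)
            }()

            return <-ch
    }

    func (c *client) deliver(id uint32, resp response) {
            c.mu.Lock()
            ch := c.inflight[id]
            delete(c.inflight, id)
            c.mu.Unlock()
            ch <- resp
    }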