Fixes #234
Similar to #181 (which covered the read methods), the client deadlocks
on the write methods (Write/ReadFrom) when the server drops the
connection. This is another case of broadcastErr() and
dispatchRequest() deadlocking against each other.
The fix is to increase the buffer of the channel used to communicate
about in-flight packets to maxConcurrentRequests+1 (the +1 is new). It
appears the client can have the full set of packets in flight and hit
the error (triggering broadcastErr) while still attempting one more
dispatchRequest call. More details are in that ticket.
I'm also bumping the channel buffer in the read methods to keep them
consistent. I can't reproduce the problem there, but the code looks
susceptible to it, so it may simply be a much harder race to trigger.
This also includes two tests which reproduce the issue for ReadFrom
and Write.
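
For illustration, here is a minimal sketch of the buffering pattern
described above. It is not the library's code; maxConcurrentRequests
and its value are assumed for the example.

```go
// Minimal sketch (not the library's code): size the per-transfer result
// channel so that an error broadcast for every in-flight packet, plus one
// more dispatch racing that broadcast, can never block the sender.
package main

import (
	"errors"
	"fmt"
)

const maxConcurrentRequests = 64 // assumed value for illustration

func main() {
	// One slot per possible in-flight packet, plus one for the request
	// that may still be dispatched after the error broadcast begins.
	results := make(chan error, maxConcurrentRequests+1)

	// broadcastErr-style fan-out: the server died, so every pending
	// request receives the error.
	for i := 0; i < maxConcurrentRequests; i++ {
		results <- errors.New("connection lost")
	}

	// One extra dispatch can still happen before the loop notices; the +1
	// buffer keeps this send from deadlocking the dispatching goroutine.
	results <- errors.New("connection lost")

	fmt.Println("queued errors:", len(results))
}
```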
Add support for the posix-rename@openssh.com extension, to support
actual rename() operations rather than the link() call effectively
mandated by the sftp v3/draft-2 requirement that targets not be
overwritten.
The client was deadlocking if you killed the server while a file
transfer was occurring. The problem occurred when many requests had
been sent and were awaiting responses at the moment the server died.
The errors for all of those pending requests are sent along the
response channel, which is also used for the errors reported by new
requests being made. The new-request channel send is in the same loop
as the channel reads, so when that send blocked because the other
errors had filled the channel, the program deadlocked.
There were two possible fixes: move the new-request error send into a
separate goroutine so it cannot block the loop, or increase the size
of the channel's buffer so it can hold all the errors. This implements
the latter.
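
Roughly, the two options look like this (a sketch with made-up names,
not the actual client code; errCh and numPending are illustrative):

```go
// Sketch of the two candidate fixes; errCh and numPending are illustrative.
package main

import "errors"

func main() {
	numPending := 32 // requests already awaiting responses when the server dies

	// Option 1: send the new request's error from its own goroutine so the
	// send can never block the loop that is also draining the channel.
	errCh := make(chan error)
	go func() { errCh <- errors.New("send failed") }()
	<-errCh

	// Option 2 (what this change implements): give the channel enough
	// buffer to hold an error for every pending request plus the new one.
	errCh = make(chan error, numPending+1)
	for i := 0; i <= numPending; i++ {
		errCh <- errors.New("connection lost") // never blocks
	}
}
```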
Included is a test that reproduces the problem most of the time. Due
to the timing required to trigger the issue, it could not be
reproduced 100% of the time.
Fixes issue #167
The file read/write client methods were breaking out of their
packet-handling loops with packets still 'in flight' (packets sent but
not yet answered by the server) when errors occurred. This caused
problems after the fact, with the late replies interfering with later
calls.
This change makes sure all in-flight packets are processed (even if
just discarded) before breaking out of the loop and returning.
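
A sketch of that draining pattern, with made-up names (inFlight,
results, drainAndReturn) standing in for the client's real loop state:

```go
// Sketch of draining in-flight replies instead of returning early.
package main

import (
	"errors"
	"fmt"
)

func drainAndReturn(results <-chan error, inFlight int) error {
	var firstErr error
	for inFlight > 0 {
		err := <-results
		inFlight--
		if err != nil && firstErr == nil {
			// Remember the first error, but keep receiving so no stray
			// reply is left behind to confuse a later call on the same
			// client.
			firstErr = err
		}
	}
	return firstErr
}

func main() {
	results := make(chan error, 3)
	results <- nil
	results <- errors.New("short read")
	results <- nil
	fmt.Println(drainAndReturn(results, 3))
}
```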
Fixes GitHub issue #145
Increase the buffer of the channel for dispatchRequest() responses to
the maximum possible difference between desiredInFlight and inFlight,
as this is the maximum number of error returns it could receive before
moving on to the channel-handling code.
The Cisco SSH 2.0 server doesn't fill out the "message" field, even
though it returns valid status codes. This makes the File.Read method
error out without actually reading the file, because unmarshalStatus
returns a "packet too short" error.
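
A tolerant parser for that case might look roughly like this. The
layout is simplified (a status code followed by optional message and
language strings) and is not a verbatim copy of the library's
unmarshalStatus:

```go
// Sketch of a status parser that treats the "message" field as optional,
// as some servers omit it. Layout is simplified for illustration.
package main

import (
	"encoding/binary"
	"fmt"
)

type status struct {
	code uint32
	msg  string
	lang string
}

// readString decodes a uint32-length-prefixed string, reporting false if
// the field is absent or truncated.
func readString(b []byte) (string, []byte, bool) {
	if len(b) < 4 {
		return "", b, false
	}
	n := binary.BigEndian.Uint32(b)
	b = b[4:]
	if uint64(len(b)) < uint64(n) {
		return "", b, false
	}
	return string(b[:n]), b[n:], true
}

func parseStatus(b []byte) (status, error) {
	var st status
	if len(b) < 4 {
		return st, fmt.Errorf("packet too short")
	}
	st.code = binary.BigEndian.Uint32(b)
	b = b[4:]
	// Treat message and language as optional instead of failing with
	// "packet too short" when a server stops after the status code.
	if s, rest, ok := readString(b); ok {
		st.msg, b = s, rest
	}
	if s, _, ok := readString(b); ok {
		st.lang = s
	}
	return st, nil
}

func main() {
	// A body containing only the status code (0 = OK) and nothing else.
	fmt.Println(parseStatus([]byte{0, 0, 0, 0}))
}
```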
When the writer returns an error, the code would corrupt the inFlight
counter, and as a result the loop would never finish.
We switch to using inFlight only to count requests to the sftp server;
we now delete entries from pendingWrites, so we can use
len(pendingWrites) for counting.
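
A small illustration of the delete-and-len counting technique (names
and values are made up, not the client's actual fields):

```go
// Sketch: derive the outstanding-work count from the map itself instead of
// a separate counter that an error path might forget to adjust.
package main

import "fmt"

func main() {
	pendingWrites := map[uint32][]byte{
		1: []byte("chunk one"),
		2: []byte("chunk two"),
	}

	// Retiring a request (whether it succeeded or the writer errored) is a
	// single delete; len(pendingWrites) is then always the true remaining
	// count, so the loop's exit condition cannot drift.
	delete(pendingWrites, 1)
	fmt.Println("pending:", len(pendingWrites))
}
```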
Reading and Writing are less coupled in the sftp client and server than
they may be over a traditional ReadWriter. This mirrors the full duplex
/ half close nature of a TCP connection.
In essence, conn.Reader cannot be closed locally; it can only close in
response to the remote side using conn.WriteCloser.Close(), or to a
network error, which is effectively the same thing.
Make this behaviour super clear by no longer smooshing the type into a
ReadWriteCloser.
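
A rough sketch of the shape this implies (not the library's actual
conn type): the two directions are kept as separate embedded
interfaces rather than one io.ReadWriteCloser, so Close only ever
applies to the write half.

```go
// Sketch of splitting the connection into its two directions; closing
// affects only the write half, while reads end when the remote side stops
// sending.
package main

import (
	"bytes"
	"io"
)

type conn struct {
	io.Reader      // incoming data; cannot be closed locally
	io.WriteCloser // outgoing data; Close() performs the local half-close
}

type nopWriteCloser struct{ io.Writer }

func (nopWriteCloser) Close() error { return nil }

func main() {
	c := conn{
		Reader:      bytes.NewBufferString("remaining replies"),
		WriteCloser: nopWriteCloser{io.Discard},
	}
	c.Close()                     // half-close: only the outgoing direction
	io.Copy(io.Discard, c.Reader) // reads continue until the remote ends
}
```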
sendRequest was not returning a value to the channel if there was
_not_ an error. How did this even work?!
Also remove server_test.go because it was wrong: it tested that
sending a properly formed but invalid packet (i.e., one requesting the
wrong extended packet) would cause an error during send, which is
wrong. sendRequest will only return an error if the client connection
was closed.
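
In other words, the waiter on the per-request channel must receive
exactly one value either way. A sketch with made-up names (result,
sendRequest here are illustrative, not the real implementation):

```go
// Sketch: the result channel must receive a value on the success path too,
// otherwise the caller waiting on it blocks forever.
package main

import "fmt"

type result struct {
	data []byte
	err  error
}

func sendRequest(ch chan<- result, fail bool) {
	if fail {
		ch <- result{err: fmt.Errorf("connection closed")}
		return
	}
	// The missing piece described above: without this send, a successful
	// request would leave the receiver stuck.
	ch <- result{data: []byte("reply")}
}

func main() {
	ch := make(chan result, 1)
	sendRequest(ch, false)
	fmt.Println(<-ch)
}
```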
Fixes #55
c.sendRequest always does the right thing, but a few fast paths were
not constructing a buffered channel, which would cause
c.dispatchRequest to block on error.
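
A minimal sketch of the pattern (hypothetical names): giving the
per-request channel a buffer of one means the error send inside the
dispatch path never blocks, even if the caller has not started
receiving yet.

```go
// Sketch: a one-slot buffer on the per-request channel keeps the
// dispatcher's error send from blocking when the caller is not yet
// receiving.
package main

import "fmt"

type result struct{ err error }

func dispatch(ch chan<- result) {
	// With cap(ch) >= 1 this send completes immediately.
	ch <- result{err: fmt.Errorf("connection closed")}
}

func main() {
	ch := make(chan result, 1) // the fast paths were using an unbuffered channel here
	dispatch(ch)
	fmt.Println(<-ch)
}
```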