We need to add a case for this packet inside the packet worker, otherwise it
will be handled by the hasHandle case and turned into a "Put" request.
On the client side, if Truncate is called on an open file we should send an
FSETSTAT packet, since the request targets the handle, and not a SETSTAT
packet, which is meant for paths rather than handles.
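Below is a minimal, self-contained sketch of that distinction. The packet
structs and field names are illustrative (loosely following the SFTP v3
draft), not the library's internal types.

    package sftpexample

    // SSH_FILEXFER_ATTR_SIZE from the SFTP v3 draft: only the size
    // attribute is set when truncating.
    const fileXferAttrSize uint32 = 0x00000001

    // SETSTAT addresses a file by path ...
    type setstatPacket struct {
        ID    uint32
        Path  string // SSH_FXP_SETSTAT targets a path
        Flags uint32
        Size  uint64
    }

    // ... while FSETSTAT addresses an open handle.
    type fsetstatPacket struct {
        ID     uint32
        Handle string // SSH_FXP_FSETSTAT targets an open handle
        Flags  uint32
        Size   uint64
    }

    // truncateOpenFile sketches the client-side rule: a Truncate on an
    // open file builds its request against the handle, not the path.
    func truncateOpenFile(handle string, id uint32, size uint64) fsetstatPacket {
        return fsetstatPacket{
            ID:     id,
            Handle: handle,
            Flags:  fileXferAttrSize,
            Size:   size,
        }
    }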
After processing a packet we keep the allocated slices in memory and reuse
them for new packets.
Slices are allocated in:
- recvPacket
- when we receive a sshFxpReadPacket (downloads)
The allocated slices have a fixed size equal to maxMsgLength.
Allocated slices are keyed by the request order id and are marked for reuse
in maybeSendPackets once the request has been served.
The allocator is added to the packetManager struct and is cleaned up at the
end of the Serve() function.
This allocation mode is optional and disabled by default; a simplified
sketch of the allocator follows.
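Names and the page size below are illustrative, not the committed
implementation.

    package sftpexample

    import "sync"

    const maxMsgLength = 256 * 1024 // illustrative page size

    // allocator hands out fixed-size pages keyed by the request order id
    // and recycles them once that request has been served.
    type allocator struct {
        mu        sync.Mutex
        available [][]byte            // pages ready for reuse
        used      map[uint32][][]byte // pages still referenced by an order id
    }

    func newAllocator() *allocator {
        return &allocator{used: make(map[uint32][][]byte)}
    }

    // GetPage returns a maxMsgLength-sized slice, reusing a released page
    // when one is available, and records it against the given order id.
    func (a *allocator) GetPage(requestOrderID uint32) []byte {
        a.mu.Lock()
        defer a.mu.Unlock()
        var page []byte
        if n := len(a.available); n > 0 {
            page, a.available = a.available[n-1], a.available[:n-1]
        } else {
            page = make([]byte, maxMsgLength)
        }
        a.used[requestOrderID] = append(a.used[requestOrderID], page)
        return page
    }

    // ReleasePages marks the pages of a served request as reusable; this
    // is the step performed from maybeSendPackets.
    func (a *allocator) ReleasePages(requestOrderID uint32) {
        a.mu.Lock()
        defer a.mu.Unlock()
        a.available = append(a.available, a.used[requestOrderID]...)
        delete(a.used, requestOrderID)
    }

    // Free drops everything; performed when Serve() returns.
    func (a *allocator) Free() {
        a.mu.Lock()
        defer a.mu.Unlock()
        a.available, a.used = nil, make(map[uint32][][]byte)
    }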
Before this change, File.WriteTo() used Fstat to discover the length of
the file being transferred. It appears that some SFTP servers do not
implement this properly, perhaps because it is a seldom-used call.
After this change we replace the Fstat on the file handle with a Stat of
the path. Stat is a commonly used function and is implemented correctly by
both of the servers that had the problem with Fstat, thus working around
the problem.
The code before and after uses the same number of SFTP roundtrips so
performance should be identical.
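For context, here are the two ways of asking for the size through the
public API, as a sketch (the client and path are placeholders); both cost
a single round trip.

    package sftpexample

    import "github.com/pkg/sftp"

    // sizeByStat is what WriteTo now effectively relies on: a STAT of the
    // path (SSH_FXP_STAT), which is widely implemented correctly.
    func sizeByStat(client *sftp.Client, path string) (int64, error) {
        fi, err := client.Stat(path)
        if err != nil {
            return 0, err
        }
        return fi.Size(), nil
    }

    // sizeByFstat is the old approach: FSTAT on the open handle
    // (SSH_FXP_FSTAT), which some servers answer incorrectly.
    func sizeByFstat(f *sftp.File) (int64, error) {
        fi, err := f.Stat()
        if err != nil {
            return 0, err
        }
        return fi.Size(), nil
    }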
Fixes #288
Fixes #234
Similar to #181 (which was about the read methods), the client deadlocks
on a server drop in the write methods (Write/ReadFrom). This is another
case of broadcastErr() and dispatchRequest() deadlocking.
The fix is to bump the channel used to communicate about in-flight packets
up to maxConcurrentRequests+1 (the +1 is new). The loop can have the full
set of packets in flight and hit the error (triggering broadcastErr) while
still trying to call dispatchRequest one more time. More details are in
that ticket.
I'm also bumping up the chan buffer in the read methods to keep them
consistent. I can't reproduce the problem there, but they appear exposed
to the same race; it may just be much harder to trigger.
This also includes 2 tests which reproduce the issue for ReadFrom and
Write.
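The sketch below shows the shape of such a test rather than the committed
ones; the sftp-server path, file path, data size and timings are
assumptions.

    package sftp_test

    import (
        "bytes"
        "os/exec"
        "testing"
        "time"

        "github.com/pkg/sftp"
    )

    // TestReadFromAfterServerDeath starts an sftp-server subprocess,
    // begins a large upload and kills the server while requests are in
    // flight; the upload must return an error instead of hanging.
    func TestReadFromAfterServerDeath(t *testing.T) {
        cmd := exec.Command("/usr/lib/openssh/sftp-server")
        stdin, err := cmd.StdinPipe()
        if err != nil {
            t.Fatal(err)
        }
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            t.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            t.Skipf("cannot start sftp-server: %v", err)
        }
        defer cmd.Process.Kill()

        client, err := sftp.NewClientPipe(stdout, stdin)
        if err != nil {
            t.Fatal(err)
        }

        f, err := client.Create("/tmp/deadlock-repro")
        if err != nil {
            t.Fatal(err)
        }

        // Kill the server while many write requests are still outstanding.
        go func() {
            time.Sleep(10 * time.Millisecond)
            cmd.Process.Kill()
        }()

        done := make(chan struct{})
        go func() {
            defer close(done)
            // Expected to fail once the server is gone; the point is that
            // it returns at all.
            f.ReadFrom(bytes.NewReader(make([]byte, 64<<20)))
        }()

        select {
        case <-done:
            // returned (with an error): no deadlock
        case <-time.After(10 * time.Second):
            t.Fatal("client deadlocked after server death")
        }
    }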
to support actual rename() operations rather than the link() call
effectively mandated by the sftp v3/draft-2 requirement that targets
not be overwritten.
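For reference, the semantic difference also shows up in the public client
API; the sketch below (the fallback logic and function name are
illustrative, not necessarily what this change added) prefers the
posix-rename@openssh.com extension and falls back to the plain v3 rename.

    package sftpexample

    import "github.com/pkg/sftp"

    // renameOverwriting prefers PosixRename, which uses the
    // posix-rename@openssh.com extension to get rename(2) semantics
    // (overwriting an existing target), and falls back to Rename, whose
    // SSH_FXP_RENAME semantics let many servers refuse to overwrite.
    func renameOverwriting(client *sftp.Client, oldname, newname string) error {
        if err := client.PosixRename(oldname, newname); err == nil {
            return nil
        }
        return client.Rename(oldname, newname)
    }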
The client was deadlocking if you killed the server while a file transfer
was occurring. The problem occurred when many requests had been sent and
were waiting on responses as the server died. The errors for all of those
pending requests are sent along the response channel, which is also used
for the errors reported by new requests being made. The send for a new
request is in the same loop as the channel reads, so when it blocked
because the other errors had filled the channel, it deadlocked the program.
There were two possible fixes: move the new-request error send into a
separate goroutine so it cannot block the loop, or increase the size of
the buffer so it can hold all of the errors. This implements the latter.
Included is a test that reproduces the problem most of the time. Due to
the timing required to hit the issue, it was impossible to reproduce it
100% of the time.
Fixes issue #167
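A minimal, self-contained illustration of the failure mode and of the
buffer-sizing fix (the constant, its value and the messages are made up;
this is not the client's actual loop):

    package main

    import (
        "errors"
        "fmt"
    )

    const maxPending = 8 // stand-in for the client's request limit

    func main() {
        // One channel carries both the errors broadcast for requests that
        // are already pending and the error for a new request made from
        // inside the read loop. Buffering it for the worst case keeps the
        // in-loop send from blocking the only goroutine draining it.
        ch := make(chan error, maxPending)

        // The server dies while several requests await responses: each of
        // them has an error pushed onto the channel.
        pending := maxPending - 1
        for i := 0; i < pending; i++ {
            ch <- errors.New("connection lost")
        }

        // The loop issues one more request before it reads anything; its
        // failure is reported on the same channel. With an undersized
        // buffer this send blocks and, since the reads below live in the
        // same loop, nothing ever drains the channel: a deadlock.
        ch <- errors.New("dispatch failed: connection lost")
        pending++

        for i := 0; i < pending; i++ {
            fmt.Println(<-ch)
        }
    }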
The file read/write client methods were breaking out of their packet
handling loops with packets still 'in flight' (packets sent without a
reply from the server) when errors occurred. This caused problems later,
with the returning packets interfering with subsequent calls.
This change makes sure all of the in-flight packets are processed (even if
just discarded) before breaking out of the loop and returning.
Fixes github issue #145
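A schematic of the pattern (simplified, not the actual client loop): the
first error is remembered instead of being returned immediately, and the
loop keeps receiving until nothing is left in flight.

    package sftpexample

    // result stands in for whatever a single reply carries.
    type result struct {
        data []byte
        err  error
    }

    // drainInFlight stops doing useful work after the first error but
    // keeps consuming (and discarding) replies until every packet that
    // was sent has been answered, then returns the first error seen.
    func drainInFlight(responses <-chan result, inFlight int, handle func([]byte) error) error {
        var firstErr error
        for inFlight > 0 {
            res := <-responses
            inFlight--
            if firstErr != nil {
                continue // already failing: discard, but keep draining
            }
            if res.err != nil {
                firstErr = res.err
                continue
            }
            if err := handle(res.data); err != nil {
                firstErr = err
            }
        }
        return firstErr
    }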
Increase the buffer of the channel for dispatchRequest() responses to be
equal to the maximum possible difference between desiredInFlight and
inFlight, as that is the maximum number of error returns it could receive
before moving on to the channel handling code.
The Cisco SSH 2.0 server doesn't fill out the "message" field, even though
it returns valid status codes. This makes the File.Read method error out
without actually reading the file, because unmarshalStatus returns a
"packet too short" error.
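A sketch of a tolerant parser for the SSH_FXP_STATUS body (names here are
made up, not pkg/sftp's): the id and status code stay required, but the
message and language-tag strings are treated as optional.

    package sftpexample

    import (
        "encoding/binary"
        "errors"
    )

    var errShortPacket = errors.New("packet too short")

    // parseStatus decodes the id, the status code and, when present, the
    // message and language tag of a status packet body.
    func parseStatus(data []byte) (id, code uint32, msg, lang string, err error) {
        if len(data) < 8 {
            return 0, 0, "", "", errShortPacket
        }
        id = binary.BigEndian.Uint32(data[0:4])
        code = binary.BigEndian.Uint32(data[4:8])
        rest := data[8:]
        msg, rest = readStringLenient(rest)
        lang, _ = readStringLenient(rest)
        return id, code, msg, lang, nil
    }

    // readStringLenient reads a length-prefixed string, returning "" rather
    // than an error when the field is missing or truncated.
    func readStringLenient(b []byte) (string, []byte) {
        if len(b) < 4 {
            return "", nil
        }
        n := binary.BigEndian.Uint32(b[:4])
        b = b[4:]
        if uint64(len(b)) < uint64(n) {
            return "", nil
        }
        return string(b[:n]), b[n:]
    }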
When the writer returns an error, the code would mess up the inFlight
counter, and as a result the loop would never finish.
We switch to using inFlight only to count requests to the sftp server; we
now delete entries from pendingWrites, so len(pendingWrites) can be used
to count what is still pending.
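As a sketch of the bookkeeping (not the real write loop; names follow the
description above): out-of-order chunks live in pendingWrites keyed by
offset and are deleted as soon as they are handled, so len(pendingWrites)
stays an accurate count even when the writer errors out partway through.

    package sftpexample

    import "io"

    // flushPending writes whatever contiguous chunks have arrived, deleting
    // each from the map before writing it so the pending count cannot drift
    // when w.Write fails.
    func flushPending(w io.Writer, pendingWrites map[uint64][]byte, offset uint64) (uint64, error) {
        for {
            data, ok := pendingWrites[offset]
            if !ok {
                return offset, nil // next in-order chunk has not arrived yet
            }
            delete(pendingWrites, offset)
            n, err := w.Write(data)
            offset += uint64(n)
            if err != nil {
                return offset, err
            }
        }
    }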
Reading and Writing are less coupled in the sftp client and server than
they may be over a traditional ReadWriter. This mirrors the full duplex
/ half close nature of a TCP connection.
In essence, conn.Reader cannot be closed locally; it can only close in
response to the remote side using conn.WriteCloser.Close(), or to a
network error, which is effectively the same thing.
Make this behaviour super clear by no longer smooshing the type into a
ReadWriteCloser.
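The resulting shape, roughly (a simplified sketch, not the exact struct):

    package sftpexample

    import "io"

    // conn keeps the read and write halves as separate embedded interfaces
    // instead of a single io.ReadWriteCloser, so closing only ever applies
    // to the write side.
    type conn struct {
        io.Reader      // never closed locally; ends when the remote closes
        io.WriteCloser // Close() here is the local half-close
    }

    // Close closes only the write half; the read half keeps draining until
    // the remote side closes or the network errors out.
    func (c *conn) Close() error { return c.WriteCloser.Close() }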