After processing a packet, we keep the allocated slices in memory and reuse
them for new packets.
Slices are allocated in:
- recvPacket
- when we receive an sshFxpReadPacket (downloads)
Each allocated slice has a fixed size of maxMsgLength.
Allocated slices are keyed by the request order id and are marked for reuse
after the request is served in maybeSendPackets.
The allocator is added to the packetManager struct and is cleaned up at the
end of the Serve() function.
This allocation mode is optional and disabled by default.
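As a rough illustration of the scheme described above, an allocator keyed by request order id might look like the sketch below. The names (GetPage, ReleasePages, Free) and the page size constant are illustrative, not necessarily the library's exact API.

```go
// Package allocexample sketches a fixed-size page allocator keyed by request
// order id. It is illustrative only, not the library's actual implementation.
package allocexample

import "sync"

// maxMsgLength stands in for the library's fixed message size constant.
const maxMsgLength = 256 * 1024

type allocator struct {
	sync.Mutex
	available [][]byte            // pages ready for reuse
	used      map[uint32][][]byte // pages in flight, keyed by request order id
}

func newAllocator() *allocator {
	return &allocator{used: make(map[uint32][][]byte)}
}

// GetPage returns a reusable page if one is available, otherwise it allocates
// a new fixed-size slice, and tracks it under the given request order id.
func (a *allocator) GetPage(requestOrderID uint32) []byte {
	a.Lock()
	defer a.Unlock()

	var page []byte
	if n := len(a.available); n > 0 {
		page = a.available[n-1]
		a.available = a.available[:n-1]
	} else {
		page = make([]byte, maxMsgLength)
	}
	a.used[requestOrderID] = append(a.used[requestOrderID], page)
	return page
}

// ReleasePages marks every page of a served request as reusable; this is what
// would run once the request has been answered (e.g. from maybeSendPackets).
func (a *allocator) ReleasePages(requestOrderID uint32) {
	a.Lock()
	defer a.Unlock()
	a.available = append(a.available, a.used[requestOrderID]...)
	delete(a.used, requestOrderID)
}

// Free drops every tracked page, e.g. at the end of Serve().
func (a *allocator) Free() {
	a.Lock()
	defer a.Unlock()
	a.available = nil
	a.used = make(map[uint32][][]byte)
}
```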
An Open packet would trigger the use of a worker pool; then a Stat packet
would come in, go to the pool, and return faster than the Open (returning a
file-not-found error). This fixes that by eliminating the pool/non-pool
state switching.
The included test doesn't really exercise it fully, as it cannot inject
a delay in the right place to trigger the race. I plan on adding a means
to inject some logic into the packet handling in the future once I
rewrite the old filesystem server code as a request-server backend.
Fixes #265
Previous code used the request ids to do ordering. This worked until a
client came along that used unordered request ids. This reworks the
ordering to use an internal counter (per session) to order all packets
ensuring that responses are sent in the same order as the requests were
received.
Fixes #260
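A minimal sketch of the counter-based ordering described above, with hypothetical type and field names: each packet is stamped with a per-session order id on receipt, and responses are flushed strictly in that order, regardless of the client-chosen request ids.

```go
// Package orderexample sketches per-session counter ordering: responses go out
// in the order requests were received, regardless of client request ids.
package orderexample

import "sort"

type orderedPacket struct {
	orderID uint32 // per-session counter value, not the client request id
	payload []byte
}

type session struct {
	counter  uint32          // increments for every incoming packet
	nextSend uint32          // order id of the next response to flush
	outgoing []orderedPacket // finished responses waiting for their turn
}

// stamp assigns the next order id to an incoming packet.
func (s *session) stamp(payload []byte) orderedPacket {
	s.counter++
	return orderedPacket{orderID: s.counter, payload: payload}
}

// ready queues a finished response and flushes everything that is now in order.
func (s *session) ready(p orderedPacket, send func([]byte)) {
	s.outgoing = append(s.outgoing, p)
	sort.Slice(s.outgoing, func(i, j int) bool {
		return s.outgoing[i].orderID < s.outgoing[j].orderID
	})
	for len(s.outgoing) > 0 && s.outgoing[0].orderID == s.nextSend+1 {
		send(s.outgoing[0].payload)
		s.nextSend++
		s.outgoing = s.outgoing[1:]
	}
}
```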
Split cleanPath into cleanPacketPath and cleanPath for better handling of slashes in file paths (see the sketch after this list)
Added test for cleanPath func
Removed code duplication => filepath.ToSlash(filepath.Clean(...)) => cleanPath(...)
Fixed tests for runLs to match year or time
Renamed constants to fit hound rules
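The split might look roughly like this; only the filepath.ToSlash(filepath.Clean(...)) core comes from the change above, while cleanPacketPath's signature and behavior here are an illustrative guess.

```go
// Package pathexample sketches the two helpers; only cleanPath's body is taken
// from the change description, cleanPacketPath is an illustrative guess.
package pathexample

import (
	"path"
	"path/filepath"
)

// cleanPath replaces the repeated filepath.ToSlash(filepath.Clean(...)) calls:
// it resolves "."/".." segments and normalizes to forward slashes.
func cleanPath(p string) string {
	return filepath.ToSlash(filepath.Clean(p))
}

// cleanPacketPath (hypothetical) additionally anchors relative paths coming
// from a packet at a base directory before cleaning them.
func cleanPacketPath(baseDir, p string) string {
	if !path.IsAbs(p) {
		p = path.Join(baseDir, p)
	}
	return cleanPath(p)
}
```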
There is a data race with the waitgroup (wg) object used to synchronize
the workers with the server exit. The workers called wg.Add()
asynchronously and it was possible for the Wait() to get hit before any
of the Add() calls were made in certain conditions. I only ever saw this
sporadically in the Travis tests.
This fixes it by making the wg.Add() calls synchronous.
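A minimal sketch of the pattern, assuming a generic worker loop: the Add must happen on the goroutine that will later call Wait, before each worker goroutine is launched.

```go
// Package wgexample shows the synchronous wg.Add pattern used for the fix.
package wgexample

import "sync"

func runWorkers(n int, work func(int)) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		// Synchronous Add: guaranteed to be registered before Wait below.
		// The racy variant called wg.Add(1) inside the goroutine, so Wait
		// could return before any worker had registered itself.
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			work(id)
		}(i)
	}
	wg.Wait()
}
```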
The worker/packet management code needs to be in the packet manager so
the request-server can utilize it as well. This also improves the
encapsulation of the method as it relied on internal data that should be
better isolated inside the file/struct.
File operations that happen after the open packet has been received, such as
reads and writes, can be done with the pool, as the order they run in doesn't
matter (the packets contain the file offsets).
Command operations, on the other hand, need to be serialized.
This flips between a pool of workers for file operations and a single
worker for everything else. It flips on Open and Close packets.
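Sketched with hypothetical types, the flip amounts to routing packets to one of two channels and toggling on Open/Close packets; this is the pool/non-pool switching that the race fix earlier in this log removes.

```go
// Package flipexample sketches switching between a worker pool and a single
// serial worker on Open/Close packets. Types and channels are illustrative.
package flipexample

type packet interface{ requestID() uint32 }

type openPacket struct{ id uint32 }
type closePacket struct{ id uint32 }

func (p openPacket) requestID() uint32  { return p.id }
func (p closePacket) requestID() uint32 { return p.id }

// dispatch sends command packets to the single serial worker and, once a file
// has been opened, fans read/write packets out to the pooled workers.
func dispatch(in <-chan packet, pooled, serial chan<- packet) {
	usePool := false
	for pkt := range in {
		switch pkt.(type) {
		case openPacket:
			serial <- pkt  // the Open itself stays ordered
			usePool = true // subsequent file ops may run concurrently
			continue
		case closePacket:
			usePool = false // back to strict ordering for commands
		}
		if usePool {
			pooled <- pkt
		} else {
			serial <- pkt
		}
	}
	close(pooled)
	close(serial)
}
```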
The incoming queue is not guaranteed to contain request IDs in ascending
order. This can happen when a single sftp.Client connection is used from
multiple goroutines. Sort the incoming queue to avoid a livelock caused by
mismatched request/response order, like this:
2017/03/27 18:29:07 incoming: [55 56 54 57 58]
2017/03/27 18:29:07 outgoing: [54 55 56 57 58]
For single-threaded clients, request/response order will remain intact
and nothing should break.
Signed-off-by: Pavel Borzenkov <pavel.borzenkov@gmail.com>
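A sketch of the sort itself, using a hypothetical requestPacket interface: the incoming queue is ordered by request id before responses are matched up, turning [55 56 54 57 58] into [54 55 56 57 58].

```go
// Package sortexample sketches sorting the incoming queue by request id.
package sortexample

import "sort"

type requestPacket interface {
	id() uint32
}

// sortIncoming reorders the queued requests by id so the response loop never
// waits on a response whose request is stuck behind a higher id.
func sortIncoming(queue []requestPacket) {
	sort.Slice(queue, func(i, j int) bool {
		return queue[i].id() < queue[j].id()
	})
}
```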
I noticed a significant slowdown in throughput, as measured by the benchmarks,
when using the pre-go1.8 sort.Sort() method for sorting. So I decided to
split this out behind build flags so people could recover the lost
performance by upgrading to go1.8+.
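The split could look like the two files below, with hypothetical file and package names: a go1.8+ file using sort.Slice and a legacy file falling back to a sort.Interface implementation with sort.Sort, selected by build tags.

```go
// sort_go18.go (hypothetical file name)

// +build go1.8

package buildtagexample

import "sort"

// sortIDs uses sort.Slice, available from go1.8, which benchmarked faster
// than the sort.Sort path in the legacy file.
func sortIDs(ids []uint32) {
	sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
}
```

```go
// sort_legacy.go (hypothetical file name)

// +build !go1.8

package buildtagexample

import "sort"

type uint32Slice []uint32

func (s uint32Slice) Len() int           { return len(s) }
func (s uint32Slice) Less(i, j int) bool { return s[i] < s[j] }
func (s uint32Slice) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

// sortIDs falls back to sort.Sort for Go versions before 1.8.
func sortIDs(ids []uint32) { sort.Sort(uint32Slice(ids)) }
```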