Previously, the central directory Zip64 data was written even when it wasn't
strictly needed. The standard allows entries to include Zip64 data
(say, if they are streamed and their size is unknown when writing the file
data) without needing any Zip64 data in the central directory. So now we
only write central directory Zip64 data if there are more than 65,535
entries, or if sizes or offsets are too large to fit in the standard
32-bit fields.
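Conceptually, the new check looks something like this (a minimal sketch; the
method and parameter names are illustrative, not rubyzip's actual internals,
but the limits are those defined by the Zip64 specification):

```ruby
# Illustrative sketch only: names are hypothetical.
def zip64_cdir_required?(entries, cdir_offset, cdir_size)
  entries.size > 0xFFFF ||      # more than 65,535 entries
    cdir_offset > 0xFFFFFFFF || # central directory starts beyond 4GB
    cdir_size > 0xFFFFFFFF      # central directory itself exceeds 4GB
end
```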
With Zip64 write support enabled by default, it's important that we
only store the extra data when we need to. This commit ensures that
the Zip64 extra data is included for an entry only if its size is over
4GB, or if we don't know how big it will be when writing the local
header.
This commit also removes the need for the Zip64Placeholder extra
data field. Now we just use the Zip64 field itself and ensure it's
filled in correctly.
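The per-entry decision can be pictured like this (a hedged sketch; these
names are assumptions, not the real implementation):

```ruby
# Hypothetical sketch of the per-entry check. A nil size stands in for
# "unknown at the time the local header is written".
def needs_zip64_extra?(entry)
  entry.size.nil? ||                        # streamed: size unknown up front
    entry.size > 0xFFFFFFFF ||              # uncompressed size over 4GB
    entry.compressed_size.to_i > 0xFFFFFFFF # compressed size over 4GB
end
```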
`CentralDirectory` shouldn't be in the public API for rubyzip, and
there's nothing that `CentralDirectory::read_from_stream` did that
couldn't be done by initializing an object first. Keeping it around
risked things getting out of date as we streamline and fix other things.
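Internally, the equivalent is simply (a sketch, assuming the instance
method keeps the `read_from_stream` name):

```ruby
require 'zip'

# Before (removed class method):
#   cdir = Zip::CentralDirectory.read_from_stream(io)
# After: initialize first, then read.
cdir = Zip::CentralDirectory.new
cdir.read_from_stream(io)
```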
`test_read_from_truncated_zip_file` was not testing what it thought it
was. It was testing whether we caught an out-of-bounds cdir offset, not
whether we caught a corrupted cdir entry.
This commit embraces the actual behaviour and tests that we catch an
out-of-bounds error for both standard `IO`s and `StringIO`s.
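In outline, the updated test asserts the same error from both kinds of
stream; this is a hedged sketch inside the existing Minitest case, where
the fixture name is hypothetical and `Zip::Error` stands in for whatever
error class is actually raised:

```ruby
def test_read_from_out_of_bounds_cdir_offset
  # TRUNCATED_ZIP is a hypothetical fixture whose central directory
  # offset points past the end of the file.
  assert_raises(Zip::Error) do
    Zip::File.open(TRUNCATED_ZIP)
  end
  assert_raises(Zip::Error) do
    Zip::File.open_buffer(StringIO.new(File.binread(TRUNCATED_ZIP)))
  end
end
```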
This change has several benefits:
* When errors occur, the test provides useful feedback, showing you
expected vs. actual (see the sketch after this list).
* We no longer need to open and modify the Enumerable module.
* The test is more readable.
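For example, a plain `assert_equal` on arrays gives that expected-vs-actual
output for free, with no `Enumerable` monkey-patching (illustrative only;
the names are not taken from the actual test):

```ruby
expected = ['file1.txt', 'file2.txt']
assert_equal(expected.sort, zip_file.entries.map(&:name).sort)
```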
This greatly simplifies the creation of `Entry` objects when only a
couple of fields differ from their defaults, while still allowing an
`Entry` to be fully configured at creation time when appropriate.
This fundamentally changes the `Entry` API, which means that some
convenience methods in `OutputStream` and `File` have had to be
refactored.
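A hedged sketch of what creation now looks like; the exact keyword names
here are assumptions based on the description above, not the definitive
signature:

```ruby
require 'zip'

# Everything defaulted except the fields you care about:
entry = Zip::Entry.new('archive.zip', 'big.bin',
                       compression_method: Zip::Entry::STORED)

# Or fully configured at creation time:
entry = Zip::Entry.new('archive.zip', 'big.bin',
                       compression_method: Zip::Entry::STORED,
                       time:               Zip::DOSTime.now,
                       comment:            'stored uncompressed on purpose')
```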