Also, remove `Entry#extra=` as it makes no sense (and wasn't even being
tested).
And remove a slightly odd test that assumed an archive would not be
changed if its utime was changed - even if it was changed back
immediately. This test was merely confirming that we weren't catching
timestamp changes correctly.
Set the dirty flag to true by default - because a new `Entry` is dirty by
definition, having not yet been written. Then make sure that an `Entry`
created by reading from a zip file is marked as not dirty.
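For illustration, the intended semantics look something like this (a
sketch; the `dirty` accessor name is an assumption, not confirmed API):

```ruby
require 'zip'

# A brand new entry has not been written to an archive yet...
entry = Zip::Entry.new('archive.zip', 'file.txt')
entry.dirty # => true (assumed accessor name)

# ...whereas an entry read back from an existing archive matches what
# is on disk, so it starts out clean.
Zip::File.open('archive.zip') do |zip|
  zip.entries.first.dirty # => false (assumed accessor name)
end
```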
When passing an `Entry` to `File#get_output_stream`, the entry is used
to create a `StreamableStream`, which preserves all the info in the
entry, such as its timestamp. But then in `put_next_entry` all of that
is lost, due to the test for `kind_of?(Entry)`, which a
`StreamableStream` does not pass. See #503 for details.
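For reference, this is the kind of usage that was losing metadata (a
sketch; the file names and timestamp are illustrative, and an existing
`archive.zip` is assumed):

```ruby
require 'zip'

Zip::File.open('archive.zip') do |zip|
  # Build an entry with explicit metadata, then write through it.
  # Before this fix, put_next_entry did not recognise the
  # StreamableStream wrapping this entry, so the timestamp set below
  # was silently dropped.
  entry = Zip::Entry.new(zip.name, 'greeting.txt')
  entry.time = Zip::DOSTime.new(2001, 2, 3, 4, 5, 6)

  zip.get_output_stream(entry) { |os| os.write('Hello!') }
end
```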
This change tests for `StreamableStream`s in `put_next_entry` and uses
them directly. Some set-up within `Entry` needed to be made more robust
to cope with this, but otherwise it's a low-impact change that does fix
the problem.
The reason this case was being missed before is that the tests weren't
exercising `get_output_stream` with an `Entry` object, so I have added
that test as well.
Fixes #503.
`CentralDirectory` shouldn't be in the public API for rubyzip, and
there's nothing that `CentralDirectory::read_from_stream` did that
couldn't be done by just initializing an object first. Keeping it around
risked things getting out of date as we streamline and fix other things.
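Roughly, the replacement for the class-level helper is (a sketch):

```ruby
# Instead of the removed class method:
#   cdir = Zip::CentralDirectory.read_from_stream(io)
# create an instance and read into it:
cdir = Zip::CentralDirectory.new
cdir.read_from_stream(io)
```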
If a zip file has a comment that is 65,535 bytes long - a valid length,
and the maximum allowable - the initial read of the archive fails to
find the Zip64 End of Central Directory Locator and therefore cannot
read the rest of the file.
This commit fixes this by making sure that we look far enough back into
the file from the end to find this locator, and then use the
information in it to find the Zip64 End of Central Directory Record.
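The relationship being relied on is that the locator is a fixed 20 bytes
and sits immediately before the EoCDR, so once the EoCDR has been found -
however long the comment is - the locator's position is known exactly. A
sketch (constants and names are illustrative, not rubyzip's internals):

```ruby
ZIP64_EOCD_LOCATOR_SIGNATURE = [0x07064b50].pack('V') # "PK\x06\x07"
ZIP64_EOCD_LOCATOR_SIZE      = 20

def zip64_eocd_record_offset(io, eocd_offset)
  locator_offset = eocd_offset - ZIP64_EOCD_LOCATOR_SIZE
  return nil if locator_offset.negative?

  io.seek(locator_offset)
  locator = io.read(ZIP64_EOCD_LOCATOR_SIZE)
  return nil unless locator.start_with?(ZIP64_EOCD_LOCATOR_SIGNATURE)

  # Bytes 8-15 hold the offset of the Zip64 EoCD Record (8-byte LE).
  locator[8, 8].unpack1('Q<')
end
```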
Test added to catch regressions.
Fixes #509.
This method provides a shortcut to finding out how many entries are in
an archive: it reads the number directly from the central directory
instead of iterating through the entire set of entries.
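The count is available as a dedicated field in the End of Central
Directory Record, so the lookup is a direct read. A minimal sketch,
assuming a helper has already located the EoCDR (the names here are
illustrative, not rubyzip's actual API):

```ruby
# EoCDR layout: signature (4 bytes), disk numbers (2 + 2), entries on
# this disk (2), total entries (2), ... The total sits at offset 10.
def entry_count(io, eocd_offset)
  io.seek(eocd_offset + 10)
  io.read(2).unpack1('v') # 2-byte little-endian entry count
end
```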
When loading extra fields from both the central directory and local headers,
unknown fields were not merged correctly. They were being appended, which
meant that we ended up with the two versions stuck together - in some
cases duplicating the field completely.
This broke all kinds of things (like calculating the size of a local
header) in subtle ways.
This commit fixes this by implementing a new `Unknown` extra field type,
and making sure that when reading local and central extra fields they
are stored and preserved correctly. We cannot assume the unknown fields
use the same data in the local and central headers.
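A minimal sketch of the idea (class and method names are illustrative,
not necessarily what rubyzip uses):

```ruby
# Keep unknown extra field data from the local and central headers in
# separate buffers, since the two headers are allowed to differ.
class UnknownExtraField
  attr_reader :local_data, :central_data

  def initialize
    @local_data   = +''
    @central_data = +''
  end

  # Append raw field data to the correct buffer rather than mixing
  # local and central versions together.
  def merge(binstr, local: false)
    if local
      @local_data << binstr
    else
      @central_data << binstr
    end
  end
end
```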
Fixes #505.
If a zip file has a comment that is 65,535 bytes long - a valid length,
and the maximum allowable - the initial read of the archive fails to
find the End of Central Directory Record and therefore cannot read the
rest of the file.
This commit fixes this by making sure that we look far enough back into
the file from the end to find the EoCDR. Test added to catch
regressions.
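The key point is the size of the backwards read: the EoCDR is 22 bytes
plus a comment of up to 65,535 bytes, so scanning any less than the last
65,557 bytes of the file can miss it. A sketch of the scan (constants
and names are illustrative):

```ruby
EOCD_SIGNATURE = [0x06054b50].pack('V') # "PK\x05\x06"
EOCD_BASE_SIZE = 22
MAX_COMMENT    = 0xFFFF

def eocd_offset(io)
  io.seek(0, IO::SEEK_END)
  read_size = [EOCD_BASE_SIZE + MAX_COMMENT, io.pos].min
  io.seek(-read_size, IO::SEEK_END)
  tail = io.read(read_size)

  # Take the last signature match as the candidate EoCDR. (A robust
  # implementation also checks that the record's comment-length field
  # matches the number of bytes remaining after it.)
  index = tail.rindex(EOCD_SIGNATURE)
  raise 'End of Central Directory Record not found' unless index

  io.pos - read_size + index
end
```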
Fixes #508.
We were previously trying to work out where the next entry would be,
even with GP bit 3 set, but the logic was flaky and could not really be
correct given the data available: with GP bit 3 set, the sizes are only
recorded in a data descriptor after the compressed data, so the position
of the next entry cannot be computed up front. Guessing is not expected
behaviour, so raise the error instead.
This means that we get rid of the incorrect `Entry.data_descriptor_size`,
which was doing more harm than good.
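From the caller's side this now surfaces as an error to rescue when
streaming such an entry (a sketch; the exact error class name used here
is an assumption):

```ruby
require 'zip'

begin
  Zip::InputStream.open('streamed.zip') do |io|
    while (entry = io.get_next_entry)
      puts entry.name
    end
  end
rescue Zip::GPFlagBit3Error # assumed error class name
  # With GP bit 3 set, sizes live in a data descriptor after the
  # compressed data, so the next entry's position can't be known up
  # front when streaming; handle or report accordingly.
end
```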
`test_read_from_truncated_zip_file` was not testing what it thought it
was. It was testing whether we caught an out-of-bounds cdir offset, not
whether we caught a corrupted cdir entry.
This commit embraces the actual behaviour and tests that we catch an
out-of-bounds error for both standard `IO`s and `StringIO`s.