Kitxuuu committed on
Commit 0711113 · verified · 1 Parent(s): b6a9216

Add files using upload-large-folder tool

Files changed (20)
  1. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/COMPRESS-477/split_zip_created_by_winrar/file_to_compare_1 +1297 -0
  2. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/COMPRESS-477/split_zip_created_by_zip/file_to_compare_1 +38 -0
  3. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/COMPRESS-477/split_zip_created_by_zip/file_to_compare_2 +79 -0
  4. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_length-fail.ar +8 -0
  5. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_bsd-fail.ar +5 -0
  6. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_gnu1-fail.ar +8 -0
  7. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_gnu2-fail.ar +6 -0
  8. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_gnu3-fail.ar +0 -0
  9. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_user-fail.ar +8 -0
  10. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/fuzz/crash-f2efd9eaeb86cda597d07b5e3c3d81363633c2da +0 -0
  11. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-daemon/procrunr.ico +0 -0
  12. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-daemon/procruns.ico +0 -0
  13. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-daemon/procrunw.ico +0 -0
  14. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-imaging/OutOfMemory_epine.ico +0 -0
  15. local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/pack/signatures_oom.pack +0 -0
  16. local-test-commons-compress-delta-02/fuzz-tooling/infra/build_specified_commit.py +410 -0
  17. local-test-commons-compress-delta-02/fuzz-tooling/infra/cifuzz/test_data/external-project/.clusterfuzzlite/build.sh +24 -0
  18. local-test-commons-compress-delta-02/fuzz-tooling/infra/constants.py +49 -0
  19. local-test-commons-compress-delta-02/fuzz-tooling/infra/tools/wycheproof/.gitignore +1 -0
  20. local-test-commons-compress-delta-02/fuzz-tooling/infra/tools/wycheproof/generate_job.py +50 -0
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/COMPRESS-477/split_zip_created_by_winrar/file_to_compare_1 ADDED
@@ -0,0 +1,1297 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.commons.compress.archivers.zip;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.PushbackInputStream;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.zip.CRC32;
+import java.util.zip.DataFormatException;
+import java.util.zip.Inflater;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipException;
+
+import org.apache.commons.compress.archivers.ArchiveEntry;
+import org.apache.commons.compress.archivers.ArchiveInputStream;
+import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream;
+import org.apache.commons.compress.compressors.deflate64.Deflate64CompressorInputStream;
+import org.apache.commons.compress.utils.ArchiveUtils;
+import org.apache.commons.compress.utils.IOUtils;
+import org.apache.commons.compress.utils.InputStreamStatistics;
+
+import static org.apache.commons.compress.archivers.zip.ZipConstants.DWORD;
+import static org.apache.commons.compress.archivers.zip.ZipConstants.SHORT;
+import static org.apache.commons.compress.archivers.zip.ZipConstants.WORD;
+import static org.apache.commons.compress.archivers.zip.ZipConstants.ZIP64_MAGIC;
+
+/**
+ * Implements an input stream that can read Zip archives.
+ *
+ * <p>As of Apache Commons Compress it transparently supports Zip64
+ * extensions and thus individual entries and archives larger than 4
+ * GB or with more than 65536 entries.</p>
+ *
+ * <p>The {@link ZipFile} class is preferred when reading from files
+ * as {@link ZipArchiveInputStream} is limited by not being able to
+ * read the central directory header before returning entries. In
+ * particular {@link ZipArchiveInputStream}</p>
+ *
+ * <ul>
+ *
+ * <li>may return entries that are not part of the central directory
+ * at all and shouldn't be considered part of the archive.</li>
+ *
+ * <li>may return several entries with the same name.</li>
+ *
+ * <li>will not return internal or external attributes.</li>
+ *
+ * <li>may return incomplete extra field data.</li>
+ *
+ * <li>may return unknown sizes and CRC values for entries until the
+ * next entry has been reached if the archive uses the data
+ * descriptor feature.</li>
+ *
+ * </ul>
+ *
+ * @see ZipFile
+ * @NotThreadSafe
+ */
+public class ZipArchiveInputStream extends ArchiveInputStream implements InputStreamStatistics {
+
+    /** The zip encoding to use for file names and the file comment. */
+    private final ZipEncoding zipEncoding;
+
+    // the provided encoding (for unit tests)
+    final String encoding;
+
+    /** Whether to look for and use Unicode extra fields. */
+    private final boolean useUnicodeExtraFields;
+
+    /** Wrapped stream, will always be a PushbackInputStream. */
+    private final InputStream in;
+
+    /** Inflater used for all deflated entries. */
+    private final Inflater inf = new Inflater(true);
+
+    /** Buffer used to read from the wrapped stream. */
+    private final ByteBuffer buf = ByteBuffer.allocate(ZipArchiveOutputStream.BUFFER_SIZE);
+
+    /** The entry that is currently being read. */
+    private CurrentEntry current = null;
+
+    /** Whether the stream has been closed. */
+    private boolean closed = false;
+
+    /** Whether the stream has reached the central directory - and thus found all entries. */
+    private boolean hitCentralDirectory = false;
+
+    /**
+     * When reading a stored entry that uses the data descriptor this
+     * stream has to read the full entry and caches it. This is the
+     * cache.
+     */
+    private ByteArrayInputStream lastStoredEntry = null;
+
+    /** Whether the stream will try to read STORED entries that use a data descriptor. */
+    private boolean allowStoredEntriesWithDataDescriptor = false;
+
+    /** Count decompressed bytes for current entry */
+    private long uncompressedCount = 0;
+
+    private static final int LFH_LEN = 30;
+    /*
+        local file header signature     WORD
+        version needed to extract       SHORT
+        general purpose bit flag        SHORT
+        compression method              SHORT
+        last mod file time              SHORT
+        last mod file date              SHORT
+        crc-32                          WORD
+        compressed size                 WORD
+        uncompressed size               WORD
+        file name length                SHORT
+        extra field length              SHORT
+    */
+
+    private static final int CFH_LEN = 46;
+    /*
+        central file header signature   WORD
+        version made by                 SHORT
+        version needed to extract       SHORT
+        general purpose bit flag        SHORT
+        compression method              SHORT
+        last mod file time              SHORT
+        last mod file date              SHORT
+        crc-32                          WORD
+        compressed size                 WORD
+        uncompressed size               WORD
+        file name length                SHORT
+        extra field length              SHORT
+        file comment length             SHORT
+        disk number start               SHORT
+        internal file attributes        SHORT
+        external file attributes        WORD
+        relative offset of local header WORD
+    */
+
+    private static final long TWO_EXP_32 = ZIP64_MAGIC + 1;
+
+    // cached buffers - must only be used locally in the class (COMPRESS-172 - reduce garbage collection)
+    private final byte[] lfhBuf = new byte[LFH_LEN];
+    private final byte[] skipBuf = new byte[1024];
+    private final byte[] shortBuf = new byte[SHORT];
+    private final byte[] wordBuf = new byte[WORD];
+    private final byte[] twoDwordBuf = new byte[2 * DWORD];
+
+    private int entriesRead = 0;
+
+    /**
+     * Create an instance using UTF-8 encoding
+     * @param inputStream the stream to wrap
+     */
+    public ZipArchiveInputStream(final InputStream inputStream) {
+        this(inputStream, ZipEncodingHelper.UTF8);
+    }
+
+    /**
+     * Create an instance using the specified encoding
+     * @param inputStream the stream to wrap
+     * @param encoding the encoding to use for file names, use null
+     * for the platform's default encoding
+     * @since 1.5
+     */
+    public ZipArchiveInputStream(final InputStream inputStream, final String encoding) {
+        this(inputStream, encoding, true);
+    }
+
+    /**
+     * Create an instance using the specified encoding
+     * @param inputStream the stream to wrap
+     * @param encoding the encoding to use for file names, use null
+     * for the platform's default encoding
+     * @param useUnicodeExtraFields whether to use InfoZIP Unicode
+     * Extra Fields (if present) to set the file names.
+     */
+    public ZipArchiveInputStream(final InputStream inputStream, final String encoding, final boolean useUnicodeExtraFields) {
+        this(inputStream, encoding, useUnicodeExtraFields, false);
+    }
+
+    /**
+     * Create an instance using the specified encoding
+     * @param inputStream the stream to wrap
+     * @param encoding the encoding to use for file names, use null
+     * for the platform's default encoding
+     * @param useUnicodeExtraFields whether to use InfoZIP Unicode
+     * Extra Fields (if present) to set the file names.
+     * @param allowStoredEntriesWithDataDescriptor whether the stream
+     * will try to read STORED entries that use a data descriptor
+     * @since 1.1
+     */
+    public ZipArchiveInputStream(final InputStream inputStream,
+                                 final String encoding,
+                                 final boolean useUnicodeExtraFields,
+                                 final boolean allowStoredEntriesWithDataDescriptor) {
+        this.encoding = encoding;
+        zipEncoding = ZipEncodingHelper.getZipEncoding(encoding);
+        this.useUnicodeExtraFields = useUnicodeExtraFields;
+        in = new PushbackInputStream(inputStream, buf.capacity());
+        this.allowStoredEntriesWithDataDescriptor =
+            allowStoredEntriesWithDataDescriptor;
+        // haven't read anything so far
+        buf.limit(0);
+    }
+
+    public ZipArchiveEntry getNextZipEntry() throws IOException {
+        uncompressedCount = 0;
+
+        boolean firstEntry = true;
+        if (closed || hitCentralDirectory) {
+            return null;
+        }
+        if (current != null) {
+            closeEntry();
+            firstEntry = false;
+        }
+
+        long currentHeaderOffset = getBytesRead();
+        try {
+            if (firstEntry) {
+                // split archives have a special signature before the
+                // first local file header - look for it and fail with
+                // the appropriate error message if this is a split
+                // archive.
+                readFirstLocalFileHeader(lfhBuf);
+            } else {
+                readFully(lfhBuf);
+            }
+        } catch (final EOFException e) { //NOSONAR
+            return null;
+        }
+
+        final ZipLong sig = new ZipLong(lfhBuf);
+        if (!sig.equals(ZipLong.LFH_SIG)) {
+            if (sig.equals(ZipLong.CFH_SIG) || sig.equals(ZipLong.AED_SIG) || isApkSigningBlock(lfhBuf)) {
+                hitCentralDirectory = true;
+                skipRemainderOfArchive();
+                return null;
+            }
+            throw new ZipException(String.format("Unexpected record signature: 0X%X", sig.getValue()));
+        }
+
+        int off = WORD;
+        current = new CurrentEntry();
+
+        final int versionMadeBy = ZipShort.getValue(lfhBuf, off);
+        off += SHORT;
+        current.entry.setPlatform((versionMadeBy >> ZipFile.BYTE_SHIFT) & ZipFile.NIBLET_MASK);
+
+        final GeneralPurposeBit gpFlag = GeneralPurposeBit.parse(lfhBuf, off);
+        final boolean hasUTF8Flag = gpFlag.usesUTF8ForNames();
+        final ZipEncoding entryEncoding = hasUTF8Flag ? ZipEncodingHelper.UTF8_ZIP_ENCODING : zipEncoding;
+        current.hasDataDescriptor = gpFlag.usesDataDescriptor();
+        current.entry.setGeneralPurposeBit(gpFlag);
+
+        off += SHORT;
+
+        current.entry.setMethod(ZipShort.getValue(lfhBuf, off));
+        off += SHORT;
+
+        final long time = ZipUtil.dosToJavaTime(ZipLong.getValue(lfhBuf, off));
+        current.entry.setTime(time);
+        off += WORD;
+
+        ZipLong size = null, cSize = null;
+        if (!current.hasDataDescriptor) {
+            current.entry.setCrc(ZipLong.getValue(lfhBuf, off));
+            off += WORD;
+
+            cSize = new ZipLong(lfhBuf, off);
+            off += WORD;
+
+            size = new ZipLong(lfhBuf, off);
+            off += WORD;
+        } else {
+            off += 3 * WORD;
+        }
+
+        final int fileNameLen = ZipShort.getValue(lfhBuf, off);
+
+        off += SHORT;
+
+        final int extraLen = ZipShort.getValue(lfhBuf, off);
+        off += SHORT; // NOSONAR - assignment as documentation
+
+        final byte[] fileName = new byte[fileNameLen];
+        readFully(fileName);
+        current.entry.setName(entryEncoding.decode(fileName), fileName);
+        if (hasUTF8Flag) {
+            current.entry.setNameSource(ZipArchiveEntry.NameSource.NAME_WITH_EFS_FLAG);
+        }
+
+        final byte[] extraData = new byte[extraLen];
+        readFully(extraData);
+        current.entry.setExtra(extraData);
+
+        if (!hasUTF8Flag && useUnicodeExtraFields) {
+            ZipUtil.setNameAndCommentFromExtraFields(current.entry, fileName, null);
+        }
+
+        processZip64Extra(size, cSize);
+
+        current.entry.setLocalHeaderOffset(currentHeaderOffset);
+        current.entry.setDataOffset(getBytesRead());
+        current.entry.setStreamContiguous(true);
+
+        ZipMethod m = ZipMethod.getMethodByCode(current.entry.getMethod());
+        if (current.entry.getCompressedSize() != ArchiveEntry.SIZE_UNKNOWN) {
+            if (ZipUtil.canHandleEntryData(current.entry) && m != ZipMethod.STORED && m != ZipMethod.DEFLATED) {
+                InputStream bis = new BoundedInputStream(in, current.entry.getCompressedSize());
+                switch (m) {
+                case UNSHRINKING:
+                    current.in = new UnshrinkingInputStream(bis);
+                    break;
+                case IMPLODING:
+                    current.in = new ExplodingInputStream(
+                        current.entry.getGeneralPurposeBit().getSlidingDictionarySize(),
+                        current.entry.getGeneralPurposeBit().getNumberOfShannonFanoTrees(),
+                        bis);
+                    break;
+                case BZIP2:
+                    current.in = new BZip2CompressorInputStream(bis);
+                    break;
+                case ENHANCED_DEFLATED:
+                    current.in = new Deflate64CompressorInputStream(bis);
+                    break;
+                default:
+                    // we should never get here as all supported methods have been covered
+                    // will cause an error when read is invoked, don't throw an exception here so people can
+                    // skip unsupported entries
+                    break;
+                }
+            }
+        } else if (m == ZipMethod.ENHANCED_DEFLATED) {
+            current.in = new Deflate64CompressorInputStream(in);
+        }
+
+        entriesRead++;
+        return current.entry;
+    }
+
+    /**
+     * Fills the given array with the first local file header and
+     * deals with splitting/spanning markers that may prefix the first
+     * LFH.
+     */
+    private void readFirstLocalFileHeader(final byte[] lfh) throws IOException {
+        readFully(lfh);
+        final ZipLong sig = new ZipLong(lfh);
+        if (sig.equals(ZipLong.DD_SIG)) {
+            throw new UnsupportedZipFeatureException(UnsupportedZipFeatureException.Feature.SPLITTING);
+        }
+
+        if (sig.equals(ZipLong.SINGLE_SEGMENT_SPLIT_MARKER)) {
+            // The archive is not really split as only one segment was
+            // needed in the end. Just skip over the marker.
+            final byte[] missedLfhBytes = new byte[4];
+            readFully(missedLfhBytes);
+            System.arraycopy(lfh, 4, lfh, 0, LFH_LEN - 4);
+            System.arraycopy(missedLfhBytes, 0, lfh, LFH_LEN - 4, 4);
+        }
+    }
+
+    /**
+     * Records whether a Zip64 extra is present and sets the size
+     * information from it if sizes are 0xFFFFFFFF and the entry
+     * doesn't use a data descriptor.
+     */
+    private void processZip64Extra(final ZipLong size, final ZipLong cSize) {
+        final Zip64ExtendedInformationExtraField z64 =
+            (Zip64ExtendedInformationExtraField)
+            current.entry.getExtraField(Zip64ExtendedInformationExtraField.HEADER_ID);
+        current.usesZip64 = z64 != null;
+        if (!current.hasDataDescriptor) {
+            if (z64 != null // same as current.usesZip64 but avoids NPE warning
+                && (ZipLong.ZIP64_MAGIC.equals(cSize) || ZipLong.ZIP64_MAGIC.equals(size)) ) {
+                current.entry.setCompressedSize(z64.getCompressedSize().getLongValue());
+                current.entry.setSize(z64.getSize().getLongValue());
+            } else if (cSize != null && size != null) {
+                current.entry.setCompressedSize(cSize.getValue());
+                current.entry.setSize(size.getValue());
+            }
+        }
+    }
+
+    @Override
+    public ArchiveEntry getNextEntry() throws IOException {
+        return getNextZipEntry();
+    }
+
+    /**
+     * Whether this class is able to read the given entry.
+     *
+     * <p>May return false if it is set up to use encryption or a
+     * compression method that hasn't been implemented yet.</p>
+     * @since 1.1
+     */
+    @Override
+    public boolean canReadEntryData(final ArchiveEntry ae) {
+        if (ae instanceof ZipArchiveEntry) {
+            final ZipArchiveEntry ze = (ZipArchiveEntry) ae;
+            return ZipUtil.canHandleEntryData(ze)
+                && supportsDataDescriptorFor(ze)
+                && supportsCompressedSizeFor(ze);
+        }
+        return false;
+    }
+
+    @Override
+    public int read(final byte[] buffer, final int offset, final int length) throws IOException {
+        if (length == 0) {
+            return 0;
+        }
+        if (closed) {
+            throw new IOException("The stream is closed");
+        }
+
+        if (current == null) {
+            return -1;
+        }
+
+        // avoid int overflow, check null buffer
+        if (offset > buffer.length || length < 0 || offset < 0 || buffer.length - offset < length) {
+            throw new ArrayIndexOutOfBoundsException();
+        }
+
+        ZipUtil.checkRequestedFeatures(current.entry);
+        if (!supportsDataDescriptorFor(current.entry)) {
+            throw new UnsupportedZipFeatureException(UnsupportedZipFeatureException.Feature.DATA_DESCRIPTOR,
+                current.entry);
+        }
+        if (!supportsCompressedSizeFor(current.entry)) {
+            throw new UnsupportedZipFeatureException(UnsupportedZipFeatureException.Feature.UNKNOWN_COMPRESSED_SIZE,
+                current.entry);
+        }
+
+        int read;
+        if (current.entry.getMethod() == ZipArchiveOutputStream.STORED) {
+            read = readStored(buffer, offset, length);
+        } else if (current.entry.getMethod() == ZipArchiveOutputStream.DEFLATED) {
+            read = readDeflated(buffer, offset, length);
+        } else if (current.entry.getMethod() == ZipMethod.UNSHRINKING.getCode()
+                || current.entry.getMethod() == ZipMethod.IMPLODING.getCode()
+                || current.entry.getMethod() == ZipMethod.ENHANCED_DEFLATED.getCode()
+                || current.entry.getMethod() == ZipMethod.BZIP2.getCode()) {
+            read = current.in.read(buffer, offset, length);
+        } else {
+            throw new UnsupportedZipFeatureException(ZipMethod.getMethodByCode(current.entry.getMethod()),
+                current.entry);
+        }
+
+        if (read >= 0) {
+            current.crc.update(buffer, offset, read);
+            uncompressedCount += read;
+        }
+
+        return read;
+    }
+
+    /**
+     * @since 1.17
+     */
+    @Override
+    public long getCompressedCount() {
+        if (current.entry.getMethod() == ZipArchiveOutputStream.STORED) {
+            return current.bytesRead;
+        } else if (current.entry.getMethod() == ZipArchiveOutputStream.DEFLATED) {
+            return getBytesInflated();
+        } else if (current.entry.getMethod() == ZipMethod.UNSHRINKING.getCode()) {
+            return ((UnshrinkingInputStream) current.in).getCompressedCount();
+        } else if (current.entry.getMethod() == ZipMethod.IMPLODING.getCode()) {
+            return ((ExplodingInputStream) current.in).getCompressedCount();
+        } else if (current.entry.getMethod() == ZipMethod.ENHANCED_DEFLATED.getCode()) {
+            return ((Deflate64CompressorInputStream) current.in).getCompressedCount();
+        } else if (current.entry.getMethod() == ZipMethod.BZIP2.getCode()) {
+            return ((BZip2CompressorInputStream) current.in).getCompressedCount();
+        } else {
+            return -1;
+        }
+    }
+
+    /**
+     * @since 1.17
+     */
+    @Override
+    public long getUncompressedCount() {
+        return uncompressedCount;
+    }
+
+    /**
+     * Implementation of read for STORED entries.
+     */
+    private int readStored(final byte[] buffer, final int offset, final int length) throws IOException {
+
+        if (current.hasDataDescriptor) {
+            if (lastStoredEntry == null) {
+                readStoredEntry();
+            }
+            return lastStoredEntry.read(buffer, offset, length);
+        }
+
+        final long csize = current.entry.getSize();
+        if (current.bytesRead >= csize) {
+            return -1;
+        }
+
+        if (buf.position() >= buf.limit()) {
+            buf.position(0);
+            final int l = in.read(buf.array());
+            if (l == -1) {
+                buf.limit(0);
+                throw new IOException("Truncated ZIP file");
+            }
+            buf.limit(l);
+
+            count(l);
+            current.bytesReadFromStream += l;
+        }
+
+        int toRead = Math.min(buf.remaining(), length);
+        if ((csize - current.bytesRead) < toRead) {
+            // if it is smaller than toRead then it fits into an int
+            toRead = (int) (csize - current.bytesRead);
+        }
+        buf.get(buffer, offset, toRead);
+        current.bytesRead += toRead;
+        return toRead;
+    }
+
+    /**
+     * Implementation of read for DEFLATED entries.
+     */
+    private int readDeflated(final byte[] buffer, final int offset, final int length) throws IOException {
+        final int read = readFromInflater(buffer, offset, length);
+        if (read <= 0) {
+            if (inf.finished()) {
+                return -1;
+            } else if (inf.needsDictionary()) {
+                throw new ZipException("This archive needs a preset dictionary"
+                    + " which is not supported by Commons"
+                    + " Compress.");
+            } else if (read == -1) {
+                throw new IOException("Truncated ZIP file");
+            }
+        }
+        return read;
+    }
+
+    /**
+     * Potentially reads more bytes to fill the inflater's buffer and
+     * reads from it.
+     */
+    private int readFromInflater(final byte[] buffer, final int offset, final int length) throws IOException {
+        int read = 0;
+        do {
+            if (inf.needsInput()) {
+                final int l = fill();
+                if (l > 0) {
+                    current.bytesReadFromStream += buf.limit();
+                } else if (l == -1) {
+                    return -1;
+                } else {
+                    break;
+                }
+            }
+            try {
+                read = inf.inflate(buffer, offset, length);
+            } catch (final DataFormatException e) {
+                throw (IOException) new ZipException(e.getMessage()).initCause(e);
+            }
+        } while (read == 0 && inf.needsInput());
+        return read;
+    }
+
+    @Override
+    public void close() throws IOException {
+        if (!closed) {
+            closed = true;
+            try {
+                in.close();
+            } finally {
+                inf.end();
+            }
+        }
+    }
+
+    /**
+     * Skips over and discards value bytes of data from this input
+     * stream.
+     *
+     * <p>This implementation may end up skipping over some smaller
+     * number of bytes, possibly 0, if and only if it reaches the end
+     * of the underlying stream.</p>
+     *
+     * <p>The actual number of bytes skipped is returned.</p>
+     *
+     * @param value the number of bytes to be skipped.
+     * @return the actual number of bytes skipped.
+     * @throws IOException - if an I/O error occurs.
+     * @throws IllegalArgumentException - if value is negative.
+     */
+    @Override
+    public long skip(final long value) throws IOException {
+        if (value >= 0) {
+            long skipped = 0;
+            while (skipped < value) {
+                final long rem = value - skipped;
+                final int x = read(skipBuf, 0, (int) (skipBuf.length > rem ? rem : skipBuf.length));
+                if (x == -1) {
+                    return skipped;
+                }
+                skipped += x;
+            }
+            return skipped;
+        }
+        throw new IllegalArgumentException();
+    }
638
+
639
+ /**
640
+ * Checks if the signature matches what is expected for a zip file.
641
+ * Does not currently handle self-extracting zips which may have arbitrary
642
+ * leading content.
643
+ *
644
+ * @param signature the bytes to check
645
+ * @param length the number of bytes to check
646
+ * @return true, if this stream is a zip archive stream, false otherwise
647
+ */
648
+ public static boolean matches(final byte[] signature, final int length) {
649
+ if (length < ZipArchiveOutputStream.LFH_SIG.length) {
650
+ return false;
651
+ }
652
+
653
+ return checksig(signature, ZipArchiveOutputStream.LFH_SIG) // normal file
654
+ || checksig(signature, ZipArchiveOutputStream.EOCD_SIG) // empty zip
655
+ || checksig(signature, ZipArchiveOutputStream.DD_SIG) // split zip
656
+ || checksig(signature, ZipLong.SINGLE_SEGMENT_SPLIT_MARKER.getBytes());
657
+ }
658
+
659
+ private static boolean checksig(final byte[] signature, final byte[] expected) {
660
+ for (int i = 0; i < expected.length; i++) {
661
+ if (signature[i] != expected[i]) {
662
+ return false;
663
+ }
664
+ }
665
+ return true;
666
+ }
667
+
668
+ /**
669
+ * Closes the current ZIP archive entry and positions the underlying
670
+ * stream to the beginning of the next entry. All per-entry variables
671
+ * and data structures are cleared.
672
+ * <p>
673
+ * If the compressed size of this entry is included in the entry header,
674
+ * then any outstanding bytes are simply skipped from the underlying
675
+ * stream without uncompressing them. This allows an entry to be safely
676
+ * closed even if the compression method is unsupported.
677
+ * <p>
678
+ * In case we don't know the compressed size of this entry or have
679
+ * already buffered too much data from the underlying stream to support
680
+ * uncompression, then the uncompression process is completed and the
681
+ * end position of the stream is adjusted based on the result of that
682
+ * process.
683
+ *
684
+ * @throws IOException if an error occurs
685
+ */
686
+ private void closeEntry() throws IOException {
687
+ if (closed) {
688
+ throw new IOException("The stream is closed");
689
+ }
690
+ if (current == null) {
691
+ return;
692
+ }
693
+
694
+ // Ensure all entry bytes are read
695
+ if (currentEntryHasOutstandingBytes()) {
696
+ drainCurrentEntryData();
697
+ } else {
698
+ // this is guaranteed to exhaust the stream
699
+ skip(Long.MAX_VALUE); //NOSONAR
700
+
701
+ final long inB = current.entry.getMethod() == ZipArchiveOutputStream.DEFLATED
702
+ ? getBytesInflated() : current.bytesRead;
703
+
704
+ // this is at most a single read() operation and can't
705
+ // exceed the range of int
706
+ final int diff = (int) (current.bytesReadFromStream - inB);
707
+
708
+ // Pushback any required bytes
709
+ if (diff > 0) {
710
+ pushback(buf.array(), buf.limit() - diff, diff);
711
+ current.bytesReadFromStream -= diff;
712
+ }
713
+
714
+ // Drain remainder of entry if not all data bytes were required
715
+ if (currentEntryHasOutstandingBytes()) {
716
+ drainCurrentEntryData();
717
+ }
718
+ }
719
+
720
+ if (lastStoredEntry == null && current.hasDataDescriptor) {
721
+ readDataDescriptor();
722
+ }
723
+
724
+ inf.reset();
725
+ buf.clear().flip();
726
+ current = null;
727
+ lastStoredEntry = null;
728
+ }
729
+
+ /**
+ * If the compressed size of the current entry is included in the entry header
+ * and there are any outstanding bytes in the underlying stream, then
+ * this returns true.
+ *
+ * @return true, if current entry is determined to have outstanding bytes, false otherwise
+ */
+ private boolean currentEntryHasOutstandingBytes() {
+ return current.bytesReadFromStream <= current.entry.getCompressedSize()
+ && !current.hasDataDescriptor;
+ }
+
+ /**
+ * Read all data of the current entry from the underlying stream
+ * that hasn't been read yet.
+ */
+ private void drainCurrentEntryData() throws IOException {
+ long remaining = current.entry.getCompressedSize() - current.bytesReadFromStream;
+ while (remaining > 0) {
+ final long n = in.read(buf.array(), 0, (int) Math.min(buf.capacity(), remaining));
+ if (n < 0) {
+ throw new EOFException("Truncated ZIP entry: "
+ + ArchiveUtils.sanitize(current.entry.getName()));
+ }
+ count(n);
+ remaining -= n;
+ }
+ }
+
+ /**
+ * Get the number of bytes Inflater has actually processed.
+ *
+ * <p>For Java &lt; Java7 the getBytes* methods in
+ * Inflater/Deflater seem to return unsigned ints rather than
+ * longs that start over with 0 at 2^32.</p>
+
+ * <p>The stream knows how many bytes it has read, but not how
+ * many the Inflater actually consumed - it should be between the
+ * total number of bytes read for the entry and the total number
+ * minus the last read operation. Here we just try to make the
+ * value close enough to the bytes we've read by assuming the
+ * number of bytes consumed must be smaller than (or equal to) the
+ * number of bytes read but not smaller by more than 2^32.</p>
+ */
+ private long getBytesInflated() {
+ long inB = inf.getBytesRead();
+ if (current.bytesReadFromStream >= TWO_EXP_32) {
+ while (inB + TWO_EXP_32 <= current.bytesReadFromStream) {
+ inB += TWO_EXP_32;
+ }
+ }
+ return inB;
+ }
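The wraparound correction in getBytesInflated can be sketched in isolation: given a possibly truncated 32-bit counter and the known (larger) number of bytes read from the stream, add multiples of 2^32 until the counter is within one wrap of the stream count. The class and method names below are illustrative, not part of Commons Compress.

```java
public class InflatedByteCount {
    static final long TWO_EXP_32 = 1L << 32; // 32-bit counters wrap here

    // Adjust a possibly-wrapped counter so it lies within 2^32 of the
    // known number of bytes read from the stream. bytesFromStream is
    // always >= the true value the counter represents.
    static long unwrap(long wrappedCounter, long bytesFromStream) {
        long value = wrappedCounter;
        if (bytesFromStream >= TWO_EXP_32) {
            while (value + TWO_EXP_32 <= bytesFromStream) {
                value += TWO_EXP_32;
            }
        }
        return value;
    }

    public static void main(String[] args) {
        // 5 GiB read from the stream, counter wrapped once:
        long fiveGiB = 5L * 1024 * 1024 * 1024;
        long wrapped = fiveGiB - TWO_EXP_32; // what a 32-bit counter reports
        if (unwrap(wrapped, fiveGiB) != fiveGiB) throw new AssertionError();
        // small values are returned untouched:
        if (unwrap(1234L, 1234L) != 1234L) throw new AssertionError();
        System.out.println("ok");
    }
}
```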
+
+ private int fill() throws IOException {
+ if (closed) {
+ throw new IOException("The stream is closed");
+ }
+ final int length = in.read(buf.array());
+ if (length > 0) {
+ buf.limit(length);
+ count(buf.limit());
+ inf.setInput(buf.array(), 0, buf.limit());
+ }
+ return length;
+ }
+
+ private void readFully(final byte[] b) throws IOException {
+ readFully(b, 0);
+ }
+
+ private void readFully(final byte[] b, final int off) throws IOException {
+ final int len = b.length - off;
+ final int count = IOUtils.readFully(in, b, off, len);
+ count(count);
+ if (count < len) {
+ throw new EOFException();
+ }
+ }
+
+ private void readDataDescriptor() throws IOException {
+ readFully(wordBuf);
+ ZipLong val = new ZipLong(wordBuf);
+ if (ZipLong.DD_SIG.equals(val)) {
+ // data descriptor with signature, skip sig
+ readFully(wordBuf);
+ val = new ZipLong(wordBuf);
+ }
+ current.entry.setCrc(val.getValue());
+
+ // if there is a ZIP64 extra field, sizes are eight bytes
+ // each, otherwise four bytes each. Unfortunately some
+ // implementations - namely Java7 - use eight bytes without
+ // using a ZIP64 extra field -
+ // https://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7073588
+
+ // just read 16 bytes and check whether bytes nine to twelve
+ // look like one of the signatures of what could follow a data
+ // descriptor (ignoring archive decryption headers for now).
+ // If so, push back eight bytes and assume sizes are four
+ // bytes, otherwise sizes are eight bytes each.
+ readFully(twoDwordBuf);
+ final ZipLong potentialSig = new ZipLong(twoDwordBuf, DWORD);
+ if (potentialSig.equals(ZipLong.CFH_SIG) || potentialSig.equals(ZipLong.LFH_SIG)) {
+ pushback(twoDwordBuf, DWORD, DWORD);
+ current.entry.setCompressedSize(ZipLong.getValue(twoDwordBuf));
+ current.entry.setSize(ZipLong.getValue(twoDwordBuf, WORD));
+ } else {
+ current.entry.setCompressedSize(ZipEightByteInteger.getLongValue(twoDwordBuf));
+ current.entry.setSize(ZipEightByteInteger.getLongValue(twoDwordBuf, DWORD));
+ }
+ }
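readDataDescriptor leans on ZipLong and ZipEightByteInteger to decode the little-endian fields that ZIP uses throughout; the data descriptor signature 0x08074b50, for instance, appears on disk as the bytes "PK\007\010". As a rough stdlib-only illustration of that decoding (class and method names here are made up, not the Commons Compress API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianFields {
    // 4-byte unsigned little-endian value (what ZipLong decodes).
    static long readWord(byte[] b, int off) {
        return ByteBuffer.wrap(b, off, 4)
                .order(ByteOrder.LITTLE_ENDIAN).getInt() & 0xffffffffL;
    }

    // 8-byte little-endian value (what ZipEightByteInteger decodes).
    static long readDword(byte[] b, int off) {
        return ByteBuffer.wrap(b, off, 8)
                .order(ByteOrder.LITTLE_ENDIAN).getLong();
    }

    public static void main(String[] args) {
        // "PK\007\010" decodes to the data descriptor signature:
        byte[] ddSig = {0x50, 0x4b, 0x07, 0x08};
        if (readWord(ddSig, 0) != 0x08074b50L) throw new AssertionError();
        byte[] size = {1, 0, 0, 0, 0, 0, 0, 0};
        if (readDword(size, 0) != 1L) throw new AssertionError();
        System.out.println("ok");
    }
}
```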
+
+ /**
+ * Whether this entry requires a data descriptor this library can work with.
+ *
+ * @return true if allowStoredEntriesWithDataDescriptor is true,
+ * the entry doesn't require any data descriptor or the method is
+ * DEFLATED or ENHANCED_DEFLATED.
+ */
+ private boolean supportsDataDescriptorFor(final ZipArchiveEntry entry) {
+ return !entry.getGeneralPurposeBit().usesDataDescriptor()
+ || (allowStoredEntriesWithDataDescriptor && entry.getMethod() == ZipEntry.STORED)
+ || entry.getMethod() == ZipEntry.DEFLATED
+ || entry.getMethod() == ZipMethod.ENHANCED_DEFLATED.getCode();
+ }
+
+ /**
+ * Whether the compressed size for the entry is either known or
+ * not required by the compression method being used.
+ */
+ private boolean supportsCompressedSizeFor(final ZipArchiveEntry entry) {
+ return entry.getCompressedSize() != ArchiveEntry.SIZE_UNKNOWN
+ || entry.getMethod() == ZipEntry.DEFLATED
+ || entry.getMethod() == ZipMethod.ENHANCED_DEFLATED.getCode()
+ || (entry.getGeneralPurposeBit().usesDataDescriptor()
+ && allowStoredEntriesWithDataDescriptor
+ && entry.getMethod() == ZipEntry.STORED);
+ }
+
+ private static final String USE_ZIPFILE_INSTEAD_OF_STREAM_DISCLAIMER =
+ " while reading a stored entry using data descriptor. Either the archive is broken"
+ + " or it can not be read using ZipArchiveInputStream and you must use ZipFile."
+ + " A common cause for this is a ZIP archive containing a ZIP archive."
+ + " See http://commons.apache.org/proper/commons-compress/zip.html#ZipArchiveInputStream_vs_ZipFile";
+
+ /**
+ * Caches a stored entry that uses the data descriptor.
+ *
+ * <ul>
+ * <li>Reads a stored entry until the signature of a local file
+ * header, central directory header or data descriptor has been
+ * found.</li>
+ * <li>Stores all entry data in lastStoredEntry.</li>
+ * <li>Rewinds the stream to position at the data
+ * descriptor.</li>
+ * <li>Reads the data descriptor.</li>
+ * </ul>
+ *
+ * <p>After calling this method the entry should know its size,
+ * the entry's data is cached and the stream is positioned at the
+ * next local file or central directory header.</p>
+ */
+ private void readStoredEntry() throws IOException {
+ final ByteArrayOutputStream bos = new ByteArrayOutputStream();
+ int off = 0;
+ boolean done = false;
+
+ // length of DD without signature
+ final int ddLen = current.usesZip64 ? WORD + 2 * DWORD : 3 * WORD;
+
+ while (!done) {
+ final int r = in.read(buf.array(), off, ZipArchiveOutputStream.BUFFER_SIZE - off);
+ if (r <= 0) {
+ // read the whole archive without ever finding a
+ // central directory
+ throw new IOException("Truncated ZIP file");
+ }
+ if (r + off < 4) {
+ // buffer too small to check for a signature, loop
+ off += r;
+ continue;
+ }
+
+ done = bufferContainsSignature(bos, off, r, ddLen);
+ if (!done) {
+ off = cacheBytesRead(bos, off, r, ddLen);
+ }
+ }
+ if (current.entry.getCompressedSize() != current.entry.getSize()) {
+ throw new ZipException("compressed and uncompressed size don't match"
+ + USE_ZIPFILE_INSTEAD_OF_STREAM_DISCLAIMER);
+ }
+ final byte[] b = bos.toByteArray();
+ if (b.length != current.entry.getSize()) {
+ throw new ZipException("actual and claimed size don't match"
+ + USE_ZIPFILE_INSTEAD_OF_STREAM_DISCLAIMER);
+ }
+ lastStoredEntry = new ByteArrayInputStream(b);
+ }
+
+ private static final byte[] LFH = ZipLong.LFH_SIG.getBytes();
+ private static final byte[] CFH = ZipLong.CFH_SIG.getBytes();
+ private static final byte[] DD = ZipLong.DD_SIG.getBytes();
+
+ /**
+ * Checks whether the current buffer contains the signature of a
+ * &quot;data descriptor&quot;, &quot;local file header&quot; or
+ * &quot;central directory entry&quot;.
+ *
+ * <p>If it contains such a signature, reads the data descriptor
+ * and positions the stream right after the data descriptor.</p>
+ */
+ private boolean bufferContainsSignature(final ByteArrayOutputStream bos, final int offset, final int lastRead, final int expectedDDLen)
+ throws IOException {
+
+ boolean done = false;
+ for (int i = 0; !done && i < offset + lastRead - 4; i++) {
+ if (buf.array()[i] == LFH[0] && buf.array()[i + 1] == LFH[1]) {
+ int expectDDPos = i;
+ if (i >= expectedDDLen &&
+ (buf.array()[i + 2] == LFH[2] && buf.array()[i + 3] == LFH[3])
+ || (buf.array()[i + 2] == CFH[2] && buf.array()[i + 3] == CFH[3])) {
+ // found a LFH or CFH:
+ expectDDPos = i - expectedDDLen;
+ done = true;
+ }
+ else if (buf.array()[i + 2] == DD[2] && buf.array()[i + 3] == DD[3]) {
+ // found DD:
+ done = true;
+ }
+ if (done) {
+ // * push back bytes read in excess as well as the data
+ // descriptor
+ // * copy the remaining bytes to cache
+ // * read data descriptor
+ pushback(buf.array(), expectDDPos, offset + lastRead - expectDDPos);
+ bos.write(buf.array(), 0, expectDDPos);
+ readDataDescriptor();
+ }
+ }
+ }
+ return done;
+ }
+
+ /**
+ * If the last read bytes could hold a data descriptor and an
+ * incomplete signature then save the last bytes to the front of
+ * the buffer and cache everything in front of the potential data
+ * descriptor into the given ByteArrayOutputStream.
+ *
+ * <p>Data descriptor plus incomplete signature (3 bytes in the
+ * worst case) can be 20 bytes max.</p>
+ */
+ private int cacheBytesRead(final ByteArrayOutputStream bos, int offset, final int lastRead, final int expecteDDLen) {
+ final int cacheable = offset + lastRead - expecteDDLen - 3;
+ if (cacheable > 0) {
+ bos.write(buf.array(), 0, cacheable);
+ System.arraycopy(buf.array(), cacheable, buf.array(), 0, expecteDDLen + 3);
+ offset = expecteDDLen + 3;
+ } else {
+ offset += lastRead;
+ }
+ return offset;
+ }
+
+ private void pushback(final byte[] buf, final int offset, final int length) throws IOException {
+ ((PushbackInputStream) in).unread(buf, offset, length);
+ pushedBackBytes(length);
+ }
+
+ // End of Central Directory Record
+ // end of central dir signature WORD
+ // number of this disk SHORT
+ // number of the disk with the
+ // start of the central directory SHORT
+ // total number of entries in the
+ // central directory on this disk SHORT
+ // total number of entries in
+ // the central directory SHORT
+ // size of the central directory WORD
+ // offset of start of central
+ // directory with respect to
+ // the starting disk number WORD
+ // .ZIP file comment length SHORT
+ // .ZIP file comment up to 64KB
+ //
+
+ /**
+ * Reads the stream until it finds the "End of central directory
+ * record" and consumes it as well.
+ */
+ private void skipRemainderOfArchive() throws IOException {
+ // skip over central directory. One LFH has been read too much
+ // already. The calculation discounts file names and extra
+ // data so it will be too short.
+ realSkip((long) entriesRead * CFH_LEN - LFH_LEN);
+ findEocdRecord();
+ realSkip((long) ZipFile.MIN_EOCD_SIZE - WORD /* signature */ - SHORT /* comment len */);
+ readFully(shortBuf);
+ // file comment
+ realSkip(ZipShort.getValue(shortBuf));
+ }
+
+ /**
+ * Reads forward until the signature of the &quot;End of central
+ * directory&quot; record is found.
+ */
+ private void findEocdRecord() throws IOException {
+ int currentByte = -1;
+ boolean skipReadCall = false;
+ while (skipReadCall || (currentByte = readOneByte()) > -1) {
+ skipReadCall = false;
+ if (!isFirstByteOfEocdSig(currentByte)) {
+ continue;
+ }
+ currentByte = readOneByte();
+ if (currentByte != ZipArchiveOutputStream.EOCD_SIG[1]) {
+ if (currentByte == -1) {
+ break;
+ }
+ skipReadCall = isFirstByteOfEocdSig(currentByte);
+ continue;
+ }
+ currentByte = readOneByte();
+ if (currentByte != ZipArchiveOutputStream.EOCD_SIG[2]) {
+ if (currentByte == -1) {
+ break;
+ }
+ skipReadCall = isFirstByteOfEocdSig(currentByte);
+ continue;
+ }
+ currentByte = readOneByte();
+ if (currentByte == -1
+ || currentByte == ZipArchiveOutputStream.EOCD_SIG[3]) {
+ break;
+ }
+ skipReadCall = isFirstByteOfEocdSig(currentByte);
+ }
+ }
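The byte-at-a-time scan in findEocdRecord, including the skipReadCall restart when a mismatched byte happens to be the first signature byte again, is equivalent to a plain sliding search for the four-byte EOCD signature "PK\005\006" over a buffer. A minimal sketch under that assumption (the class and method names are illustrative):

```java
public class EocdScan {
    // End-of-central-directory signature, 0x06054b50 little-endian on disk.
    static final byte[] EOCD_SIG = {0x50, 0x4b, 0x05, 0x06};

    // Index of the first EOCD signature in data, or -1 if absent.
    static int indexOfEocd(byte[] data) {
        for (int i = 0; i + 4 <= data.length; i++) {
            if (data[i] == EOCD_SIG[0] && data[i + 1] == EOCD_SIG[1]
                    && data[i + 2] == EOCD_SIG[2] && data[i + 3] == EOCD_SIG[3]) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // A partial match ("PK" then "PK\005\006") must not derail the scan:
        byte[] tail = {0, 0x50, 0x4b, 0x50, 0x4b, 0x05, 0x06, 0, 0};
        if (indexOfEocd(tail) != 3) throw new AssertionError();
        if (indexOfEocd(new byte[] {0x50, 0x4b}) != -1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The streaming version cannot back up, which is why it re-checks a mismatched byte instead of re-reading; the array form gets the same effect by advancing one position at a time.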
+
+ /**
+ * Skips bytes by reading from the underlying stream rather than
+ * the (potentially inflating) archive stream - which {@link
+ * #skip} would do.
+ *
+ * Also updates bytes-read counter.
+ */
+ private void realSkip(final long value) throws IOException {
+ if (value >= 0) {
+ long skipped = 0;
+ while (skipped < value) {
+ final long rem = value - skipped;
+ final int x = in.read(skipBuf, 0, (int) (skipBuf.length > rem ? rem : skipBuf.length));
+ if (x == -1) {
+ return;
+ }
+ count(x);
+ skipped += x;
+ }
+ return;
+ }
+ throw new IllegalArgumentException();
+ }
+
+ /**
+ * Reads bytes by reading from the underlying stream rather than
+ * the (potentially inflating) archive stream - which {@link #read} would do.
+ *
+ * Also updates bytes-read counter.
+ */
+ private int readOneByte() throws IOException {
+ final int b = in.read();
+ if (b != -1) {
+ count(1);
+ }
+ return b;
+ }
+
+ private boolean isFirstByteOfEocdSig(final int b) {
+ return b == ZipArchiveOutputStream.EOCD_SIG[0];
+ }
+
+ private static final byte[] APK_SIGNING_BLOCK_MAGIC = new byte[] {
+ 'A', 'P', 'K', ' ', 'S', 'i', 'g', ' ', 'B', 'l', 'o', 'c', 'k', ' ', '4', '2',
+ };
+ private static final BigInteger LONG_MAX = BigInteger.valueOf(Long.MAX_VALUE);
+
+ /**
+ * Checks whether this might be an APK Signing Block.
+ *
+ * <p>Unfortunately the APK signing block does not start with some kind of signature, it rather ends with one. It
+ * starts with a length, so what we do is parse the suspect length, skip ahead far enough, look for the signature
+ * and if we've found it, return true.</p>
+ *
+ * @param suspectLocalFileHeader the bytes read from the underlying stream in the expectation that they would hold
+ * the local file header of the next entry.
+ *
+ * @return true if this looks like an APK signing block
+ *
+ * @see <a href="https://source.android.com/security/apksigning/v2">https://source.android.com/security/apksigning/v2</a>
+ */
+ private boolean isApkSigningBlock(byte[] suspectLocalFileHeader) throws IOException {
+ // length of block excluding the size field itself
+ BigInteger len = ZipEightByteInteger.getValue(suspectLocalFileHeader);
+ // LFH has already been read and all but the first eight bytes contain (part of) the APK signing block,
+ // also subtract 16 bytes in order to position us at the magic string
+ BigInteger toSkip = len.add(BigInteger.valueOf(DWORD - suspectLocalFileHeader.length
+ - (long) APK_SIGNING_BLOCK_MAGIC.length));
+ byte[] magic = new byte[APK_SIGNING_BLOCK_MAGIC.length];
+
+ try {
+ if (toSkip.signum() < 0) {
+ // suspectLocalFileHeader contains the start of suspect magic string
+ int off = suspectLocalFileHeader.length + toSkip.intValue();
+ // length was shorter than magic length
+ if (off < DWORD) {
+ return false;
+ }
+ int bytesInBuffer = Math.abs(toSkip.intValue());
+ System.arraycopy(suspectLocalFileHeader, off, magic, 0, Math.min(bytesInBuffer, magic.length));
+ if (bytesInBuffer < magic.length) {
+ readFully(magic, bytesInBuffer);
+ }
+ } else {
+ while (toSkip.compareTo(LONG_MAX) > 0) {
+ realSkip(Long.MAX_VALUE);
+ toSkip = toSkip.add(LONG_MAX.negate());
+ }
+ realSkip(toSkip.longValue());
+ readFully(magic);
+ }
+ } catch (EOFException ex) { //NOSONAR
+ // length was invalid
+ return false;
+ }
+ return Arrays.equals(magic, APK_SIGNING_BLOCK_MAGIC);
+ }
+
+ /**
+ * Structure collecting information for the entry that is
+ * currently being read.
+ */
+ private static final class CurrentEntry {
+
+ /**
+ * Current ZIP entry.
+ */
+ private final ZipArchiveEntry entry = new ZipArchiveEntry();
+
+ /**
+ * Does the entry use a data descriptor?
+ */
+ private boolean hasDataDescriptor;
+
+ /**
+ * Does the entry have a ZIP64 extended information extra field?
+ */
+ private boolean usesZip64;
+
+ /**
+ * Number of bytes of entry content read by the client if the
+ * entry is STORED.
+ */
+ private long bytesRead;
+
+ /**
+ * Number of bytes of entry content read from the stream.
+ *
+ * <p>This may be more than the actual entry's length as some
+ * stuff gets buffered up and needs to be pushed back when the
+ * end of the entry has been reached.</p>
+ */
+ private long bytesReadFromStream;
+
+ /**
+ * The checksum calculated as the current entry is read.
+ */
+ private final CRC32 crc = new CRC32();
+
+ /**
+ * The input stream decompressing the data for shrunk and imploded entries.
+ */
+ private InputStream in;
+ }
+
+ /**
+ * Bounded input stream adapted from commons-io
+ */
+ private class BoundedInputStream extends InputStream {
+
+ /** the wrapped input stream */
+ private final InputStream in;
+
+ /** the max length to provide */
+ private final long max;
+
+ /** the number of bytes already returned */
+ private long pos = 0;
+
+ /**
+ * Creates a new <code>BoundedInputStream</code> that wraps the given input
+ * stream and limits it to a certain size.
+ *
+ * @param in The wrapped input stream
+ * @param size The maximum number of bytes to return
+ */
+ public BoundedInputStream(final InputStream in, final long size) {
+ this.max = size;
+ this.in = in;
+ }
+
+ @Override
+ public int read() throws IOException {
+ if (max >= 0 && pos >= max) {
+ return -1;
+ }
+ final int result = in.read();
+ pos++;
+ count(1);
+ current.bytesReadFromStream++;
+ return result;
+ }
+
+ @Override
+ public int read(final byte[] b) throws IOException {
+ return this.read(b, 0, b.length);
+ }
+
+ @Override
+ public int read(final byte[] b, final int off, final int len) throws IOException {
+ if (len == 0) {
+ return 0;
+ }
+ if (max >= 0 && pos >= max) {
+ return -1;
+ }
+ final long maxRead = max >= 0 ? Math.min(len, max - pos) : len;
+ final int bytesRead = in.read(b, off, (int) maxRead);
+
+ if (bytesRead == -1) {
+ return -1;
+ }
+
+ pos += bytesRead;
+ count(bytesRead);
+ current.bytesReadFromStream += bytesRead;
+ return bytesRead;
+ }
+
+ @Override
+ public long skip(final long n) throws IOException {
+ final long toSkip = max >= 0 ? Math.min(n, max - pos) : n;
+ final long skippedBytes = IOUtils.skip(in, toSkip);
+ pos += skippedBytes;
+ return skippedBytes;
+ }
+
+ @Override
+ public int available() throws IOException {
+ if (max >= 0 && pos >= max) {
+ return 0;
+ }
+ return in.available();
+ }
+ }
+ }
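The bounded wrapper above is entangled with its enclosing stream through count() and current.bytesReadFromStream. Stripped of that bookkeeping, the core idea is just "report EOF once max bytes have been handed out". A self-contained sketch with illustrative names:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class Bounded {
    // Minimal bound-enforcing wrapper: returns EOF once `max` bytes
    // have been handed out, regardless of what the wrapped stream holds.
    static class BoundedInputStream extends FilterInputStream {
        private final long max;
        private long pos;

        BoundedInputStream(InputStream in, long max) {
            super(in);
            this.max = max;
        }

        @Override
        public int read() throws IOException {
            if (pos >= max) return -1;
            int b = super.read();
            if (b != -1) pos++;
            return b;
        }

        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            if (pos >= max) return -1;
            // Never request more than the remaining budget.
            int n = super.read(b, off, (int) Math.min(len, max - pos));
            if (n > 0) pos += n;
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new BoundedInputStream(
                new ByteArrayInputStream("hello world".getBytes()), 5);
        byte[] buf = new byte[16];
        int n = in.read(buf, 0, buf.length);
        if (n != 5) throw new AssertionError();
        if (in.read() != -1) throw new AssertionError();
        System.out.println(new String(buf, 0, n)); // hello
    }
}
```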
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/COMPRESS-477/split_zip_created_by_zip/file_to_compare_1 ADDED
@@ -0,0 +1,38 @@
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+ package org.apache.commons.compress.archivers.dump;
+
+
+ /**
+ * Unsupported compression algorithm. The dump archive uses an unsupported
+ * compression algorithm (BZLIB2 or LZO).
+ */
+ public class UnsupportedCompressionAlgorithmException
+ extends DumpArchiveException {
+ private static final long serialVersionUID = 1L;
+
+ public UnsupportedCompressionAlgorithmException() {
+ super("this file uses an unsupported compression algorithm.");
+ }
+
+ public UnsupportedCompressionAlgorithmException(final String alg) {
+ super("this file uses an unsupported compression algorithm: " + alg +
+ ".");
+ }
+ }
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/COMPRESS-477/split_zip_created_by_zip/file_to_compare_2 ADDED
@@ -0,0 +1,79 @@
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+ package org.apache.commons.compress.compressors.deflate;
+
+ import java.util.zip.Deflater;
+
+ /**
+ * Parameters for the Deflate compressor.
+ * @since 1.9
+ */
+ public class DeflateParameters {
+
+ private boolean zlibHeader = true;
+ private int compressionLevel = Deflater.DEFAULT_COMPRESSION;
+
+ /**
+ * Whether or not the zlib header shall be written (when
+ * compressing) or expected (when decompressing).
+ * @return true if zlib header shall be written
+ */
+ public boolean withZlibHeader() {
+ return zlibHeader;
+ }
+
+ /**
+ * Sets the zlib header presence parameter.
+ *
+ * <p>This affects whether or not the zlib header will be written
+ * (when compressing) or expected (when decompressing).</p>
+ *
+ * @param zlibHeader true if zlib header shall be written
+ */
+ public void setWithZlibHeader(final boolean zlibHeader) {
+ this.zlibHeader = zlibHeader;
+ }
+
+ /**
+ * The compression level.
+ * @see #setCompressionLevel
+ * @return the compression level
+ */
+ public int getCompressionLevel() {
+ return compressionLevel;
+ }
+
+ /**
+ * Sets the compression level.
+ *
+ * @param compressionLevel the compression level (between -1 and 9)
+ * @see Deflater#NO_COMPRESSION
+ * @see Deflater#BEST_SPEED
+ * @see Deflater#DEFAULT_COMPRESSION
+ * @see Deflater#BEST_COMPRESSION
+ */
+ public void setCompressionLevel(final int compressionLevel) {
+ if (compressionLevel < -1 || compressionLevel > 9) {
+ throw new IllegalArgumentException("Invalid Deflate compression level: " + compressionLevel);
+ }
+ this.compressionLevel = compressionLevel;
+ }
+
+ }
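The -1..9 range that setCompressionLevel validates is the same one java.util.zip.Deflater.setLevel accepts (-1 being DEFAULT_COMPRESSION). A quick stdlib check of that assumption:

```java
import java.util.zip.Deflater;

public class DeflateLevels {
    public static void main(String[] args) {
        Deflater d = new Deflater();
        // Every level in -1..9 is accepted by Deflater.setLevel ...
        for (int level = -1; level <= 9; level++) {
            d.setLevel(level);
        }
        // ... and anything outside that range is rejected.
        boolean rejected = false;
        try {
            d.setLevel(10);
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        d.end();
        if (!rejected) throw new AssertionError();
        System.out.println("ok");
    }
}
```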
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_length-fail.ar ADDED
@@ -0,0 +1,8 @@
+ !<arch>
+ // 68 `
+ this_is_a_long_file_name.txt/
+ this_is_a_long_file_name_as_well.txt/
+ /0 1454693980 1000 1000 100664 1.23 `
+ Hello, world!
+ /30 1454694016 1000 1000 100664 4 `
+ Bye
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_bsd-fail.ar ADDED
@@ -0,0 +1,5 @@
+ !<arch>
+ #1/123456789012 1311256511 1000 1000 100644 42 `
+ this_is_a_long_file_name.txtHello, world!
+ #1/36 1454694016 1000 1000 100664 40 `
+ this_is_a_long_file_name_as_well.txtBye
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_gnu1-fail.ar ADDED
@@ -0,0 +1,8 @@
+ !<arch>
+ // 68 `
+ this_is_a_long_file_name.txt/
+ this_is_a_long_file_name_as_well.txt/
+ /9999999999 1454693980 1000 1000 100664 14 `
+ Hello, world!
+ /30 1454694016 1000 1000 100664 4 `
+ Bye
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_gnu2-fail.ar ADDED
@@ -0,0 +1,6 @@
+ !<arch>
+ // 68 `
+ this_is_a_long_file_name.txt/
+ this_is_a_long_file_name_as_well.txt/
+ /29 1454694016 1000 1000 100664 4 `
+ Bye
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_long_namelen_gnu3-fail.ar ADDED
Binary file (274 Bytes).
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ar/number_parsing/bad_user-fail.ar ADDED
@@ -0,0 +1,8 @@
+ !<arch>
+ // 68 `
+ this_is_a_long_file_name.txt/
+ this_is_a_long_file_name_as_well.txt/
+ /0 1454693980 9e99 1000 100664 14 `
+ Hello, world!
+ /30 1454694016 1000 1000 100664 4 `
+ Bye
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/fuzz/crash-f2efd9eaeb86cda597d07b5e3c3d81363633c2da ADDED
Binary file (8.94 kB).
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-daemon/procrunr.ico ADDED
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-daemon/procruns.ico ADDED
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-daemon/procrunw.ico ADDED
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/ico/commons-imaging/OutOfMemory_epine.ico ADDED
local-test-commons-compress-delta-02/afc-commons-compress/src/test/resources/org/apache/commons/compress/pack/signatures_oom.pack ADDED
Binary file (121 Bytes).
local-test-commons-compress-delta-02/fuzz-tooling/infra/build_specified_commit.py ADDED
@@ -0,0 +1,410 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2019 Google LLC
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ """Module to build a image from a specific commit, branch or pull request.
15
+
16
+ This module is allows each of the OSS Fuzz projects fuzzers to be built
17
+ from a specific point in time. This feature can be used for implementations
18
+ like continuious integration fuzzing and bisection to find errors
19
+ """
20
+ import argparse
21
+ import bisect
22
+ import datetime
23
+ import os
24
+ import collections
25
+ import json
26
+ import logging
27
+ import re
28
+ import shutil
29
+ import tempfile
30
+
31
+ import helper
32
+ import repo_manager
33
+ import retry
34
+ import utils
35
+
36
+ BuildData = collections.namedtuple(
37
+ 'BuildData', ['project_name', 'engine', 'sanitizer', 'architecture'])
38
+
39
+ _GIT_DIR_MARKER = 'gitdir: '
40
+ _IMAGE_BUILD_TRIES = 3
41
+
42
+
43
+ class BaseBuilderRepo:
44
+ """Repo of base-builder images."""
45
+
46
+ def __init__(self):
47
+ self.timestamps = []
48
+ self.digests = []
49
+
50
+ def add_digest(self, timestamp, digest):
51
+ """Add a digest."""
52
+ self.timestamps.append(timestamp)
53
+ self.digests.append(digest)
54
+
55
+ def find_digest(self, timestamp):
56
+ """Find the latest image before the given timestamp."""
57
+ index = bisect.bisect_right(self.timestamps, timestamp)
58
+ if index > 0:
59
+ return self.digests[index - 1]
60
+
61
+ logging.error('Failed to find suitable base-builder.')
62
+ return None
63
+
64
+
65
+ def _replace_gitdir(src_dir, file_path):
66
+ """Replace gitdir with a relative path."""
67
+ with open(file_path) as handle:
68
+ lines = handle.readlines()
69
+
70
+ new_lines = []
71
+ for line in lines:
72
+ if line.startswith(_GIT_DIR_MARKER):
73
+ absolute_path = line[len(_GIT_DIR_MARKER):].strip()
74
+ if not os.path.isabs(absolute_path):
75
+ # Already relative.
76
+ return
77
+
78
+ current_dir = os.path.dirname(file_path)
79
+ # Rebase to /src rather than the host src dir.
80
+ base_dir = current_dir.replace(src_dir, '/src')
81
+ relative_path = os.path.relpath(absolute_path, base_dir)
82
+ logging.info('Replacing absolute submodule gitdir from %s to %s',
83
+ absolute_path, relative_path)
84
+
85
+ line = _GIT_DIR_MARKER + relative_path
86
+
87
+ new_lines.append(line)
88
+
89
+ with open(file_path, 'w') as handle:
90
+ handle.write(''.join(new_lines))
91
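
The path rebase inside `_replace_gitdir` can be illustrated in isolation. The gitdir path was originally written inside the container (where the checkout lived under `/src`), so the host directory is first rebased to `/src` before computing the relative path. All paths below are hypothetical:

```python
import os

# Hypothetical paths: the src dir copied to the host, the host directory
# containing a submodule's .git file, and the container-absolute gitdir
# recorded inside that file.
host_src = '/tmp/build/src'
current_dir = host_src + '/project/submodule'
absolute_path = '/src/project/.git/modules/submodule'

# Rebase the host directory to its container location, then relativize.
base_dir = current_dir.replace(host_src, '/src')
relative_path = os.path.relpath(absolute_path, base_dir)
print(relative_path)
```

The resulting relative gitdir stays valid whether the checkout is mounted at `/src` in the container or sits under the temporary host directory.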
+
92
+
93
+ def _make_gitdirs_relative(src_dir):
94
+ """Make gitdirs relative."""
95
+ for root_dir, _, files in os.walk(src_dir):
96
+ for filename in files:
97
+ if filename != '.git':
98
+ continue
99
+
100
+ file_path = os.path.join(root_dir, filename)
101
+ _replace_gitdir(src_dir, file_path)
102
+
103
+
104
+ def _replace_base_builder_digest(dockerfile_path, digest):
105
+ """Replace the base-builder digest in a Dockerfile."""
106
+ with open(dockerfile_path) as handle:
107
+ lines = handle.readlines()
108
+
109
+ new_lines = []
110
+ for line in lines:
111
+ if line.strip().startswith('FROM'):
112
+ line = 'FROM ghcr.io/aixcc-finals/base-builder@' + digest + '\n'
113
+
114
+ new_lines.append(line)
115
+
116
+ with open(dockerfile_path, 'w') as handle:
117
+ handle.write(''.join(new_lines))
118
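
The `FROM`-line rewrite in `_replace_base_builder_digest` can be exercised on an in-memory Dockerfile; the digest below is a hypothetical value:

```python
# Sketch of the FROM-line pinning done by _replace_base_builder_digest,
# applied to a throwaway Dockerfile string instead of a file on disk.
dockerfile = """FROM ghcr.io/aixcc-finals/base-builder
RUN ./build.sh
"""
digest = 'sha256:deadbeef'  # hypothetical image digest

new_lines = []
for line in dockerfile.splitlines(keepends=True):
    if line.strip().startswith('FROM'):
        line = 'FROM ghcr.io/aixcc-finals/base-builder@' + digest + '\n'
    new_lines.append(line)

rewritten = ''.join(new_lines)
print(rewritten)
```

Pinning by digest rather than tag makes the base image reproducible at the chosen point in time.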
+
119
+
120
+ def copy_src_from_docker(project_name, host_dir):
121
+ """Copy /src from docker to the host."""
122
+ # Copy /src to host.
123
+ image_name = 'gcr.io/oss-fuzz/' + project_name
124
+ src_dir = os.path.join(host_dir, 'src')
125
+ if os.path.exists(src_dir):
126
+ shutil.rmtree(src_dir, ignore_errors=True)
127
+
128
+ docker_args = [
129
+ '-v',
130
+ host_dir + ':/out',
131
+ image_name,
132
+ 'cp',
133
+ '-r',
134
+ '-p',
135
+ '/src',
136
+ '/out',
137
+ ]
138
+ helper.docker_run(docker_args)
139
+
140
+ # Submodules can have gitdir entries which point to absolute paths. Make them
141
+ # relative, as otherwise we can't do operations on the checkout on the host.
142
+ _make_gitdirs_relative(src_dir)
143
+ return src_dir
144
+
145
+
146
+ @retry.wrap(_IMAGE_BUILD_TRIES, 2)
147
+ def _build_image_with_retries(project_name):
148
+ """Build image with retries."""
149
+ return helper.build_image_impl(helper.Project(project_name))
150
+
151
+
152
+ def get_required_post_checkout_steps(dockerfile_path):
153
+ """Get required post checkout steps (best effort)."""
154
+
155
+ checkout_pattern = re.compile(r'\s*RUN\s*(git|svn|hg)')
156
+
157
+ # If the build.sh is copied from upstream, we need to copy it again after
158
+ # changing the revision to ensure correct building.
159
+ post_run_pattern = re.compile(r'\s*RUN\s*(.*build\.sh.*(\$SRC|/src).*)')
160
+
161
+ with open(dockerfile_path) as handle:
162
+ lines = handle.readlines()
163
+
164
+ subsequent_run_cmds = []
165
+ for i, line in enumerate(lines):
166
+ if checkout_pattern.match(line):
167
+ subsequent_run_cmds = []
168
+ continue
169
+
170
+ match = post_run_pattern.match(line)
171
+ if match:
172
+ workdir = helper.workdir_from_lines(lines[:i])
173
+ command = match.group(1)
174
+ subsequent_run_cmds.append((workdir, command))
175
+
176
+ return subsequent_run_cmds
177
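
The two regexes above can be sanity-checked against representative Dockerfile lines (the lines themselves are hypothetical):

```python
import re

# The checkout and post-run patterns from get_required_post_checkout_steps.
checkout_pattern = re.compile(r'\s*RUN\s*(git|svn|hg)')
post_run_pattern = re.compile(r'\s*RUN\s*(.*build\.sh.*(\$SRC|/src).*)')

# A VCS checkout line resets the list of steps to re-run.
print(bool(checkout_pattern.match('RUN git clone https://example.com/repo')))
# A build.sh copy into $SRC must be replayed after changing the revision.
print(bool(post_run_pattern.match('RUN cp build.sh $SRC/')))
# Ordinary RUN commands are ignored.
print(bool(post_run_pattern.match('RUN make all')))
```

Only commands appearing *after* the last checkout line are kept, which is why a matching checkout clears `subsequent_run_cmds`.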
+
178
+
179
+ # pylint: disable=too-many-locals
180
+ def build_fuzzers_from_commit(commit,
181
+ build_repo_manager,
182
+ host_src_path,
183
+ build_data,
184
+ base_builder_repo=None):
185
+ """Builds a OSS-Fuzz fuzzer at a specific commit SHA.
186
+
187
+ Args:
188
+ commit: The commit SHA to build the fuzzers at.
189
+ build_repo_manager: The repo manager for the project's checkout to build.
+ host_src_path: Path on the host to the copied /src checkout.
190
+ build_data: A struct containing project build information.
191
+ base_builder_repo: A BaseBuilderRepo.
192
+ Returns:
193
+ 0 on successful build or error code on failure.
194
+ """
195
+ oss_fuzz_repo_manager = repo_manager.RepoManager(helper.OSS_FUZZ_DIR)
196
+ num_retry = 1
197
+
198
+ def cleanup():
199
+ # Re-copy /src for a clean checkout every time.
200
+ copy_src_from_docker(build_data.project_name,
201
+ os.path.dirname(host_src_path))
202
+ build_repo_manager.fetch_all_remotes()
203
+
204
+ projects_dir = os.path.join('projects', build_data.project_name)
205
+ dockerfile_path = os.path.join(projects_dir, 'Dockerfile')
206
+
207
+ for i in range(num_retry + 1):
208
+ build_repo_manager.checkout_commit(commit, clean=False)
209
+
210
+ post_checkout_steps = get_required_post_checkout_steps(dockerfile_path)
211
+ for workdir, post_checkout_step in post_checkout_steps:
212
+ logging.info('Running post-checkout step `%s` in %s.', post_checkout_step,
213
+ workdir)
214
+ helper.docker_run([
215
+ '-w',
216
+ workdir,
217
+ '-v',
218
+ host_src_path + ':' + '/src',
219
+ 'gcr.io/oss-fuzz/' + build_data.project_name,
220
+ '/bin/bash',
221
+ '-c',
222
+ post_checkout_step,
223
+ ])
224
+
225
+ project = helper.Project(build_data.project_name)
226
+ result = helper.build_fuzzers_impl(project=project,
227
+ clean=True,
228
+ engine=build_data.engine,
229
+ sanitizer=build_data.sanitizer,
230
+ architecture=build_data.architecture,
231
+ env_to_add=None,
232
+ source_path=host_src_path,
233
+ mount_path='/src')
234
+ if result or i == num_retry:
235
+ break
236
+
237
+ # Retry with an OSS-Fuzz builder container that's closer to the project
238
+ # commit date.
239
+ commit_date = build_repo_manager.commit_date(commit)
240
+
241
+ # Find first change in the projects/<PROJECT> directory before the project
242
+ # commit date.
243
+ oss_fuzz_commit, _, _ = oss_fuzz_repo_manager.git([
244
+ 'log', '--before=' + commit_date.isoformat(), '-n1', '--format=%H',
245
+ projects_dir
246
+ ],
247
+ check_result=True)
248
+ oss_fuzz_commit = oss_fuzz_commit.strip()
249
+ if not oss_fuzz_commit:
250
+ logging.info(
251
+ 'Could not find first OSS-Fuzz commit prior to upstream commit. '
252
+ 'Falling back to oldest integration commit.')
253
+
254
+ # Find the oldest commit.
255
+ oss_fuzz_commit, _, _ = oss_fuzz_repo_manager.git(
256
+ ['log', '--reverse', '--format=%H', projects_dir], check_result=True)
257
+
258
+ oss_fuzz_commit = oss_fuzz_commit.splitlines()[0].strip()
259
+
260
+ if not oss_fuzz_commit:
261
+ logging.error('Failed to get oldest integration commit.')
262
+ break
263
+
264
+ logging.info('Build failed. Retrying on earlier OSS-Fuzz commit %s.',
265
+ oss_fuzz_commit)
266
+
267
+ # Check out projects/<PROJECT> dir to the commit that was found.
268
+ oss_fuzz_repo_manager.git(['checkout', oss_fuzz_commit, projects_dir],
269
+ check_result=True)
270
+
271
+ # Also use the closest base-builder we can find.
272
+ if base_builder_repo:
273
+ base_builder_digest = base_builder_repo.find_digest(commit_date)
274
+ if not base_builder_digest:
275
+ return False
276
+
277
+ logging.info('Using base-builder with digest %s.', base_builder_digest)
278
+ _replace_base_builder_digest(dockerfile_path, base_builder_digest)
279
+
280
+ # Rebuild image and re-copy src dir since things in /src could have changed.
281
+ if not _build_image_with_retries(build_data.project_name):
282
+ logging.error('Failed to rebuild image.')
283
+ return False
284
+
285
+ cleanup()
286
+
287
+ cleanup()
288
+ return result
289
+
290
+
291
+ def detect_main_repo(project_name, repo_name=None, commit=None):
292
+ """Checks a docker image for the main repo of an OSS-Fuzz project.
293
+
294
+ Note: The default is to use the repo name to detect the main repo.
295
+
296
+ Args:
297
+ project_name: The name of the oss-fuzz project.
298
+ repo_name: The name of the main repo in an OSS-Fuzz project.
299
+ commit: A commit SHA that is associated with the main repo.
300
+
301
+ Returns:
302
+ A tuple containing (the repo's origin, the repo's path).
303
+ """
304
+
305
+ if not repo_name and not commit:
306
+ logging.error(
307
+ 'Error: cannot detect main repo without a repo_name or a commit.')
308
+ return None, None
309
+ if repo_name and commit:
310
+ logging.info(
311
+ 'Both repo name and commit specified. Using repo name for detection.')
312
+
313
+ # Change to oss-fuzz main directory so helper.py runs correctly.
314
+ utils.chdir_to_root()
315
+ if not _build_image_with_retries(project_name):
316
+ logging.error('Error: building %s image failed.', project_name)
317
+ return None, None
318
+ docker_image_name = 'gcr.io/oss-fuzz/' + project_name
319
+ command_to_run = [
320
+ 'docker', 'run', '--rm', '-t', docker_image_name, 'python3',
321
+ os.path.join('/opt', 'cifuzz', 'detect_repo.py')
322
+ ]
323
+ if repo_name:
324
+ command_to_run.extend(['--repo_name', repo_name])
325
+ else:
326
+ command_to_run.extend(['--example_commit', commit])
327
+ out, _, _ = utils.execute(command_to_run)
328
+ match = re.search(r'\bDetected repo: ([^ ]+) ([^ ]+)', out.rstrip())
329
+ if match and match.group(1) and match.group(2):
330
+ return match.group(1), match.group(2)
331
+
332
+ logging.error('Failed to detect repo:\n%s', out)
333
+ return None, None
334
+
335
+
336
+ def load_base_builder_repo():
337
+ """Get base-image digests."""
338
+ gcloud_path = shutil.which('gcloud')
339
+ if not gcloud_path:
340
+ logging.warning('gcloud not found in PATH.')
341
+ return None
342
+
343
+ result, _, _ = utils.execute([
344
+ gcloud_path,
345
+ 'container',
346
+ 'images',
347
+ 'list-tags',
348
+ 'ghcr.io/aixcc-finals/base-builder',
349
+ '--format=json',
350
+ '--sort-by=timestamp',
351
+ ],
352
+ check_result=True)
353
+ result = json.loads(result)
354
+
355
+ repo = BaseBuilderRepo()
356
+ for image in result:
357
+ timestamp = datetime.datetime.fromisoformat(
358
+ image['timestamp']['datetime']).astimezone(datetime.timezone.utc)
359
+ repo.add_digest(timestamp, image['digest'])
360
+
361
+ return repo
362
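
The timestamp normalization used when ingesting `gcloud ... list-tags` output can be sketched against a single hypothetical entry:

```python
import datetime

# One hypothetical image entry in the shape returned by
# `gcloud container images list-tags --format=json`.
image = {'timestamp': {'datetime': '2023-05-01 12:30:00+02:00'},
         'digest': 'sha256:abc'}

# Parse the ISO-format string and normalize to UTC, as in
# load_base_builder_repo.
timestamp = datetime.datetime.fromisoformat(
    image['timestamp']['datetime']).astimezone(datetime.timezone.utc)
print(timestamp.isoformat())
```

Normalizing to UTC keeps the timestamps directly comparable to commit dates regardless of the registry's local offset.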
+
363
+
364
+ def main():
365
+ """Main function."""
366
+ logging.getLogger().setLevel(logging.INFO)
367
+
368
+ parser = argparse.ArgumentParser(
369
+ description='Build fuzzers at a specific commit')
370
+ parser.add_argument('--project_name',
371
+ help='The name of the project where the bug occurred.',
372
+ required=True)
373
+ parser.add_argument('--commit',
374
+ help='The newest commit SHA to be bisected.',
375
+ required=True)
376
+ parser.add_argument('--engine',
377
+ help='The default is "libfuzzer".',
378
+ default='libfuzzer')
379
+ parser.add_argument('--sanitizer',
380
+ default='address',
381
+ help='The default is "address".')
382
+ parser.add_argument('--architecture', default='x86_64')
383
+
384
+ args = parser.parse_args()
385
+
386
+ repo_url, repo_path = detect_main_repo(args.project_name, commit=args.commit)
387
+
388
+ if not repo_url or not repo_path:
389
+ raise ValueError('Main git repo cannot be determined.')
390
+
391
+ with tempfile.TemporaryDirectory() as tmp_dir:
392
+ host_src_dir = copy_src_from_docker(args.project_name, tmp_dir)
393
+ build_repo_manager = repo_manager.RepoManager(
394
+ os.path.join(host_src_dir, os.path.basename(repo_path)))
395
+ base_builder_repo = load_base_builder_repo()
396
+
397
+ build_data = BuildData(project_name=args.project_name,
398
+ engine=args.engine,
399
+ sanitizer=args.sanitizer,
400
+ architecture=args.architecture)
401
+ if not build_fuzzers_from_commit(args.commit,
402
+ build_repo_manager,
403
+ host_src_dir,
404
+ build_data,
405
+ base_builder_repo=base_builder_repo):
406
+ raise RuntimeError('Failed to build.')
407
+
408
+
409
+ if __name__ == '__main__':
410
+ main()
local-test-commons-compress-delta-02/fuzz-tooling/infra/cifuzz/test_data/external-project/.clusterfuzzlite/build.sh ADDED
@@ -0,0 +1,24 @@
1
+ #!/bin/bash -eu
2
+ # Copyright 2020 Google Inc.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ #
16
+ ################################################################################
17
+
18
+ make clean # Not strictly necessary, since we are building in a fresh dir.
19
+ make -j$(nproc) all # Build the fuzz targets.
20
+
21
+ # Copy the fuzzer executables, zipped corpora, options, and dictionary files to $OUT
22
+ find . -name '*_fuzzer' -exec cp -v '{}' $OUT ';'
23
+ find . -name '*_fuzzer.dict' -exec cp -v '{}' $OUT ';' # If you have dictionaries.
24
+ find . -name '*_fuzzer.options' -exec cp -v '{}' $OUT ';' # If you have custom options.
local-test-commons-compress-delta-02/fuzz-tooling/infra/constants.py ADDED
@@ -0,0 +1,49 @@
1
+ # Copyright 2021 Google LLC
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ ################################################################################
16
+ """Constants for OSS-Fuzz."""
17
+
18
+ DEFAULT_EXTERNAL_BUILD_INTEGRATION_PATH = '.clusterfuzzlite'
19
+
20
+ DEFAULT_LANGUAGE = 'c++'
21
+ DEFAULT_SANITIZER = 'address'
22
+ DEFAULT_ARCHITECTURE = 'x86_64'
23
+ DEFAULT_ENGINE = 'libfuzzer'
24
+ LANGUAGES = [
25
+ 'c',
26
+ 'c++',
27
+ 'go',
28
+ 'javascript',
29
+ 'jvm',
30
+ 'python',
31
+ 'rust',
32
+ 'swift',
33
+ 'ruby',
34
+ ]
35
+ LANGUAGES_WITH_COVERAGE_SUPPORT = [
36
+ 'c', 'c++', 'go', 'jvm', 'python', 'rust', 'swift', 'javascript', 'ruby'
37
+ ]
38
+ SANITIZERS = [
39
+ 'address',
40
+ 'none',
41
+ 'memory',
42
+ 'undefined',
43
+ 'thread',
44
+ 'coverage',
45
+ 'introspector',
46
+ 'hwaddress',
47
+ ]
48
+ ARCHITECTURES = ['i386', 'x86_64', 'aarch64']
49
+ ENGINES = ['libfuzzer', 'afl', 'honggfuzz', 'centipede', 'none', 'wycheproof']
local-test-commons-compress-delta-02/fuzz-tooling/infra/tools/wycheproof/.gitignore ADDED
@@ -0,0 +1 @@
1
+ wycheproof.zip
local-test-commons-compress-delta-02/fuzz-tooling/infra/tools/wycheproof/generate_job.py ADDED
@@ -0,0 +1,50 @@
1
+ #!/usr/bin/env python3
2
+ # Copyright 2022 Google LLC
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ #
16
+ ################################################################################
17
+ """Script for generating an OSS-Fuzz job for a wycheproof project."""
18
+ import sys
19
+
20
+
21
+ def main():
22
+ """Usage generate_job.py <project>."""
23
+ project = sys.argv[1]
24
+ print(f'Name: wycheproof_nosanitizer_{project}')
25
+ job_definition = f"""CUSTOM_BINARY = False
26
+ BAD_BUILD_CHECK = False
27
+ APP_NAME = WycheproofTarget.bash
28
+ THREAD_ALIVE_CHECK_INTERVAL = 10
29
+ TEST_TIMEOUT = 3600
30
+ CRASH_RETRIES = 1
31
+ AGGREGATE_COVERAGE = False
32
+ TESTCASE_COVERAGE = False
33
+ FILE_GITHUB_ISSUE = False
34
+ MANAGED = False
35
+ MAX_FUZZ_THREADS = 1
36
+ RELEASE_BUILD_BUCKET_PATH = gs://clusterfuzz-builds-wycheproof/{project}/{project}-none-([0-9]+).zip
37
+ PROJECT_NAME = {project}
38
+ SUMMARY_PREFIX = {project}
39
+ REVISION_VARS_URL = https://commondatastorage.googleapis.com/clusterfuzz-builds-wycheproof/{project}/{project}-none-%s.srcmap.json
40
+ FUZZ_LOGS_BUCKET = {project}-logs.clusterfuzz-external.appspot.com
41
+ CORPUS_BUCKET = {project}-corpus.clusterfuzz-external.appspot.com
42
+ QUARANTINE_BUCKET = {project}-quarantine.clusterfuzz-external.appspot.com
43
+ BACKUP_BUCKET = {project}-backup.clusterfuzz-external.appspot.com
44
+ AUTOMATIC_LABELS = Proj-{project},Engine-wycheproof
45
+ """
46
+ print(job_definition)
47
+
48
+
49
+ if __name__ == '__main__':
50
+ main()