_id | text | title
|---|---|---|
c8300 | // huffSort will sort symbols in decreasing order. | |
c8301 | // NewWriter creates a new Writer.
// Writes to the returned Writer are compressed and written to w.
//
// It is the caller's responsibility to call Close on the WriteCloser when done.
// Writes may be buffered and not flushed until Close. | |
c8302 | // NewWriterLevel is like NewWriter but specifies the compression level instead
// of assuming DefaultCompression.
//
// The compression level can be DefaultCompression, NoCompression, HuffmanOnly
// or any integer value between BestSpeed and BestCompression inclusive.
// The error returned will be nil if the level is ... | |
c8303 | // NewWriterLevelDict is like NewWriterLevel but specifies a dictionary to
// compress with.
//
// The dictionary may be nil. If not, its contents should not be modified until
// the Writer is closed. | |
c8304 | // Reset clears the state of the Writer z such that it is equivalent to its
// initial state from NewWriterLevel or NewWriterLevelDict, but instead writing
// to w. | |
c8305 | // writeHeader writes the ZLIB header. | |
c8306 | // Write writes a compressed form of p to the underlying io.Writer. The
// compressed bytes are not necessarily flushed until the Writer is closed or
// explicitly flushed. | |
c8307 | // NewWriter returns a new Writer writing a zip file to w. | |
c8308 | // SetOffset sets the offset of the beginning of the zip data within the
// underlying writer. It should be used when the zip data is appended to an
// existing file, such as a binary executable.
// It must be called before any data is written. | |
c8309 | // Flush flushes any buffered data to the underlying writer.
// Calling Flush is not normally necessary; calling Close is sufficient. | |
c8310 | // RegisterCompressor registers or overrides a custom compressor for a specific
// method ID. If a compressor for a given method is not found, Writer will
// default to looking up the compressor at the package level. | |
c8311 | // estimateSize returns the estimated size in bytes of the input represented in the
// histogram supplied. | |
c8312 | // minSize returns the minimum possible size considering the shannon limit. | |
c8313 | // decSymbolValue returns the transformed decSymbol for the given symbol. | |
c8314 | // setRLE will set the decoder to RLE mode. | |
c8315 | // transform will transform the decoder table into a table usable for
// decoding without having to apply the transformation while decoding.
// The state will contain the base value and the number of bits to read. | |
c8316 | // Initialize and decode the first state and symbol. | |
c8317 | // next returns the current symbol and sets the next state.
// At least tablelog bits must be available in the bit reader. | |
c8318 | // final returns the current state symbol without decoding the next. | |
c8319 | // NewReader creates a new ReadCloser.
// Reads from the returned ReadCloser read and decompress data from r.
// If r does not implement io.ByteReader, the decompressor may read more
// data than necessary from r.
// It is the caller's responsibility to call Close on the ReadCloser when done.
//
// The ReadCloser retur... | |
c8320 | // NewReaderDict is like NewReader but uses a preset dictionary.
// NewReaderDict ignores the dictionary if the compressed data does not refer to it.
// If the compressed data refers to a different dictionary, NewReaderDict returns ErrDictionary.
//
// The ReadCloser returned by NewReaderDict also implements Resetter. | |
c8321 | // Calling Close does not close the wrapped io.Reader originally passed to NewReader.
// In order for the ZLIB checksum to be verified, the reader must be
// fully consumed until io.EOF. | |
c8322 | // reset will reset the history to initial state of a frame.
// The history must already have been initialized to the desired size. | |
c8323 | // append bytes to history.
// This function will make sure there is space for them,
// provided the buffer has been allocated with enough extra space. | |
c8324 | // append bytes to history without ever discarding anything. | |
c8325 | // Decompress1X will decompress a 1X encoded stream.
// The length of the supplied input must match the end of a block exactly.
// Before this is called, the table must be initialized with ReadTable unless
// the encoder re-used the table. | |
c8326 | // matches will compare a decoding table to a coding table.
// Errors are written to the writer.
// Nothing will be written if table is ok. | |
c8327 | // Decompress a block of data.
// You can provide a scratch buffer to avoid allocations.
// If nil is provided a temporary one will be allocated.
// It is possible, but by no way guaranteed that corrupt data will
// return an error.
// It is up to the caller to verify integrity of the returned data.
// Use a predefined... | |
c8328 | // allocDtable will allocate decoding tables if they are not big enough. | |
c8329 | // decompress will decompress the bitstream.
// If the buffer is over-read an error is returned. | |
c8330 | // init will initialize the decoder and read the first state from the stream. | |
c8331 | // next returns the next symbol and sets the next state.
// At least tablelog bits must be available in the bit reader. | |
c8332 | // next will start decoding the next block from stream. | |
c8333 | // sendEOF will queue an error block on the frame.
// This will cause the frame decoder to return when it encounters the block.
// Returns true if the decoder was added. | |
c8334 | // checkCRC will check the checksum if the frame has one.
// Will return ErrCRCMismatch if crc check failed, otherwise nil. | |
c8335 | // runDecoder will create a sync decoder that will decode a block of data. | |
c8336 | // dynamicSize returns the size of dynamically encoded data in bits. | |
c8337 | // fixedSize returns the size of fixed Huffman encoded data in bits. | |
c8338 | // storedSize calculates the stored size, including header.
// The function returns the size in bits and whether the data
// fits inside a single stored block. | |
c8339 | // Write the header of a dynamic Huffman block to the output stream.
//
// numLiterals The number of literals specified in codegen
// numOffsets The number of offsets specified in codegen
// numCodegens The number of codegens used in codegen | |
c8340 | // writeBlock will write a block of tokens with the smallest encoding.
// The original input can be supplied, and if the huffman encoded data
// is larger than the original bytes, the data will be written as a
// stored block.
// If the input is nil, the tokens will always be Huffman encoded. | |
c8341 | // indexTokens indexes a slice of tokens, and updates
// literalFreq and offsetFreq, and generates literalEncoding
// and offsetEncoding.
// The number of literal and offset tokens is returned. | |
c8342 | // writeTokens writes a slice of tokens to the output.
// codes for literal and offset encoding must be supplied. | |
c8343 | // writeBlockHuff encodes a block of bytes as either
// Huffman encoded literals or uncompressed bytes if the
// result gains very little from compression. | |
c8344 | // OpenReader will open the Zip file specified by name and return a ReadCloser. | |
c8345 | // NewReader returns a new Reader reading from r, which is assumed to
// have the given size in bytes. | |
c8346 | // RegisterDecompressor registers or overrides a custom decompressor for a
// specific method ID. If a decompressor for a given method is not found,
// Reader will default to looking up the decompressor at the package level.
//
// Must not be called concurrently with Open on any Files in the Reader. | |
c8347 | // DataOffset returns the offset of the file's possibly-compressed
// data, relative to the beginning of the zip file.
//
// Most callers should instead use Open, which transparently
// decompresses data and verifies checksums. | |
c8348 | // Open returns a ReadCloser that provides access to the File's contents.
// Multiple files may be read concurrently. | |
c8349 | // findBodyOffset does the minimum work to verify the file has a header
// and returns the file body offset. | |
c8350 | // readDirectory64End reads the zip64 directory end and updates the
// directory end with the zip64 directory end values. | |
c8351 | // FileInfoHeader creates a partially-populated FileHeader from an
// os.FileInfo.
// Because os.FileInfo's Name method returns only the base name of
// the file it describes, it may be necessary to modify the Name field
// of the returned header to provide the full path name of the file. | |
c8352 | // ModTime returns the modification time in UTC.
// The resolution is 2s. | |
c8353 | // SetModTime sets the ModifiedTime and ModifiedDate fields to the given time in UTC.
// The resolution is 2s. | |
c8354 | // Mode returns the permission and mode bits for the FileHeader. | |
c8355 | // SetMode changes the permission and mode bits for the FileHeader. | |
c8356 | // isZip64 reports whether the file size exceeds the 32-bit limit. | |
c8357 | // set sets the code and length of an hcode. | |
c8358 | // Generates a HuffmanCode corresponding to the fixed literal table | |
c8359 | // Look at the leaves and assign them a bit count and an encoding as specified
// in RFC 1951 3.2.2 | |
c8360 | // Returns the offset code corresponding to a specific offset | |
c8361 | // init initializes dictDecoder to have a sliding window dictionary of the given
// size. If a preset dict is provided, it will initialize the dictionary with
// the contents of dict. | |
c8362 | // histSize reports the total amount of historical data in the dictionary. | |
c8363 | // readFlush returns a slice of the historical buffer that is ready to be
// emitted to the user. The data returned by readFlush must be fully consumed
// before calling any other dictDecoder methods. | |
c8364 | // Reset the encoding table. | |
c8365 | // WithDecoderLowmem will set whether to use a lower amount of memory,
// but possibly have to allocate more while running. | |
c8366 | // WithDecoderConcurrency will set the concurrency,
// meaning the maximum number of decoders to run concurrently.
// The value supplied must be at least 1.
// By default this will be set to GOMAXPROCS. | |
c8367 | // HistogramFinished can be called to indicate that the histogram has been populated.
// maxSymbol is the index of the highest set symbol of the next data segment.
// maxCount is the number of entries in the most populated entry.
// These are accepted at face value. | |
c8368 | // prepare will prepare and allocate scratch tables used for both compression and decompression. | |
c8369 | // Estimate returns a normalized compressibility estimate of block b.
// Values close to zero are likely uncompressible.
// Values above 0.1 are likely to be compressible.
// Values above 0.5 are very compressible.
// Very small lengths will return 0. | |
c8370 | // init will initialize the reader and set the input. | |
c8371 | // Uint8 returns the next byte. | |
c8372 | // Copy a single uncompressed data block from input to output. | |
c8373 | // copyData copies f.copyLen bytes from the underlying reader into f.hist.
// It pauses for reads when f.hist is full. | |
c8374 | // noEOF returns err, unless err == io.EOF, in which case it returns io.ErrUnexpectedEOF. | |
c8375 | // Read the next Huffman-encoded symbol from f according to h. | |
c8376 | // NewReader returns a new ReadCloser that can be used
// to read the uncompressed version of r.
// If r does not also implement io.ByteReader,
// the decompressor may read more data than necessary from r.
// It is the caller's responsibility to call Close on the ReadCloser
// when finished reading.
//
// The ReadClose... | |
c8377 | // fillBase will precalculate base offsets with the given bit distributions. | |
c8378 | // Compress the input bytes. Input must be < 2GB.
// Provide a Scratch buffer to avoid memory allocations.
// Note that the output is also kept in the scratch buffer.
// If input is too hard to compress, ErrIncompressible is returned.
// If input is a single byte value repeated, ErrUseRLE is returned. | |
c8379 | // init will initialize the compression state to the first symbol of the stream. | |
c8380 | // flush will write the tablelog to the output and flush the remaining full bytes. | |
c8381 | // String prints values as a human readable string. | |
c8382 | // allocCtable will allocate tables needed for compression.
// If existing tables are big enough, they are simply re-used. | |
c8383 | // normalizeCount will normalize the count of the symbols so
// the total is equal to the table size. | |
c8384 | // validateNorm validates the normalized histogram table. | |
c8385 | // Read bytes from the decompressed stream into p.
// Returns the number of bytes written and any error that occurred.
// When the stream is done, io.EOF will be returned. | |
c8386 | // Reset will reset the decoder to the supplied stream after the current has finished processing.
// Note that this functionality cannot be used after Close has been called. | |
c8387 | // drainOutput will drain the output until errEndOfStream is sent. | |
c8388 | // WriteTo writes data to w until there's no more data to write or when an error occurs.
// The return value n is the number of bytes written.
// Any error encountered during the write is also returned. | |
c8389 | // DecodeAll allows stateless decoding of a blob of bytes.
// Output will be appended to dst, so if the destination size is known
// you can pre-allocate the destination slice to avoid allocations.
// DecodeAll can be used concurrently.
// The Decoder concurrency limits will be respected. | |
c8390 | // nextBlock returns the next block.
// If an error occurs d.err will be set. | |
c8391 | // Close will release all resources.
// It is NOT possible to reuse the decoder after this. | |
c8392 | // start conditional rendering | |
c8393 | // bind a user-defined varying out variable to a fragment shader color number | |
c8394 | // specify whether data read via glReadPixels should be clamped | |
c8395 | // define a color lookup table | |
c8396 | // define a one-dimensional convolution filter | |
c8397 | // define a two-dimensional convolution filter | |
c8398 | // copy pixels into a color table | |
c8399 | // copy pixels into a one-dimensional convolution filter | |