Dataset Viewer (auto-converted to Parquet)
| column | type | min | max |
| --- | --- | --- | --- |
| repo | stringclasses | 1 value | |
| pull_number | int64 | 14 | 6.65k |
| instance_id | stringlengths | 19 | 21 |
| issue_numbers | sequencelengths | 1 | 2 |
| base_commit | stringlengths | 40 | 40 |
| patch | stringlengths | 505 | 226k |
| test_patch | stringlengths | 265 | 112k |
| problem_statement | stringlengths | 99 | 16.2k |
| hints_text | stringlengths | 0 | 33.2k |
| created_at | stringlengths | 20 | 20 |
| version | stringlengths | 3 | 4 |
| environment_setup_commit | stringlengths | 40 | 40 |
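The preview rows suggest that `instance_id` is derived from `repo` and `pull_number` by replacing the `/` in the repo name with `__` and appending the pull number (e.g. `apache/arrow-rs` + 6649 gives `apache__arrow-rs-6649`). A minimal sketch of that inferred convention — the helper name is ours, and the pattern is read off the preview, not documented by the dataset:

```python
def make_instance_id(repo: str, pull_number: int) -> str:
    """Build an instance_id like 'apache__arrow-rs-6649'.

    Inferred from the preview rows: '/' in the repo name becomes '__',
    then the pull number is appended with a hyphen.
    """
    return f"{repo.replace('/', '__')}-{pull_number}"


# Matches the first preview row (repo 'apache/arrow-rs', pull_number 6649):
print(make_instance_id("apache/arrow-rs", 6649))  # apache__arrow-rs-6649
```

The schema's `instance_id` length range (19 to 21) is consistent with this: pull numbers in the preview run from 14 (2 digits) to 6649 (4 digits) on the fixed 17-character prefix `apache__arrow-rs-`.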
**Row 1**
- repo: apache/arrow-rs
- pull_number: 6,649
- instance_id: apache__arrow-rs-6649
- issue_numbers: [ "6648" ]
- base_commit: 7bcc1ad988498def843180c6a4c95f9732f31a4b
- patch: diff --git a/parquet/src/record/reader.rs b/parquet/src/record/reader.rs index 1f9128a8b4f..fd6ca7cdd57 100644 --- a/parquet/src/record/reader.rs +++ b/parquet/src/record/reader.rs @@ -138,7 +138,17 @@ impl TreeBuilder { .column_descr_ptr(); let col_reader = row_group_reader.get_column_rea...
- test_patch: diff --git a/parquet-testing b/parquet-testing index 50af3d8ce20..550368ca77b 160000 --- a/parquet-testing +++ b/parquet-testing @@ -1,1 +1,1 @@ -Subproject commit 50af3d8ce206990d81014b1862e5ce7380dc3e08 +Subproject commit 550368ca77b97231efead39251a96bd6f8f08c6e
- problem_statement: Primitive REPEATED fields not contained in LIST annotated groups aren't read as lists by record reader **Describe the bug** Primitive REPEATED fields not contained in LIST annotated groups should be read as lists according to the format but aren't. **To Reproduce** <!-- Steps to reproduce the behavior: --> **...
- created_at: 2024-10-29T23:41:15Z
- version: 53.2
- environment_setup_commit: 7bcc1ad988498def843180c6a4c95f9732f31a4b
**Row 2**
- repo: apache/arrow-rs
- pull_number: 6,453
- instance_id: apache__arrow-rs-6453
- issue_numbers: [ "6282" ]
- base_commit: f41c258246cd4bd9d89228cded9ed54dbd00faff
- patch: diff --git a/arrow-flight/examples/flight_sql_server.rs b/arrow-flight/examples/flight_sql_server.rs index 81afecf85625..dd3a3943dd95 100644 --- a/arrow-flight/examples/flight_sql_server.rs +++ b/arrow-flight/examples/flight_sql_server.rs @@ -19,6 +19,7 @@ use arrow_flight::sql::server::PeekableFlightDataStream; use a...
- test_patch: diff --git a/arrow-integration-test/src/lib.rs b/arrow-integration-test/src/lib.rs index d1486fd5a153..ea5b545f2e81 100644 --- a/arrow-integration-test/src/lib.rs +++ b/arrow-integration-test/src/lib.rs @@ -21,6 +21,7 @@ //! //! This is not a canonical format, but provides a human-readable way of verifying language i...
- problem_statement: What is the highest compression level in gzip? **Which part is this question about** What is the highest compression level in gzip? **Describe your question** I see from other sources, including `flate2`, the highest compression level for gzip is 9 instead of 10. If we pass 10, it should be accepted by parquet b...
- hints_text: It seems flate2 documentation is wrong. ```rust /// Returns an integer representing the compression level, typically on a /// scale of 0-9 pub fn level(&self) -> u32 { self.0 } ``` But internally, inside `DeflateBackend::make` they have `debug_assert!(level.level() <= 10);`. Using compression level up t...
- created_at: 2024-09-25T04:27:02Z
- version: 53.0
- environment_setup_commit: f41c258246cd4bd9d89228cded9ed54dbd00faff
**Row 3**
- repo: apache/arrow-rs
- pull_number: 6,368
- instance_id: apache__arrow-rs-6368
- issue_numbers: [ "6366" ]
- base_commit: 0491294828a6480959ba3983355b415abbaf1174
- patch: diff --git a/.github/workflows/integration.yml b/.github/workflows/integration.yml index 1937fafe3a62..41edc1bb194e 100644 --- a/.github/workflows/integration.yml +++ b/.github/workflows/integration.yml @@ -48,7 +48,6 @@ on: - arrow/** jobs: - integration: name: Archery test With other arrows run...
- test_patch: diff --git a/arrow/tests/pyarrow.rs b/arrow/tests/pyarrow.rs index a1c365c31798..d9ebd0daa1cd 100644 --- a/arrow/tests/pyarrow.rs +++ b/arrow/tests/pyarrow.rs @@ -18,6 +18,8 @@ use arrow::array::{ArrayRef, Int32Array, StringArray}; use arrow::pyarrow::{FromPyArrow, ToPyArrow}; use arrow::record_batch::RecordBatch; +...
- problem_statement: Exporting Binary/Utf8View from arrow-rs to pyarrow fails **Describe the bug** Exporting binaryview arrow to pyarrow fails with `Expected at least 3 buffers for imported type binary_view, ArrowArray struct has 2` **To Reproduce** Construct binaryview array and export it over c data interface to pyarrow **Ex...
- created_at: 2024-09-06T21:16:12Z
- version: 53.0
- environment_setup_commit: f41c258246cd4bd9d89228cded9ed54dbd00faff
**Row 4**
- repo: apache/arrow-rs
- pull_number: 6,332
- instance_id: apache__arrow-rs-6332
- issue_numbers: [ "6331" ]
- base_commit: d4be752ef54ee30198d0aa1abd3838188482e992
- patch: diff --git a/arrow-flight/src/bin/flight_sql_client.rs b/arrow-flight/src/bin/flight_sql_client.rs index 296efc1c308e..c334b95a9a96 100644 --- a/arrow-flight/src/bin/flight_sql_client.rs +++ b/arrow-flight/src/bin/flight_sql_client.rs @@ -20,7 +20,10 @@ use std::{sync::Arc, time::Duration}; use anyhow::{bail, Context,...
- test_patch: diff --git a/arrow-flight/tests/flight_sql_client_cli.rs b/arrow-flight/tests/flight_sql_client_cli.rs index 168015d07e2d..6e1f6142c8b6 100644 --- a/arrow-flight/tests/flight_sql_client_cli.rs +++ b/arrow-flight/tests/flight_sql_client_cli.rs @@ -23,10 +23,12 @@ use crate::common::fixture::TestFixture; use arrow_array...
- problem_statement: Add Catalog DB Schema subcommands to `flight_sql_client` **Is your feature request related to a problem or challenge? Please describe what you are trying to do.** When using the `flight_sql_client` it can be helpful to interrogate the Flight SQL server for information about available catalogs, db schemas and tables....
- created_at: 2024-08-29T19:02:56Z
- version: 53.0
- environment_setup_commit: f41c258246cd4bd9d89228cded9ed54dbd00faff
**Row 5**
- repo: apache/arrow-rs
- pull_number: 6,328
- instance_id: apache__arrow-rs-6328
- issue_numbers: [ "6179" ]
- base_commit: 678517018ddfd21b202a94df13b06dfa1ab8a378
- patch: diff --git a/.github/workflows/rust.yml b/.github/workflows/rust.yml index a1644ee49b8d..1b65c5057de1 100644 --- a/.github/workflows/rust.yml +++ b/.github/workflows/rust.yml @@ -100,6 +100,13 @@ jobs: run: rustup component add rustfmt - name: Format arrow run: cargo fmt --all -- --check + ...
- test_patch: diff --git a/parquet/src/util/test_common/file_util.rs b/parquet/src/util/test_common/file_util.rs index c2dcd677360d..6c031358e795 100644 --- a/parquet/src/util/test_common/file_util.rs +++ b/parquet/src/util/test_common/file_util.rs @@ -19,8 +19,7 @@ use std::{fs, path::PathBuf, str::FromStr}; /// Returns path to ...
- problem_statement: Is cargo fmt no longer working properly in parquet crate **Which part is this question about** Code formatter. **Describe your question** I've noticed recently that running `cargo fmt` while I'm editing files doesn't always seem to catch problems. Running rustfmt directly will work. For instance, running `cargo fm...
- hints_text: Fascinatingly, I am seeing the same thing For example I deliberately introduced a formatting issue: ``` andrewlamb@Andrews-MacBook-Pro-2:~/Software/arrow-rs$ git diff diff --git a/parquet/src/compression.rs b/parquet/src/compression.rs index 10560210e4e..119af7e156f 100644 --- a/parquet/src/compression.rs ++...
- created_at: 2024-08-29T16:36:10Z
- version: 52.2
- environment_setup_commit: 678517018ddfd21b202a94df13b06dfa1ab8a378
**Row 6**
- repo: apache/arrow-rs
- pull_number: 6,320
- instance_id: apache__arrow-rs-6320
- issue_numbers: [ "6318" ]
- base_commit: a937869f892dc12c4730189e216bf3bd48c2561d
- patch: diff --git a/arrow/src/pyarrow.rs b/arrow/src/pyarrow.rs index 43cdb4fe0919..336398cbf22f 100644 --- a/arrow/src/pyarrow.rs +++ b/arrow/src/pyarrow.rs @@ -59,7 +59,7 @@ use std::convert::{From, TryFrom}; use std::ptr::{addr_of, addr_of_mut}; use std::sync::Arc; -use arrow_array::{RecordBatchIterator, RecordBatchRea...
- test_patch: diff --git a/arrow-pyarrow-integration-testing/tests/test_sql.py b/arrow-pyarrow-integration-testing/tests/test_sql.py index 5320d0a5343e..3b46d5729a1f 100644 --- a/arrow-pyarrow-integration-testing/tests/test_sql.py +++ b/arrow-pyarrow-integration-testing/tests/test_sql.py @@ -476,6 +476,29 @@ def test_tensor_array():...
- problem_statement: Allow converting empty `pyarrow.RecordBatch` to `arrow::RecordBatch` **Is your feature request related to a problem or challenge? Please describe what you are trying to do.** `datafusion-python` currently errors when calling `select count(*) from t` when `t` is a `pyarrow.Dataset`. The resulting `pyarrow.RecordBatc...
- created_at: 2024-08-28T17:30:36Z
- version: 52.2
- environment_setup_commit: 678517018ddfd21b202a94df13b06dfa1ab8a378
**Row 7**
- repo: apache/arrow-rs
- pull_number: 6,295
- instance_id: apache__arrow-rs-6295
- issue_numbers: [ "3577" ]
- base_commit: 8c956a9f9ab26c14072740cce64c2b99cb039b13
- patch: diff --git a/parquet/src/encodings/rle.rs b/parquet/src/encodings/rle.rs index 581f14b3c99a..97a122941f17 100644 --- a/parquet/src/encodings/rle.rs +++ b/parquet/src/encodings/rle.rs @@ -20,7 +20,6 @@ use std::{cmp, mem::size_of}; use bytes::Bytes; use crate::errors::{ParquetError, Result}; -use crate::util::bit_ut...
- test_patch: diff --git a/parquet/tests/arrow_reader/bad_data.rs b/parquet/tests/arrow_reader/bad_data.rs index 6e325f119710..cbd5d4d3b29e 100644 --- a/parquet/tests/arrow_reader/bad_data.rs +++ b/parquet/tests/arrow_reader/bad_data.rs @@ -134,3 +134,28 @@ fn read_file(name: &str) -> Result<usize, ParquetError> { } Ok(num...
- problem_statement: Don't Panic on Invalid Parquet Statistics **Is your feature request related to a problem or challenge? Please describe what you are trying to do.** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] (This section helps Arrow developers understand the context and *why* ...
- created_at: 2024-08-23T19:10:39Z
- version: 52.2
- environment_setup_commit: 678517018ddfd21b202a94df13b06dfa1ab8a378
**Row 8**
- repo: apache/arrow-rs
- pull_number: 6,290
- instance_id: apache__arrow-rs-6290
- issue_numbers: [ "6289" ]
- base_commit: ebcc4a585136cd1d9696c38c41f71c9ced181f57
- patch: diff --git a/parquet/Cargo.toml b/parquet/Cargo.toml index b97b2a571646..1d38e67a0f02 100644 --- a/parquet/Cargo.toml +++ b/parquet/Cargo.toml @@ -68,6 +68,7 @@ twox-hash = { version = "1.6", default-features = false } paste = { version = "1.0" } half = { version = "2.1", default-features = false, features = ["num-tr...
- test_patch: diff --git a/parquet/tests/arrow_reader/checksum.rs b/parquet/tests/arrow_reader/checksum.rs new file mode 100644 index 000000000000..c60908d8b95d --- /dev/null +++ b/parquet/tests/arrow_reader/checksum.rs @@ -0,0 +1,73 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license ag...
- problem_statement: Optionally verify 32-bit CRC checksum when decoding parquet pages Currently the PageHeader::crc is never used
- created_at: 2024-08-22T23:28:20Z
- version: 53.0
- environment_setup_commit: f41c258246cd4bd9d89228cded9ed54dbd00faff
**Row 9**
- repo: apache/arrow-rs
- pull_number: 6,269
- instance_id: apache__arrow-rs-6269
- issue_numbers: [ "6268" ]
- base_commit: acdd27a66ac7b5e07816dc648db00532110fb89a
- patch: diff --git a/parquet_derive/src/lib.rs b/parquet_derive/src/lib.rs index 16b6a6699e2d..9c93e2cca978 100644 --- a/parquet_derive/src/lib.rs +++ b/parquet_derive/src/lib.rs @@ -146,10 +146,10 @@ pub fn parquet_record_writer(input: proc_macro::TokenStream) -> proc_macro::Toke /// Derive flat, simple RecordReader implemen...
- test_patch: diff --git a/parquet_derive_test/src/lib.rs b/parquet_derive_test/src/lib.rs index e7c7896cb7f3..2cd69d03d731 100644 --- a/parquet_derive_test/src/lib.rs +++ b/parquet_derive_test/src/lib.rs @@ -73,9 +73,9 @@ struct APartiallyCompleteRecord { struct APartiallyOptionalRecord { pub bool: bool, pub string: Stri...
- problem_statement: parquet_derive: support reading selected columns from parquet file # Feature Description I'm effectively using `parquet_derive` in my project, and I found that there are two inconvenient constraints: 1. The `ParquetRecordReader` enforces the struct to organize fields exactly in the **same order** in the parquet f...
- created_at: 2024-08-18T14:39:49Z
- version: 52.2
- environment_setup_commit: 678517018ddfd21b202a94df13b06dfa1ab8a378
**Row 10**
- repo: apache/arrow-rs
- pull_number: 6,204
- instance_id: apache__arrow-rs-6204
- issue_numbers: [ "6203" ]
- base_commit: db239e5b3aa05985b0149187c8b93b88e2285b48
- patch: diff --git a/parquet/benches/arrow_reader.rs b/parquet/benches/arrow_reader.rs index 814e75c249bf.(...TRUNCATED)
- test_patch: diff --git a/parquet/src/util/test_common/page_util.rs b/parquet/src/util/test_common/page_util.rs (...TRUNCATED)
- problem_statement: Add benchmarks for `BYTE_STREAM_SPLIT` encoded Parquet `FIXED_LEN_BYTE_ARRAY` data **Is your featu(...TRUNCATED)
- created_at: 2024-08-06T22:53:02Z
- version: 52.2
- environment_setup_commit: 678517018ddfd21b202a94df13b06dfa1ab8a378
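Several column constraints in the preview are easy to check mechanically: `repo` takes a single value, both commit columns are 40-character git SHAs, `instance_id` lengths run 19 to 21, and `created_at` is a fixed-width 20-character UTC timestamp. A sketch of a row validator built only from those visible constraints — the function and the sample dict are ours, assembled from the first preview row:

```python
from datetime import datetime


def validate_row(row: dict) -> None:
    """Check a preview row against the column constraints visible above."""
    assert row["repo"] == "apache/arrow-rs"            # stringclasses: 1 value
    assert len(row["base_commit"]) == 40               # full git SHA-1
    assert len(row["environment_setup_commit"]) == 40
    assert 19 <= len(row["instance_id"]) <= 21         # stringlengths: 19..21
    # created_at is 20 chars, e.g. 2024-10-29T23:41:15Z
    datetime.strptime(row["created_at"], "%Y-%m-%dT%H:%M:%SZ")


# Sample assembled from the first preview row:
row = {
    "repo": "apache/arrow-rs",
    "pull_number": 6649,
    "instance_id": "apache__arrow-rs-6649",
    "base_commit": "7bcc1ad988498def843180c6a4c95f9732f31a4b",
    "environment_setup_commit": "7bcc1ad988498def843180c6a4c95f9732f31a4b",
    "created_at": "2024-10-29T23:41:15Z",
}
validate_row(row)  # passes silently for a well-formed row
```

A check like this is a cheap guard when consuming the Parquet conversion programmatically, before relying on fields such as `base_commit` for checkouts.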