repository: stringclasses, 156 values
issue title: stringlengths, 1–1.01k
labels: stringclasses, 8 values
body: stringlengths, 1–270k
prestodb/presto
[native] Disable Hive partial aggregation pushdown
Bug
HiveConnector in Velox doesn't support aggregation pushdown, which leads to errors like "VeloxUserError: Unsupported Hive column type: aggregate". This functionality is not needed because in Velox aggregations support pushdown via LazyVector; this is a more generic mechanism which avoids coupling aggregation with TableScan and coupling execution with the optimizer. We should disable the Hive partial aggregation pushdown configuration property and the corresponding partial aggregation pushdown session property for Prestissimo. cc @amitkdutta @spershin @tdcmeehan @aditi-pandit @majetideepak
prestodb/presto
Presto startup fails with discovery service errors
Bug
Your Environment: Presto version used: 0.278.1. Deployment (cloud or on-prem): cloud. Pastebin link to the complete debug log: —.
Got errors during Presto startup:
2024-02-29T02:22:12.018Z INFO main Bootstrap discovery.server.enabled false false
2024-02-29T02:22:17.947Z ERROR Discovery-1 com.facebook.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (presto/general): Lookup of presto failed with status code 404
2024-02-29T02:22:17.947Z ERROR Discovery-0 com.facebook.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (collector/general): Lookup of collector failed with status code 404
My config (cat config.properties):
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=12GB
query.max-memory-per-node=2GB
query.max-total-memory-per-node=4GB
discovery.uri=...
Additional information: curl -I against the discovery URI returns:
HTTP/1.1 301 Moved Permanently
Date: Thu, 29 Feb 2024 02:31:09 GMT
Location: ...
Content-Length: 0
prestodb/presto
LIKE expression with leading and trailing '%' does not work on CHAR columns
Bug
Your Environment: local Mac.
Expected Behavior: LIKE '%somevalue%' should work over VARCHAR and CHAR columns.
Current Behavior: returns an error over CHAR: Unexpected parameters (char(15), varchar, bigint) for function split. Expected: split(varchar(x), varchar(y), bigint), split(varchar(x), varchar(y)).
Possible Solution: see the discussion in #20436. A simple solution, as advocated there, is to remove the optimization introduced in that PR for CHAR columns, since it doesn't work. Alternatively, we could try to create the function that's missing. Finally, we could revert #20436.
Steps to Reproduce:
1. Run the MemoryQueryRunner in IntelliJ.
2. From the CLI: CREATE TABLE orders_char AS SELECT CAST(orderpriority AS CHAR(15)) AS orderpriority FROM orders;
3. From the CLI: SELECT * FROM orders_char WHERE orderpriority LIKE '%1-URGENT%';
4. Observe the failure.
Context: this has been present since 0.284, so we'll need to create patch fixes for 0.284 and 0.285. We'll also need to hold up 0.286 so this goes in.
prestodb/presto
Commit standards link in CONTRIBUTING.md is broken
Bug
The commit standards link in CONTRIBUTING.md is broken. cc @tdcmeehan @steveburnett
prestodb/presto
Pacific/Kanton timezone causes TestTimeZoneUtil.testNamedZone to fail
Bug
Your Environment: Presto version used: HEAD. Java version: openjdk 1.8.0_302 (OpenJDK Runtime Environment Temurin build 1.8.0_302-b08; OpenJDK 64-Bit Server VM Temurin build 25.302-b08, mixed mode).
Expected Behavior: tests pass.
Current Behavior:
[ERROR] Tests run: 8198, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 3,007.826 s <<< FAILURE! in TestSuite
[ERROR] com.facebook.presto.util.TestTimeZoneUtil.testNamedZone  Time elapsed: 0.037 s  <<< FAILURE!
java.time.zone.ZoneRulesException: Unknown time-zone ID: Pacific/Kanton
    at java.time.zone.ZoneRulesProvider.getProvider(ZoneRulesProvider.java:272)
    at java.time.zone.ZoneRulesProvider.getRules(ZoneRulesProvider.java:227)
    at java.time.ZoneRegion.ofId(ZoneRegion.java:120)
    at java.time.ZoneId.of(ZoneId.java:411)
    at java.time.ZoneId.of(ZoneId.java:359)
    at com.facebook.presto.util.TestTimeZoneUtil.assertTimeZone(TestTimeZoneUtil.java:108)
Possible Solution: either upgrade the minimum JDK to 8u321 (currently documented as 8u151 in CONTRIBUTING) or special-case it in the test.
Steps to Reproduce: run mvn test on JDK 8u302. See the linked discussion for details on the minimum JDK versions that contain this zone.
prestodb/presto
[native] task getDetails?id=xyz crashes with "std::out_of_range: out of range in dynamic array"
Bug
Looks like the crash happens in Task::toJson when accessing driverObj[index]:

folly::dynamic driverObj = folly::dynamic::array;
int index = 0;
for (auto& driver : drivers) {
  if (driver) {
    driverObj[index] = driver->toJson();
  }
}

We should use driverObj.push_back(driver->toJson()), or change driverObj to folly::dynamic::object.

GET task getDetails?id=xyz:
terminate called after throwing an instance of 'std::out_of_range'
  what():  out of range in dynamic array
E0213 00:38:52.602445 556 ExceptionTracer.cpp:220] Exception type: std::out_of_range (36 frames)
    0x000000000419580c __cxa_throw (folly/experimental/exception_tracer/ExceptionTracerLib.cpp:75)
    0x0000000004266966 folly::throw_exception<std::out_of_range>(...) (folly/lang/Exception.h:43)
    0x000000000417fa02 folly::detail::throw_exception<char const*>(...) (folly/lang/Exception.h:89)
    0x000000000456d0fb folly::dynamic::atImpl(folly::dynamic const&) const (folly/lang/Exception.h:115)
    0x000000000f48d9ec facebook::velox::exec::Task::toJson() const (folly/json/dynamic-inl.h:779)
    0x000000000fa0911f facebook::presto::PrestoTask::toJson() const (presto-native-execution/presto_cpp/main/PrestoTask.cpp:731)
    0x000000000f9f9e25 facebook::presto::PrestoServerOperations::taskOperation(facebook::presto::ServerOperation const&, proxygen::HTTPMessage*) (presto_cpp/main/PrestoServerOperations.cpp:208)
    0x000000000f9f7557 facebook::presto::PrestoServerOperations::runOperation(proxygen::HTTPMessage*, proxygen::ResponseHandler*) (presto_cpp/main/PrestoServerOperations.cpp:90)
cc @tangjiangling @spershin
prestodb/presto
Presto Hive connector creates too many small splits
Bug
We have a table with wide columns (maps), and it takes a long time (3.6 min) to SELECT COUNT(1) FROM the table (Meta-internal query ID: 20240212_231123_91020_7kizs on op_perf). Using the session property hive.file_splittable=false makes the same query take 10 seconds (Meta-internal query ID: 20240212_230955_07056_7jgdp on op_perf). This session property creates exactly 1 split from 1 file in the table; if it is true, the Presto scheduler will instead create multiple splits from the same file. This happens because the Presto scheduler creates splits according to file size, but does not take into account that we read only selected columns from the file. This is an extreme case where we read no data, just metadata. In general, we should be able to tune split size based on the amount of data we select from the file, so we can have fewer splits and Presto runs fast. (There is an orthogonal problem, that Presto is slow to process a large number of splits, which needs to be looked at in the future; this issue focuses on increasing split size when possible so queries can run with fewer splits.) There are two ways to fix this:
1. Use column stats from the metastore to find the relative size of the data that we will be reading, and make split size according to it, not the whole file size.
2. Make it adaptive: if workers report they didn't read much from the file when processing splits, increase the split size.
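The first proposed fix (sizing splits from column stats rather than whole-file size) can be sketched as a simple heuristic. This is a hypothetical illustration, not Presto scheduler code; the method name, parameters, and the 1% read-fraction floor are all assumptions:

```java
public class SplitSizing {
    /**
     * Hypothetical sketch: grow the split size in inverse proportion to the
     * fraction of file bytes the query actually reads (estimated from
     * metastore column stats), clamped to a configured maximum.
     */
    static long scaledSplitSize(long baseSplitSize, long selectedColumnBytes,
                                long totalFileBytes, long maxSplitSize) {
        // Floor the read fraction at 1% so metadata-only scans don't divide by ~0.
        double readFraction = Math.max((double) selectedColumnBytes / totalFileBytes, 0.01);
        long scaled = (long) (baseSplitSize / readFraction);
        return Math.min(scaled, maxSplitSize);
    }

    public static void main(String[] args) {
        long base = 64L << 20;  // 64 MB base split size (assumed default)
        long max = 1L << 30;    // 1 GB cap (assumed)
        // Query reads 1/4 of the file's bytes -> splits grow 4x, to 256 MB.
        System.out.println(scaledSplitSize(base, 25, 100, max));
        // Metadata-only scan, e.g. COUNT(1) -> clamped at the cap.
        System.out.println(scaledSplitSize(base, 0, 100, max));
    }
}
```

The clamp matters: without it, a COUNT(1) query that reads no column data would request an unbounded split size.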
prestodb/presto
Presto CTE project pushdown creates wrong order of columns
Bug
Added a test case which fails with a different join order. On debugging, the order of the last two columns is interchanged.
prestodb/presto
SingleStore tests fail with "not enough disk space"
Bug
TestSingleStoreDistributedQueries is failing on CI with: SQL CONN 55: Not enough disk space to create or attach database 'tpch' on leaf 127.0.0.1:3307. Estimated available: 11099 MB, estimated needed: 13763 MB. (Sample run linked.)
Your Environment: Presto version used: master branch.
Expected Behavior: the tests should not fail.
Current Behavior: the test is flaky and has failed a couple of times for my PRs.
prestodb/presto
from_unixtime(double) implementation harbors a double-precision bug
Bug
Your Environment: Presto cluster, latest version.
Expected Behavior: the query (see the Steps to Reproduce section) should return event_time = 1.7041507095805E9, action_ts = 2024-01-01 15:11:49.580. The machine-representable double for 1.7041507095805E9 is 1704150709.58049988746643066406: 1704150709 goes to seconds, and 0.580499887466 should be rounded to 580 milliseconds, because 580.499887466 < 580.5.
Current Behavior: currently the query returns event_time = 1.7041507095805E9, action_ts = 2024-01-01 15:11:49.581; the resulting timestamp gets 1704150709 seconds and 581 milliseconds. This is because the function is implemented as Math.round(unixTime * 1000); the unixTime * 1000 part loses precision in some corner cases. In our case it pushes the result to 1704150709580.500000000, which after rounding gives us 1704150709581, meaning 581 milliseconds.
Possible Solution: changing the existing code to (long) Math.floor(unixTime) * 1000 + Math.round((unixTime - Math.floor(unixTime)) * 1000) would avoid the accumulated floating-point error and produce the right result.
Steps to Reproduce: a table with a DOUBLE column, in this example event_time. Query to reproduce:
SELECT event_time, from_unixtime(event_time) action_ts FROM (SELECT IF(event_time IS NOT NULL, 1.7041507095805E9, 1.7E9) AS event_time FROM table1 LIMIT 1);
Context: the issue was noticed when comparing Presto native with Presto on real-world data and queries.
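Both the precision loss and the proposed fix can be demonstrated in plain Java (a minimal sketch, not the actual Presto DateTimeFunctions code):

```java
public class FromUnixtimePrecision {
    public static void main(String[] args) {
        double unixtime = 1.7041507095805e9;

        // Current approach: multiply first, then round.
        // unixtime * 1000 lands on exactly 1704150709580.5, which rounds up.
        long naiveMillis = Math.round(unixtime * 1000);

        // Proposed fix: split off the whole seconds before scaling, so the
        // fractional part (0.58049988...) is rounded without accumulated error.
        long seconds = (long) Math.floor(unixtime);
        long fixedMillis = seconds * 1000 + Math.round((unixtime - seconds) * 1000);

        System.out.println(naiveMillis); // 1704150709581 (wrong: .581)
        System.out.println(fixedMillis); // 1704150709580 (right: .580)
    }
}
```

The key point is that the exact product 1704150709580.49988... is not representable as a double; the nearest double happens to be exactly ...580.5, so Math.round then rounds the wrong way.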
prestodb/presto
Consolidate LegacySqlQueryScheduler and SqlQueryScheduler
Bug
The difference between SqlQueryScheduler and LegacySqlQueryScheduler is the former's ability to support full section retries in Presto Unlimited. This was implemented before moving to the Presto-on-Spark strategy, which supports more granular retries through Spark. The new scheduler has never been fully rolled out. The proposal is to remove SqlQueryScheduler, rename LegacySqlQueryScheduler to SqlQueryScheduler, and drop the related properties and interfaces.
prestodb/presto
[native] Hidden columns missing in Prestissimo Hive connector
Bug
The Presto Hive connector supports 3 hidden columns ($path, $file_size, $file_modified_time), as documented under "Extra Hidden Columns". While Java Presto nicely shows all 3 hidden columns on my unmanaged Hive table, only 1 of the 3 works in Prestissimo on the same unmanaged Hive table.
Current behavior: the queries below, run in Prestissimo, show that only $path is there, but not $file_size and $file_modified_time. The same queries work in Java Presto.

presto:tpcds_sf10000_hive> SELECT "$path" FROM call_center LIMIT 2;
 path
 s3a://tpcdssf10000hive/call_center/20230524_070932_00669_hu23v_1dbbf97d-0734-4e0a-89d5-60e29be4ef35
 s3a://tpcdssf10000hive/call_center/20230524_070932_00669_hu23v_1dbbf97d-0734-4e0a-89d5-60e29be4ef35
(2 rows)

presto:tpcds_sf10000_hive> SELECT "$file_size" FROM call_center LIMIT 2;
Query 20240204_200855_00007_7x2xd FAILED (2 nodes; splits: 2 total, 0 done (0.00%); latency: client side 160ms, server side 140ms; 0 rows, 0B, 0 rows/s, 0B/s)
Query 20240204_200855_00007_7x2xd failed: Field not found: $file_size. Available fields are: cc_call_center_sk, cc_call_center_id, cc_rec_start_date, cc_rec_end_date, cc_closed_date_sk, cc_open_date_sk, cc_name, cc_class, cc_employees, cc_sq_ft, cc_hours, cc_manager, cc_mkt_id, cc_mkt_class, cc_mkt_desc, cc_market_manager, cc_division, cc_division_name, cc_company, cc_company_name, cc_street_number, cc_street_name, cc_street_type, cc_suite_number, cc_city, cc_county, cc_state, cc_zip, cc_country, cc_gmt_offset, cc_tax_percentage. Split: hive s3a://tpcdssf10000hive/call_center/20230524_070932_00669_hu23v_1dbbf97d-0734-4e0a-89d5-60e29be4ef35 0:14229. Task: 20240204_200855_00007_7x2xd.1.0.0.0

presto:tpcds_sf10000_hive> SELECT "$file_modified_time" FROM call_center LIMIT 5;
Query 20240204_200904_00008_7x2xd FAILED (2 nodes; splits: 2 total, 0 done (0.00%); latency: client side 166ms, server side 147ms; 0 rows, 0B, 0 rows/s, 0B/s)
Query 20240204_200904_00008_7x2xd failed: Field not found: $file_modified_time. Available fields are: (same list as above). Split: hive s3a://tpcdssf10000hive/call_center/20230524_070932_00669_hu23v_1dbbf97d-0734-4e0a-89d5-60e29be4ef35 0:14229. Task: 20240204_200904_00008_7x2xd.1.0.0.0
prestodb/presto
Add documentation for the history-based optimizer
Bug
Many prominent improvements are being added to the history-based optimizer. We already have documentation for the Redis HBO provider, yet lack documentation for how HBO works and how to enable it. We should add this documentation to encourage its use. It seems like it should be added to the Query Optimizer section. cc @pranjalssh @mlyublena @feilong-liu @jaystarshot
prestodb/presto
Error thrown when trying to execute an INSERT INTO statement when whitespace is present in a column name in the Iceberg connector
Bug
When attempting to execute an INSERT INTO statement with the Iceberg connector, an error is encountered when a column name involved in the operation contains whitespace. The error "Presto Type is null" gets thrown, and the insert operation does not succeed.
Environment: Presto version used: v0.286. Data source and connector used: Iceberg connector.
Expected Behavior: the error "Presto Type is null" should not be thrown, thereby allowing successful execution of the INSERT INTO statement.
Current Behavior: in the Iceberg connector, the INSERT INTO statement does not work as expected when whitespace is present in a column name of a table.
Possible Solution: PR (linked).
Steps to Reproduce:
1. Create a table in Iceberg with a column name that has a space in it, for example: CREATE TABLE berry ("berry name" VARCHAR(20));
2. Try inserting values into the table using an INSERT INTO statement, e.g. INSERT INTO berry VALUES ('berry one');
3. Insertion will not be possible; it throws the error "Presto Type is null".
prestodb/presto
[native] Prestissimo build is failing on macOS
Bug
A recent fmt upgrade in Velox causes the build to fail. This is being fixed.
prestodb/presto
Documentation added in #21057 doesn't accurately reflect the state of resource group management in Presto
Bug
According to the documentation, resource manager selector rules support a userGroup component. However, Presto 0.285 doesn't actually support this: the selector doesn't contain a member that represents this component, and the spec doesn't have one either.
Your Environment: Presto version used: 0.285. Storage: n/a. Data source and connector used: n/a. Deployment (cloud or on-prem): on-prem. Pastebin link to the complete debug log: n/a.
Expected Behavior: when I configure a selector rule with a userGroup component (e.g. via resource_groups.json), all queries from users in the given group must run in the group specified in the selector rule.
Current Behavior: this selector doesn't fire.
Possible Solution: update the documentation to reflect the real status of this feature until it's actually implemented and released.
Steps to Reproduce:
1. Install Presto 0.285 and configure the file-based resource group manager as shown in the documentation.
2. Run a query as a user that is in the "admin" user group but isn't "bob".
3. The query should run in the "admin" resource group, but runs in an ad-hoc (global.adhoc.other.user) resource group instead.
Context: this is one of the features from the other, very similar engine that is missing in Presto.
prestodb/presto
Drop the v1/task endpoint in favor of v1/task/async
Good First Issue
Make AsyncPageTransportServlet respond to v1/task.
prestodb/presto
[native] Output operator is missing in the stage summary page
Bug
Current behavior: in the same production query, C++ vs Java (see screenshots).
Possible reason: from debugging the front end, the Output operator should have the same planNodeId as its parent (JSX logic, L390), but the C++ PartitionedOutput's planNodeId is "root".
C++:
  { stageId: 1, stageExecutionId: 0, pipelineId: 0, operatorId: 1, planNodeId: "185", operatorType: "PartialAggregation" }
  { stageId: 1, stageExecutionId: 0, pipelineId: 0, operatorId: 2, planNodeId: "root", operatorType: "PartitionedOutput" }
Java:
  { stageId: 1, stageExecutionId: 0, pipelineId: 0, operatorId: 1, planNodeId: "185", operatorType: "AggregationOperator" }
  { stageId: 1, stageExecutionId: 0, pipelineId: 0, operatorId: 2, planNodeId: "185", operatorType: "TaskOutputOperator" }
prestodb/presto
Query 20240119_063836_00856_pugph failed: Unsupported column type: timestamp
Bug
I was trying to use the MySQL connector and insert data resulting from a SELECT on another table into a MySQL table. There is a column with DATETIME type in the MySQL table. When I execute the INSERT INTO ... SELECT query, an exception occurs. For now I can only create a temporary table in MySQL and use it as a bridge. Why is TIMESTAMP not supported yet?
Presto version used: 0.280-a95c1b4.
Data source and connector used: MySQL connector.
Screenshot: image.png (uploaded).
prestodb/presto
MemoryCacheStats: ssdStats is null at the time of recording metrics
Bug
Must check if the object is null here (L281):

*** Aborted at 1705102873 (Unix time, try "date -d @1705102873" if you are using GNU date) ***
PC: @ 0x0 (unknown)
*** Signal 11 (SIGSEGV) (@0x20) received by PID 35075 (TID 0x16f46b000); stack trace: ***
    @ 0x187469a24 _sigtramp
    @ 0x100fd6a1c facebook::presto::PeriodicTaskManager::updateCacheStats()
    @ 0x100feb66c facebook::presto::PeriodicTaskManager::addCacheStatsUpdateTask()::$_7::operator()()
    @ 0x100feb644 folly::detail::function::FunctionTraits<...>::callSmall<...>()
    @ 0x108b1629c folly::detail::function::FunctionTraits<...>::operator()()
    @ 0x108cbb16c folly::FunctionScheduler::runOneFunction()
    @ 0x108cba9f0 folly::FunctionScheduler::run()
    @ 0x108cc4c14 folly::FunctionScheduler::start()::$_2::operator()()
    @ 0x108cc44f0 std::__1::__thread_proxy<...>()
    @ 0x18743b034 _pthread_start

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
prestodb/presto
Add documentation of the function format() for FLOAT types
Bug
Add documentation of the function format() for FLOAT types.
Expected Behavior: documentation.
Context: after diff 57df596d2b9a7b1093567006ad32935b2f3a8d78c296be1d75d0a5aad567fa9c (this PR is merged) we can take up this task.
prestodb/presto
Parquet dereference pushdown is broken
Bug
presto:test> SET SESSION hive.parquet_dereference_pushdown_enabled=true;
SET SESSION
presto:test> SELECT address.city FROM person;
Query 20240107_071606_00090_jrwtp FAILED (1 node; splits: 6 total, 0 done (0.00%); latency: client side 91ms, server side 83ms; 0 rows, 0B, 0 rows/s, 0B/s)
Query 20240107_071606_00090_jrwtp failed: Error opening Hive split file:/tmp/hive_data/test/person/20240106_112604_00068_53jda_206000b4-ac48-430f-8f9e-063fce5e99a7 (offset=0, length=940): Index 1 out of bounds for length 3

presto:test> SET SESSION hive.parquet_dereference_pushdown_enabled=false;
SET SESSION
presto:test> SELECT address.city FROM person;
   city
 shanghai
 hello
(2 rows)
Query 20240107_071727_00092_jrwtp FINISHED (1 node; splits: 6 total, 6 done (100.00%); latency: client side 88ms, server side 72ms; 2 rows, 2.08KB, 27 rows/s, 28.9KB/s)
prestodb/presto
There is a memory leak when querying Iceberg tables with delete files
Bug
One of our internal clusters has recently launched many Iceberg queries with delete files. After a day, I found that the coordinator's memory was always above 90%, as shown in the attached picture (Xnip2023-12-28_17-24-15). We found in the coordinator's GC log that there were full-GC records almost every minute. We suspected a memory leak, so we dumped the coordinator's memory and used MAT's ParseHeapDump.sh script to analyze the dump data. We filtered common types (discard ratio 85%; discard pattern: char[], byte[], java.lang.String, java.lang.Long, java.lang.Object, java.lang.Integer). After analysis, we found that org.apache.iceberg.DeleteFileIndex objects occupy 87.58% of the memory. Because the memory of the online cluster is too large, we could not tell which class the org.apache.iceberg.DeleteFileIndex objects are referenced by, so I reproduced the problem locally and found that they are referenced by the FileScanTaskIterable of the IcebergSplitSource class, and every time an Iceberg table with delete files is queried, the number of org.apache.iceberg.DeleteFileIndex objects in memory increases.
prestodb/presto
Precision loss issue in DECIMAL type division
Bug
Your Environment: Presto version used: 0.272.
Expected Behavior (Presto):
SELECT 0.015/30;   -- result: 0.0005
SELECT 0.0150/30;  -- result: 0.0005
SELECT 0.01500/30; -- result: 0.00050
Spark:
SELECT 0.015/30;   -- result: 0.000500
SELECT 0.0150/30;  -- result: 0.0005000
SELECT 0.01500/30; -- result: 0.00050000
Current Behavior (Presto):
SELECT 0.015/30;   -- result: 0.001
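The wrong answer is consistent with the quotient being rounded to the dividend's scale. A BigDecimal sketch of that interpretation (illustrative only; Presto's actual DECIMAL arithmetic lives in its operator implementations, and the scale rule here is an assumption):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalDivisionScale {
    public static void main(String[] args) {
        BigDecimal dividend = new BigDecimal("0.015");
        BigDecimal divisor = new BigDecimal("30");

        // Exact quotient is 0.0005. If the result scale is capped at the
        // dividend's scale (3), HALF_UP rounding produces the reported 0.001.
        BigDecimal scale3 = dividend.divide(divisor, 3, RoundingMode.HALF_UP);

        // With one more digit of scale the exact quotient is representable.
        BigDecimal scale4 = dividend.divide(divisor, 4, RoundingMode.HALF_UP);

        System.out.println(scale3.toPlainString()); // 0.001
        System.out.println(scale4.toPlainString()); // 0.0005
    }
}
```

This also explains why adding trailing zeros to the literal (0.01500) changes the output: the literal's scale feeds into the result scale.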
prestodb/presto
Preserve case for RowType's field names and JSON content when casting
Bug
Description: fixes issue #20701. In the parsing stage, when we try to build a CAST expression, we should not simply translate the target type string to lower case, as the target type string may have some quoted field names which should retain their original case. The same applies to JSON strings: when we try to cast one to another type, like a RowType, we should not roughly translate it to lower case; that would corrupt the JSON data. This PR tries to fix the problems above.
Test plan:
- Enhanced test case TestRowOperators#testJsonToRow: added case-preservation verification for JSON cast to other types.
- Enhanced test case TestRowOperators#testRowCast: added the example mentioned in issue #20701.
- Made sure this fix does not affect existing test cases.
Contributor checklist: submission complies with the development, formatting, commit-message, and pull-request attribution guidelines; the PR description addresses the issue accurately and concisely; adequate tests were added; CI passed.
Release notes: no release notes.
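Why blanket lowercasing corrupts a JSON cast can be shown with a toy example (the field names here are hypothetical; this is not the Presto parser code):

```java
public class JsonCasePreservation {
    public static void main(String[] args) {
        // A JSON document whose keys are case-sensitive.
        String json = "{\"userName\": \"alice\", \"userId\": 7}";

        // Lowercasing the whole string as part of CAST target-type handling
        // rewrites the keys themselves, not just the type name.
        String lowered = json.toLowerCase();

        System.out.println(lowered.contains("\"userName\"")); // false: original key is gone
        System.out.println(lowered.contains("\"username\"")); // true: corrupted key
    }
}
```

Any downstream lookup of the original field name against the lowered document now misses, which is exactly the corruption the PR description refers to.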
prestodb/presto
Fix bug for LIKE pattern with multi-byte characters
Bug
Description: fixes #21577. In UTF-8, many characters (like Chinese characters) are represented by more than one byte, so we shouldn't simply use the character length of a string to handle its corresponding Slice. This PR fixes the problem.
Test plan: enhanced test case TestConditions#testLike.
Contributor checklist: submission complies with the development, formatting, commit-message, and pull-request attribution guidelines; the PR description addresses the issue accurately and concisely; adequate tests were added; CI passed.
Release notes: no release notes.
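The core of the bug: for non-ASCII strings, character count and UTF-8 byte count diverge, so using the character length to index into a byte-addressed Slice reads the wrong range. A minimal demonstration with the plain JDK (not Presto's Slice API):

```java
import java.nio.charset.StandardCharsets;

public class Utf8LengthMismatch {
    public static void main(String[] args) {
        String ascii = "abc";
        String chinese = "\u4E2D\u6587"; // two Chinese characters

        // For ASCII, char count == UTF-8 byte count, so the bug is invisible.
        System.out.println(ascii.length());                                // 3
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length); // 3

        // Each of these characters needs 3 bytes in UTF-8.
        System.out.println(chinese.length());                                // 2
        System.out.println(chinese.getBytes(StandardCharsets.UTF_8).length); // 6
        // Taking 2 bytes out of the 6-byte buffer because length() == 2
        // would cut the first character mid-sequence.
    }
}
```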
prestodb/presto
Does the LIKE clause support Unicode?
Bug
presto:sf1> SELECT … LIKE 'a', … LIKE 'helloworld', … LIKE 'hello', … 'ab' LIKE 'ab' …;
 col0  | col1  | col2 | col3  | col4
 false | false | true | false | …
(1 row)
Query 20231220_052149_00012_danvg FINISHED (1 node; splits: 5 total, 5 done (100.00%); latency: client side 106ms, server side 78ms; 0 rows, 0B, 0 rows/s, 0B/s)
It seems that when the input string or the pattern contains Unicode, the result is not right.
prestodb/presto
TPC-DS SF 10K Q78: VeloxRuntimeError: allocateNonContiguous failed with 2 pages from memory pool
Bug
Click to expand the query SQL (TPC-DS Q78):

WITH ws AS
  (SELECT d_year AS ws_sold_year, ws_item_sk,
          ws_bill_customer_sk ws_customer_sk,
          sum(ws_quantity) ws_qty,
          sum(ws_wholesale_cost) ws_wc,
          sum(ws_sales_price) ws_sp
   FROM web_sales
   LEFT JOIN web_returns ON wr_order_number = ws_order_number AND ws_item_sk = wr_item_sk
   JOIN date_dim ON ws_sold_date_sk = d_date_sk
   WHERE wr_order_number IS NULL
   GROUP BY d_year, ws_item_sk, ws_bill_customer_sk),
cs AS
  (SELECT d_year AS cs_sold_year, cs_item_sk,
          cs_bill_customer_sk cs_customer_sk,
          sum(cs_quantity) cs_qty,
          sum(cs_wholesale_cost) cs_wc,
          sum(cs_sales_price) cs_sp
   FROM catalog_sales
   LEFT JOIN catalog_returns ON cr_order_number = cs_order_number AND cs_item_sk = cr_item_sk
   JOIN date_dim ON cs_sold_date_sk = d_date_sk
   WHERE cr_order_number IS NULL
   GROUP BY d_year, cs_item_sk, cs_bill_customer_sk),
ss AS
  (SELECT d_year AS ss_sold_year, ss_item_sk,
          ss_customer_sk,
          sum(ss_quantity) ss_qty,
          sum(ss_wholesale_cost) ss_wc,
          sum(ss_sales_price) ss_sp
   FROM store_sales
   LEFT JOIN store_returns ON sr_ticket_number = ss_ticket_number AND ss_item_sk = sr_item_sk
   JOIN date_dim ON ss_sold_date_sk = d_date_sk
   WHERE sr_ticket_number IS NULL
   GROUP BY d_year, ss_item_sk, ss_customer_sk)
SELECT ss_customer_sk,
       round(cast(ss_qty AS decimal(20, 0)) / (coalesce(ws_qty, 0) + coalesce(cs_qty, 0)), 2) ratio,
       ss_qty store_qty, ss_wc store_wholesale_cost, ss_sp store_sales_price,
       coalesce(ws_qty, 0) + coalesce(cs_qty, 0) other_chan_qty,
       coalesce(ws_wc, 0) + coalesce(cs_wc, 0) other_chan_wholesale_cost,
       coalesce(ws_sp, 0) + coalesce(cs_sp, 0) other_chan_sales_price
FROM ss
LEFT JOIN ws ON ws_sold_year = ss_sold_year AND ws_item_sk = ss_item_sk AND ws_customer_sk = ss_customer_sk
LEFT JOIN cs ON cs_sold_year = ss_sold_year AND cs_item_sk = ss_item_sk AND cs_customer_sk = ss_customer_sk
WHERE (coalesce(ws_qty, 0) > 0 OR coalesce(cs_qty, 0) > 0) AND ss_sold_year = 2001
ORDER BY ss_customer_sk, ss_qty DESC, ss_wc DESC, ss_sp DESC,
         other_chan_qty, other_chan_wholesale_cost, other_chan_sales_price, ratio
LIMIT 100

Query JSON: query_78_no_spill.json

The query fails with the following error on a 16-node cluster (r5.8xlarge: 32 vCPU, 256 GB memory, x16):

VeloxRuntimeError: allocateNonContiguous failed with 2 pages from Memory Pool[op.root.0.3.PartitionedOutput LEAF root.20231110_010110_00029_jt2nx parent[node.root] MMAP track-usage thread-safe], after failing to evict from cache. State: AsyncDataCache: Cache size: 517.68MB, tinySize: 1.64KB, large size: 517.68MB; Cache entries: 112, read pins: 103, write pins: 9, pinned shared: 516.11MB, pinned exclusive: 1.57MB; num write wait: 12003, empty entries: 57150; Cache access: miss 305519, hit 1029689, hit bytes 3.37TB, evictions 305407, eviction checks 118377075; Prefetch entries: 2, bytes: 1.57MB; Alloc megaclocks: 1457738; Allocated pages: 56885248, cached pages: 132526. Backing memory allocator: MMAP capacity 54.25MB, allocated pages 56885248, mapped pages 56885248, external mapped pages 56661845; size 1: 47 (0MB) allocated, 47 mapped; size 2: 27422 (214MB) allocated, 27422 mapped; size 4: 126 (1MB) allocated, 126 mapped; size 8: 145 (4MB) allocated, 145 mapped; size 16: 368 (23MB) allocated, 368 mapped; size 32: 320 (40MB) allocated, 320 mapped; size 64: 139 (34MB) allocated, 139 mapped; size 128: 136 (68MB) allocated, 136 mapped; size 256: 486 (486MB) allocated, 486 mapped.

Stack trace (demangled from the original mangled symbols):
  facebook::velox::process::StackTrace::StackTrace(int)
  facebook::velox::VeloxException::VeloxException(...)
  facebook::velox::detail::veloxCheckFail<facebook::velox::VeloxRuntimeError, ...>(...)
  facebook::velox::memory::MemoryPoolImpl::allocateNonContiguous(unsigned long, Allocation&, unsigned long)
  facebook::velox::StreamArena::newRange(int, ByteRange*)
  facebook::velox::ByteStream::extend(int)
  facebook::velox::ByteStream::appendBool(bool, int)
  facebook::velox::serializer::presto::(anonymous namespace)::serializeColumn(BaseVector const*, folly::Range<IndexRange const*> const&, VectorStream*)
  facebook::velox::serializer::presto::(anonymous namespace)::PrestoVectorSerializer::append(std::shared_ptr<RowVector> const&, folly::Range<IndexRange const*> const&)
  facebook::velox::exec::detail::Destination::advance(...)
  facebook::velox::exec::PartitionedOutput::getOutput()
  facebook::velox::exec::Driver::runInternal(...)
  facebook::velox::exec::Driver::run(...)
  folly::CPUThreadPoolExecutor::threadRun(...)
  (remaining folly thread-pool frames omitted)
prestodb/presto
If the URL contains unescaped characters, the url_extract_path function returns null
Bug
Describe the problem you faced: (was this working before, or is this a first try? If this worked before, what has changed recently? Provide table DDLs and EXPLAIN ANALYZE for your Presto query.)
Environment description:
- Presto version used: 0.286
- Storage (HDFS/S3/GCS): general
- Data source and connector or catalog used: no connector
- Deployment (cloud or on-prem): on-prem
- Pastebin link to the complete debug log: —
Steps to reproduce: if a URL contains certain characters, like in the following URL (…), the function returns null. This seems not entirely correct, since we see that browsers use such unescaped characters in URLs, and it also seems to be allowed by the spec in section 2.4.3, which explicitly defines some characters that can be left unescaped. Thanks in advance.
Expected behavior: not null as output.
Additional context: Trino also has similar behaviour. Stacktrace: null.
PostgreSQL:
postgres=# CREATE OR REPLACE FUNCTION url_extract_path(url text) RETURNS text AS $$
DECLARE
  path text;
BEGIN
  SELECT (regexp_match(url, 'https…'))[1] INTO path;
  RETURN path;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION
postgres=# SELECT url_extract_path('…');
 url_extract_path
 /de/test.jsp
(1 row)
MySQL:
mysql> CREATE FUNCTION url_extract_path(url TEXT) RETURNS TEXT DETERMINISTIC
BEGIN
  DECLARE path TEXT;
  SET path = SUBSTRING_INDEX(SUBSTRING_INDEX(url, '…', 1), '…', 1);
  RETURN path;
END;
Query OK, 0 rows affected (0.01 sec)
mysql> DELIMITER ;
mysql> SELECT url_extract_path('…www.example.com…');
(1 row in set (0.00 sec))
Trino and PrestoDB: …
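If url_extract_path is implemented on top of java.net.URI (which strictly enforces RFC 2396), a parse failure would naturally surface as SQL NULL. A sketch under that assumption; since the issue's example URL did not survive, the `|` character below is an assumed stand-in for one of the characters java.net.URI rejects but browsers tolerate:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UrlExtractPathNull {
    // Sketch of how a URI-based url_extract_path ends up returning null:
    // java.net.URI throws URISyntaxException on characters outside RFC 2396's
    // allowed sets, even though browsers commonly accept them unescaped.
    static String urlExtractPath(String url) {
        try {
            return new URI(url).getPath();
        } catch (URISyntaxException e) {
            return null; // parse failure surfaces as SQL NULL
        }
    }

    public static void main(String[] args) {
        System.out.println(urlExtractPath("http://example.com/de/test.jsp"));  // /de/test.jsp
        System.out.println(urlExtractPath("http://example.com/a|b/test.jsp")); // null
    }
}
```

Pre-escaping such characters (e.g. `|` as `%7C`) before parsing is one way an engine could match browser behaviour.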
prestodb/presto
CTE materialization creates wrong plan for similarly named CTEs in completely different scopes
Bug
Materialization of CTEs involves identifying their usage through CTE name and relative path. However, there are two identified bugs in this process, which have been noticed in ~0.2% of production queries. In Presto SQL, the allowance for nested and similarly named CTEs in numerous instances contributes to these issues. Observed 2 edge cases from production.

Edge case 1: local scope is not always prioritized over global scope.

WITH test AS (SELECT 1),
     query AS (
       WITH test AS (SELECT * FROM test)  -- marked
       SELECT * FROM test
     )
SELECT * FROM query JOIN query ON true

Here the marked use of test should resolve from outside of the current scope. Using CTE materialization strategy ALL gives a graph cycle error.

Edge case 2:

SELECT (WITH t AS (SELECT 1) SELECT * FROM t)
FROM (
  WITH t AS (SELECT columnA, columnB
             FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')) AS tempTable (columnA, columnB))
  SELECT * FROM t
)

Here the projection and the FROM clause each open a completely new scope creating the same CTE t; hence the CTE name key should have this baseline value in it, so that the wrong CTE is not used at the wrong place.
prestodb/presto
[native] Translate sink.max-buffer-size and driver.max-page-partitioning-buffer-size configs to Velox
Bug
Prestissimo needs to translate Presto's worker configs sink.max-buffer-size and driver.max-page-partitioning-buffer-size to the corresponding Velox configs max_arbitrary_buffer_size and max_page_partitioning_buffer_size. cc: @kewang1024 @spershin @majetideepak @aditi-pandit
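The translation asked for here is a simple name mapping at worker startup. A minimal sketch, assuming the property spellings given in the report (the mapping table and helper are illustrative, not the Prestissimo source):

```python
# Assumed mapping from Presto worker config names to Velox query-config
# names, as named in this issue; not verified against the Prestissimo code.
PRESTO_TO_VELOX = {
    "sink.max-buffer-size": "max_arbitrary_buffer_size",
    "driver.max-page-partitioning-buffer-size": "max_page_partitioning_buffer_size",
}

def translate_configs(presto_configs):
    """Return a Velox config dict for any Presto properties we know how to
    map; unrecognized properties are simply passed over."""
    return {
        PRESTO_TO_VELOX[k]: v
        for k, v in presto_configs.items()
        if k in PRESTO_TO_VELOX
    }

velox = translate_configs({"sink.max-buffer-size": "32MB", "http-server.http.port": "7777"})
assert velox == {"max_arbitrary_buffer_size": "32MB"}
```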
prestodb/presto
CTE materialization: internal error "Computation of Hive bucket hashCode is not supported" for Hive categories STRUCT and BINARY
Bug
type: java.lang.UnsupportedOperationException
message: Computation of Hive bucket hashCode is not supported for Hive category: STRUCT
suppressed: []
stack:
    com.facebook.presto.hive.HiveBucketing.hash(HiveBucketing.java:191)
    com.facebook.presto.hive.HiveBucketing.hashOfList(HiveBucketing.java:268)
    com.facebook.presto.hive.HiveBucketing.hash(HiveBucketing.java:183)
    com.facebook.presto.hive.HiveBucketing.getBucketHashCode(HiveBucketing.java:113)
    com.facebook.presto.hive.HiveBucketing.getHiveBucket(HiveBucketing.java:89)
    com.facebook.presto.hive.HiveBucketFunction.getBucket(HiveBucketFunction.java:76)
    com.facebook.presto.operator.BucketPartitionFunction.getPartition(BucketPartitionFunction.java:47)
    com.facebook.presto.operator.repartition.PartitionedOutputOperator$PagePartitioner.partitionPage(PartitionedOutputOperator.java:433)
    com.facebook.presto.operator.repartition.PartitionedOutputOperator.addInput(PartitionedOutputOperator.java:269)
    com.facebook.presto.operator.Driver.processInternal(Driver.java:438)
    com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:311)
    com.facebook.presto.operator.Driver.tryWithLock(Driver.java:732)
    com.facebook.presto.operator.Driver.processFor(Driver.java:304)
    com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1079)
    com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:165)
    com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:670)
prestodb/presto
TestJdbcResultSet.testObjectType fails in the test-other-modules CI job
Bug
Module: presto-jdbc. The test failure is:

    [ERROR] com.facebook.presto.jdbc.TestJdbcResultSet.testObjectType  Time elapsed: 1.554 s  <<< FAILURE!
    java.lang.AssertionError: expected [[1, 1001]] but found [{fst=1, snd=1001}]
        at org.testng.Assert.fail(Assert.java:110)
        at org.testng.Assert.failNotEquals(Assert.java:1413)
        at org.testng.Assert.assertEqualsImpl(Assert.java:149)
        at org.testng.Assert.assertEquals(Assert.java:131)
        at org.testng.Assert.assertEquals(Assert.java:643)
        at com.facebook.presto.jdbc.TestJdbcResultSet.lambda$testObjectType$30(TestJdbcResultSet.java:232)
        at com.facebook.presto.jdbc.TestJdbcResultSet.checkRepresentation(TestJdbcResultSet.java:250)
        at com.facebook.presto.jdbc.TestJdbcResultSet.testObjectType(TestJdbcResultSet.java:231)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:135)

It seems like this query is failing the check: the data structure returned by the query appears to be a pair, versus the expected type, an array:

    statement.execute("CREATE TYPE cat.sch.pair AS (fst integer, snd varchar)");
    checkRepresentation("CAST(ROW(1, '1001') AS cat.sch.pair)", Types.JAVA_OBJECT,
        (rs, column) -> assertEquals(rs.getObject(1), Lists.newArrayList(1, "1001")));
prestodb/presto
Security vulnerability issues with spring-boot-maven-plugin and spring-core
Bug
Module: presto-benchto-benchmarks

Dependencies:
1. org.springframework.boot:spring-boot-maven-plugin 1.2.5.RELEASE
   Vulnerabilities: CVE-2023-37460, CVE-2022-4245, CVE-2022-4244, CVE-2021-26291, CVE-2018-1002200
2. org.springframework:spring-core 4.1.6.RELEASE (child dependency of com.teradata.benchto:benchto-driver 0.4)
   Vulnerabilities: CVE-2023-20863, CVE-2023-20861, CVE-2022-22971, CVE-2022-22970, CVE-2022-22968, CVE-2018-15756, CVE-2018-1275, CVE-2018-1272, CVE-2018-1271, CVE-2018-1270, CVE-2018-1257, CVE-2016-5007, CVE-2015-5211, CVE-2015-3192, CVE-2022-23307, CVE-2022-23305, CVE-2022-23302, CVE-2021-4104, CVE-2019-17571

To resolve the above CVEs, we have to upgrade spring-boot-maven-plugin to version 2.7.18, and exclude the spring-core dependency from the Teradata benchto-driver and add spring-core 5.3.31, which is the latest version of the dependency that compiles and is compatible with Java 8.
prestodb/presto
[native] Unit tests are not being run in CI
Bug
I see the following logs, for example:

    UpdateCTestConfiguration  from :/root/project/presto-native-execution/_build/debug/DartConfiguration.tcl
    UpdateCTestConfiguration  from :/root/project/presto-native-execution/_build/debug/DartConfiguration.tcl
    Test project /root/project/presto-native-execution/_build/debug
    Constructing a list of tests
    Updating test list for fixtures
    Added 0 tests to meet fixture requirements
    Checking test dependency graph...
    Checking test dependency graph end
    No tests were found!!!
prestodb/presto
HBO does not canonicalize plan node IDs and repeated variable names
Bug
Sample JSON:

    {
      "@type" : "com.facebook.presto.sql.planner.CanonicalTableScanNode",
      "id" : "82",
      "table" : {
        "connectorId" : "hive",
        "tableHandle" : { "schemaName" : "tpch", "tableName" : "nation" },
        "layoutIdentifier" : {
          "bucketFilter" : null,
          "constraint" : { "columnDomains" : [ ] },
          "domainPredicate" : { "columnDomains" : [ ] },
          "remainingPredicate" : { "valueBlock" : "cgaaaejzvevfqvjsqvkbaaaaaae", "type" : "boolean" },
          "schemaTableName" : { "schema" : "tpch", "table" : "nation" }
        }
      },
      "outputVariables" : [
        { "@type" : "variable", "name" : "name_varchar(25)_1_regular_438", "type" : "varchar(25)" },
        { "@type" : "variable", "name" : "nationkey_bigint_0_regular_439", "type" : "bigint" },
        { "@type" : "variable", "name" : "regionkey_bigint_2_regular_440", "type" : "bigint" }
      ],
      "assignments" : {
        "name_varchar(25)_1_regular_438" : { "name" : "name", "hiveType" : "varchar(25)", "typeSignature" : "varchar(25)", "hiveColumnIndex" : 1, "columnType" : "REGULAR", "requiredSubfields" : [ ] },
        "nationkey_bigint_0_regular_439" : { "name" : "nationkey", "hiveType" : "bigint", "typeSignature" : "bigint", "hiveColumnIndex" : 0, "columnType" : "REGULAR", "requiredSubfields" : [ ] },
        "regionkey_bigint_2_regular_440" : { "name" : "regionkey", "hiveType" : "bigint", "typeSignature" : "bigint", "hiveColumnIndex" : 2, "columnType" : "REGULAR", "requiredSubfields" : [ ] }
      }
    }
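The sample above shows output variable names carrying trailing per-query numeric suffixes (438, 439, 440); because those differ between runs of the same query, the plan never hashes to a stable history key. A hedged sketch of the missing canonicalization step (the helper is hypothetical, not Presto's implementation):

```python
import re

def canonicalize_variable(name):
    """Strip the trailing numeric disambiguation suffix from a plan
    variable name (e.g. 'nationkey_439' -> 'nationkey') so that two runs
    of the same query produce the same canonical plan text. Hypothetical
    illustration of the canonicalization this issue says is missing."""
    return re.sub(r"_\d+$", "", name)

assert canonicalize_variable("nationkey_439") == "nationkey"
assert canonicalize_variable("regionkey_440") == "regionkey"
# Names without a suffix are left alone:
assert canonicalize_variable("name") == "name"
```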
prestodb/presto
[native] CMake failure: Objects of target "duckdb" referenced but no such target exists
Bug
Trying to build Prestissimo and getting this error; wondering if there is a fix or workaround:

    CMake Error at presto_cpp/main/types/tests/CMakeLists.txt:52 (target_link_libraries):
      Error evaluating generator expression:
      Objects of target "duckdb" referenced but no such target exists.

cc: @majetideepak @pedroerp @assignUser
prestodb/presto
Can't use a NooBaa cache bucket without cached data to run TPC-DS benchmarks
Bug
NooBaa is used to create a cache bucket caching TPC-DS SF1000 benchmark data from a remote S3 bucket, and Presto is used to query the tables in the cache bucket:

    CREATE SCHEMA benchmarkcache.sf1000cache
    WITH (location = 's3a://benchmark-cache-6a8dc04a-6927-42e5-aad6-627b4b879faa/sf1000');

    CREATE TABLE benchmarkcache.sf1000cache.customer (
       c_customer_sk bigint,
       c_customer_id char(16),
       c_current_cdemo_sk bigint,
       c_current_hdemo_sk bigint,
       c_current_addr_sk bigint,
       c_first_shipto_date_sk bigint,
       c_first_sales_date_sk bigint,
       c_salutation char(10),
       c_first_name char(20),
       c_last_name char(30),
       c_preferred_cust_flag char(1),
       c_birth_day integer,
       c_birth_month integer,
       c_birth_year integer,
       c_birth_country varchar(20),
       c_login char(13),
       c_email_address char(50),
       c_last_review_date_sk bigint
    )
    WITH (
       external_location = 's3a://benchmark-cache-6a8dc04a-6927-42e5-aad6-627b4b879faa/sf1000/customer',
       format = 'PARQUET'
    );

Presto queries data through the cache bucket created by NooBaa, which caches data from a remote S3 bucket in the cloud. However, a query fails if there is no cached data in the cache:

    SELECT * FROM benchmarkcache.sf1000cache.customer;

    Error reading from s3a://benchmark-cache-6a8dc04a-6927-42e5-aad6-627b4b879faa/sf1000/customer/abc7a7ad-1d52-4d61-9b7b-b2ae6cfc18e7.parquet at position 205676755

After I manually download the object from the cache bucket to warm up the cache, the query succeeds.

Your environment:
- Presto version used: v0.282
- Storage (HDFS/S3/GCS): S3
- Data source and connector used: Parquet in an S3 bucket
- Deployment (cloud or on-prem): on-prem

Expected behavior: even when there is no cached data in the cache, the query should succeed, as NooBaa will fill the data into the cache.

Current behavior: the query fails with "Error reading from s3a://benchmark-cache-6a8dc04a-6927-42e5-aad6-627b4b879faa/sf1000/customer/abc7a7ad-1d52-4d61-9b7b-b2ae6cfc18e7.parquet at position 205676755".

Possible solution: I suspected the NooBaa cache bucket does not support object GET with the Range option, and made some tests, but it does support getting objects with the Range option. Are there any other uncommon S3 operations used in Presto queries, or any ideas about how to fix this error? Thanks for your help.
prestodb/presto
The Presto website is unstable
Bug
The Presto website is unstable; I often cannot access it.
prestodb/presto
404 on Docker Hub; cannot pull images
Bug
Docker Hub is returning 404 on the image page, and I get a similar error when pulling the image.

Your environment: I don't think it is related to my environment in any way.

Expected behavior: the image should be found and pulled.

Current behavior: 404 or "access denied" error.

Steps to reproduce:

    docker image pull ahanaio/prestodb-sandbox:0.284
prestodb/presto
Error is raised when selecting a literal array
Bug
Your environment:
- Presto version used: latest master branch, commit e4d8fc142e804923e7d99ae868fe646dfd62a734

Current behavior:

    presto:sf1> SELECT ARRAY[1];
    Query is gone (server restarted?)

Steps to reproduce:
1. Start TpchQueryRunner.
2. Use the Presto CLI to connect to the running server.
3. Execute SELECT ARRAY[1];
prestodb/presto
[native] Missing decimal signature for aggregate function approx_distinct in Velox
Bug
Queries with the aggregate function approx_distinct fail with a missing decimal signature in Velox.

Current behavior:

    SELECT approx_distinct(CAST(custkey AS DECIMAL(18, 0))) FROM orders;

Output against a C++ worker:

    Query 20231106_150303_00058_nbggi failed: Aggregate function signature is not supported: presto.default.approx_distinct(DECIMAL(18, 0)).
    Supported signatures:
      (boolean) -> varbinary -> bigint
      (boolean, double) -> varbinary -> bigint
      (tinyint) -> varbinary -> bigint
      (tinyint, double) -> varbinary -> bigint
      (smallint) -> varbinary -> bigint
      (smallint, double) -> varbinary -> bigint
      (integer) -> varbinary -> bigint
      (integer, double) -> varbinary -> bigint
      (bigint) -> varbinary -> bigint
      (bigint, double) -> varbinary -> bigint
      (real) -> varbinary -> bigint
      (real, double) -> varbinary -> bigint
      (double) -> varbinary -> bigint
      (double, double) -> varbinary -> bigint
      (varchar) -> varbinary -> bigint
      (varchar, double) -> varbinary -> bigint
      (timestamp) -> varbinary -> bigint
      (timestamp, double) -> varbinary -> bigint
      (date) -> varbinary -> bigint
      (date, double) -> varbinary -> bigint
prestodb/presto
Security vulnerabilities reported by yarn audit for the Presto UI
Bug
When running yarn audit against the Presto UI (presto-main/src/main/resources/webapp/src), it reports two issues:

- Critical: @babel/traverse vulnerable to arbitrary code execution when compiling specifically crafted malicious code
- High: d3-color vulnerable to ReDoS

The first one impacts the machine running Babel to compile the code and generate the final UI code; it may also impact developers who are developing the Presto UI. Although there is no path.evaluate or path.evaluateTruthy, it is still good to upgrade to a safe version. For the second one, the d3 library must also be updated, and it may need a regression test.

Your environment:
- Presto version used: 0.285 from the current main branch

Expected behavior: when running yarn audit, there should be no vulnerabilities, or at least no critical ones.

Current behavior: 1 critical vulnerability and 1 high vulnerability.

Possible solution: upgrade @babel/traverse and relevant packages, and d3-color and relevant packages.

Steps to reproduce:
1. cd presto-main/src/main/resources/webapp/src
2. yarn audit
3. It reports 10 vulnerabilities found, but underneath there are 2 distinct vulnerabilities.
prestodb/presto
CAST from varchar doesn't trim leading and trailing whitespace but fails instead
Bug
The current behavior of CAST from varchar to an integral type is to throw an exception if the varchar is valid but contains leading or trailing whitespace. For example, SELECT CAST(' 100 ' AS bigint) throws the following exception:

    com.facebook.presto.spi.PrestoException: Cannot cast ' 100 ' to BIGINT
        at com.facebook.presto.type.VarcharOperators.castToBigint(VarcharOperators.java:185)

This is different from the behavior described in ANSI SQL for the <cast specification>: the declared type of the result is TD; let SD be the declared type of the <cast operand> and SV its value. If SD is a character string type, then SV is replaced by SV with any leading or trailing <space>s removed. Case: i) if SV does not comprise a <signed numeric literal>, then an exception condition is raised; ii) otherwise, let LT be that <signed numeric literal>; the <cast specification> is equivalent to CAST(LT AS TD).

Accordingly, TRY_CAST(' 100 ' AS bigint) returns null rather than the actual valid integral number. Should Presto change the above cast behavior to accept and trim leading and trailing whitespace, and be compliant with ANSI SQL?

The exception is thrown because the implementation of CAST from varchar to integral types directly delegates to Java (e.g., VarcharOperators.java L176-L187), and Java's Long.parseLong(String) does not accept whitespace or any character other than decimal digits and sign characters.

Your environment:
- Presto version used:

Expected behavior: casting of varchar trims leading and trailing whitespace.

Current behavior: casting of varchar to integral types throws an exception if the varchar contains leading or trailing whitespace.

Possible solution: casting of varchar trims leading and trailing whitespace before delegating to Java's Long.parseLong.

Context: throwing an exception gives Presto different semantics for the above cast than ANSI SQL and other mainstream engines and libraries that comply with ANSI SQL (e.g., MySQL, Folly, etc.).
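The trim-then-parse behavior the report asks for can be sketched in a few lines. As it happens, Python's int() already trims surrounding whitespace, much like the proposed trimming before Long.parseLong; the helper below is an illustration of the ANSI semantics, not Presto code:

```python
def try_cast_bigint(value):
    """ANSI-style cast of a string to an integral value: trim leading and
    trailing whitespace first, and return None (like TRY_CAST) when the
    remainder is not a signed numeric literal. Illustrative sketch only."""
    try:
        return int(value.strip())
    except ValueError:
        return None

assert try_cast_bigint(" 100 ") == 100   # whitespace trimmed, not an error
assert try_cast_bigint("-42") == -42     # sign characters still accepted
assert try_cast_bigint("12x") is None    # genuinely invalid input stays null
```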
prestodb/presto
[native] Unused node.memory_gb config property in Prestissimo
Bug
node.memory_gb: as per the definition in the README (Prestissimo runtime configuration and settings), this controls the amount of system memory set for the PrestoServer. But in practice this config is not used at all. If there is no use of this config, is it safe to clean up the related code in presto-native-execution/presto_cpp/main/common/Configs.cpp? cc: @xiaoxmeng @mbasmanova

Your environment:
- Presto version used:
- Storage (HDFS/S3/GCS):
- Data source and connector used:
- Deployment (cloud or on-prem):
- Pastebin link to the complete debug logs:

Expected behavior:
Current behavior:
Possible solution:
Steps to reproduce:
Screenshots (if appropriate):
Context:
prestodb/presto
[native] TPC-DS q43/q95 SF 10k: "Too many pages (109870) to allocate" HashTable
Bug
q43 json veloxruntimeerror too many page 109870 to allocate the number of unit 429 at size class of 256 exceed the pagerun limit 65535 at unknown 0 zn8facebook5velox7process10stacktracec1ei unknown source at unknown 1 zn8facebook5velox14veloxexceptionc2epkcms3 st17basic string viewicst11char traitsicees7 s7 s7 bns1 4typees7 unknown source at unknown 2 zn8facebook5velox6detail14veloxcheckfailins0 17veloxruntimeerrorerknst7 cxx1112basic stringicst11char traitsicesaiceeeeevrkns1 18veloxcheckfailargset0 unknown source at unknown 3 znk8facebook5velox6memory15memoryallocator14allocationsizeemm unknown source at unknown 4 zn8facebook5velox6memory13mmapallocator33allocatenoncontiguouswithoutretryemrns1 10allocationest8functionifvlbeem unknown source at unknown 5 zn8facebook5velox6memory15memoryallocator21allocatenoncontiguousemrns1 10allocationest8functionifvlbeem unknown source at unknown 6 zn8facebook5velox6memory14memorypoolimpl21allocatenoncontiguousemrns1 10allocationem unknown source at unknown 7 zn8facebook5velox4exec13rowpartitionsc2eirns0 6memory10memorypoole unknown source at unknown 8 zn8facebook5velox4exec12rowcontainer19createrowpartitionserns0 6memory10memorypoole unknown source at unknown 9 zn8facebook5velox4exec9hashtableilb1ee17paralleljoinbuildev unknown source at unknown 10 zn8facebook5velox4exec9hashtableilb1ee6rehasheb unknown source at unknown 11 zn8facebook5velox4exec9hashtableilb1ee9checksizeeib unknown source at unknown 12 zn8facebook5velox4exec9hashtableilb1ee11sethashmodeens1 13basehashtable8hashmodeei unknown source at unknown 13 zn8facebook5velox4exec9hashtableilb1ee14decidehashmodeeib unknown source at unknown 14 zn8facebook5velox4exec9hashtableilb1ee16preparejointableest6vectorist10unique ptrins1 13basehashtableest14default deleteis6 eesais9 eepn5folly8executore unknown source at unknown 15 zn8facebook5velox4exec9hashbuild15finishhashbuildev unknown source at unknown 16 zn8facebook5velox4exec9hashbuild19nomoreinputinternalev unknown source at 
unknown 17 zn8facebook5velox4exec6driver11runinternalerst10shared ptris2 ers3 ins1 13blockingstateeers3 ins0 9rowvectoree unknown source at unknown 18 zn8facebook5velox4exec6driver3runest10shared ptris2 e unknown source at unknown 19 zn5folly6detail8function14functiontraitsifvvee9callsmallizn8facebook5velox4exec6driver7enqueueest10share ptris9 eeulve eevrns1 4datae unknown source at unknown 20 zn5folly6detail8function14functiontraitsifvveeclev unknown source at unknown 21 zn5folly18threadpoolexecutor7runtaskerkst10share ptrins0 6threadeeons0 4taske unknown source at unknown 22 zn5folly21cputhreadpoolexecutor9threadrunest10share ptrin 18threadpoolexecutor6threadee unknown source at unknown 23 zst13 invoke implivrmn5folly18threadpoolexecutorefvst10share ptrins1 6threadeeerps1 jrs4 eet st21 invoke memfun derefot0 ot1 dpot2 unknown source at unknown 24 zst8 invokeirmn5folly18threadpoolexecutorefvst10share ptrins1 6threadeeejrps1 rs4 eenst15 invoke resultit jdpt0 ee4typeeosc dposd unknown source at unknown 25 znst5 bindifmn5folly18threadpoolexecutorefvst10share ptrins1 6threadeeeps1 s4 ee6 callivjejlm0elm1eeeet ost5tupleijdpt0 eest12 index tupleijxspt1 eee unknown source at unknown 26 znst5 bindifmn5folly18threadpoolexecutorefvst10share ptrins1 6threadeeeps1 s4 eeclijeveet0 dpot unknown source at unknown 27 zn5folly6detail8function14functiontraitsifvvee9callsmallist5 bindifmn 18threadpoolexecutorefvst10shared ptrins7 6threadeeeps7 sa eeeevrns1 4datae unknown source at unknown 28 0x0000000000000000 unknown source at unknown 29 start thread unknown source at unknown 30 clone unknown source
prestodb/presto
[native] TPC-DS q10 SF 10k: "Exceeded memory pool cap"; takes much more memory to run than Java
Bug
q10 json veloxruntimeerror exceed memory pool cap of 217 00 gb with max 217 00 gb when request 20 00 mb memory manager cap be 217 00 gb requestor op 0 0 12 tablescan with current usage 48 41 mb 20231030 200214 00010 7ckus usage 217 00 gb peak 217 00 gb task 20231030 200214 00010 7ckus 2 0 2 0 usage 7 76 gb peak 8 61 gb node root usage 106 00 mb peak 294 00 mb op root 0 13 partitionedoutput usage 9 23 mb peak 22 30 mb op root 0 12 partitionedoutput usage 128 00 kb peak 20 91 mb op root 0 11 partitionedoutput usage 4 78 mb peak 21 83 mb op root 0 9 partitionedoutput usage 128 00 kb peak 21 28 mb op root 0 14 partitionedoutput usage 9 35 mb peak 20 12 mb op root 0 8 partitionedoutput usage 10 79 mb peak 19 99 mb op root 0 10 partitionedoutput usage 152 00 kb peak 30 80 mb op root 0 7 partitionedoutput usage 5 94 mb peak 21 78 mb op root 0 6 partitionedoutput usage 12 05 mb peak 21 77 mb op root 0 1 partitionedoutput usage 128 00 kb peak 20 70 mb op root 0 15 partitionedoutput usage 15 30 mb peak 22 43 mb op root 0 2 partitionedoutput usage 7 70 mb peak 20 05 mb op root 0 3 partitionedoutput usage 120 00 kb peak 23 98 mb op root 0 0 partitionedoutput usage 9 00 mb peak 19 37 mb op root 0 4 partitionedoutput usage 3 97 mb peak 21 00 mb op root 0 5 partitionedoutput usage 8 07 mb peak 21 23 mb node 2615 usage 16 00 mb peak 726 00 mb op 2615 0 15 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 13 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 12 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 10 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 9 partialaggregation usage 412 38 kb peak 65 43 mb op 2615 0 8 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 7 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 6 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 1 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 14 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 4 partialaggregation 
usage 348 38 kb peak 65 43 mb op 2615 0 5 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 0 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 11 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 2 partialaggregation usage 348 38 kb peak 65 43 mb op 2615 0 3 partialaggregation usage 348 38 kb peak 65 43 mb node 10 usage 16 00 mb peak 16 00 mb op 10 0 15 filterproject usage 97 50 kb peak 291 50 kb op 10 0 13 filterproject usage 97 50 kb peak 291 50 kb op 10 0 12 filterproject usage 65 00 kb peak 291 50 kb op 10 0 11 filterproject usage 97 50 kb peak 291 50 kb op 10 0 10 filterproject usage 97 50 kb peak 291 50 kb op 10 0 9 filterproject usage 98 00 kb peak 291 50 kb op 10 0 8 filterproject usage 65 50 kb peak 291 50 kb op 10 0 14 filterproject usage 97 50 kb peak 291 50 kb op 10 0 7 filterproject usage 65 00 kb peak 291 50 kb op 10 0 2 filterproject usage 97 50 kb peak 291 50 kb op 10 0 1 filterproject usage 97 50 kb peak 291 50 kb op 10 0 3 filterproject usage 65 50 kb peak 291 50 kb op 10 0 0 filterproject usage 48 75 kb peak 291 50 kb op 10 0 4 filterproject usage 97 50 kb peak 291 50 kb op 10 0 6 filterproject usage 97 50 kb peak 291 50 kb op 10 0 5 filterproject usage 97 50 kb peak 291 50 kb node 2279 usage 28 00 mb peak 76 00 mb op 2279 0 15 exchange usage 1 09 mb peak 1 33 mb op 2279 0 3 exchange usage 1 02 mb peak 1 34 mb op 2279 0 4 exchange usage 1 10 mb peak 1 34 mb op 2279 0 10 exchange usage 1016 00 kb peak 1 23 mb op 2279 0 2 exchange usage 1 10 mb peak 1 34 mb op 2279 0 6 exchange usage 1 05 mb peak 1 29 mb op 2279 0 0 exchange usage 1 04 mb peak 1 28 mb op 2279 0 5 exchange usage 1 09 mb peak 1 33 mb op 2279 0 7 exchange usage 1015 50 kb peak 1 31 mb op 2279 0 1 exchange usage 1 04 mb peak 1 28 mb op 2279 0 12 exchange usage 1015 00 kb peak 1 31 mb op 2279 0 8 exchange usage 1000 00 kb peak 1 30 mb op 2279 0 9 exchange usage 1 02 mb peak 1 26 mb op 2279 0 13 exchange usage 1 09 mb peak 1 33 mb op 2279 0 11 exchange usage 1 
04 mb peak 1 28 mb op 2279 0 14 exchange usage 1 02 mb peak 1 26 mb node 1155 usage 16 00 mb peak 16 00 mb op 1155 0 13 filterproject usage 2 25 kb peak 12 88 kb op 1155 0 11 filterproject usage 1 75 kb peak 12 75 kb op 1155 0 12 filterproject usage 3 25 kb peak 12 75 kb op 1155 0 9 filterproject usage 3 25 kb peak 12 75 kb op 1155 0 8 filterproject usage 3 25 kb peak 12 75 kb op 1155 0 10 filterproject usage 640b peak 12 75 kb op 1155 0 7 filterproject usage 4 25 kb peak 12 75 kb op 1155 0 6 filterproject usage 1 75 kb peak 12 75 kb op 1155 0 14 filterproject usage 4 25 kb peak 12 75 kb op 1155 0 1 filterproject usage 3 25 kb peak 12 75 kb op 1155 0 0 filterproject usage 6 25 kb peak 12 75 kb op 1155 0 2 filterproject usage 3 25 kb peak 12 75 kb op 1155 0 3 filterproject usage 3 25 kb peak 12 75 kb op 1155 0 15 filterproject usage 1 25 kb peak 12 75 kb op 1155 0 4 filterproject usage 1 75 kb peak 12 69 kb op 1155 0 5 filterproject usage 2 25 kb peak 12 75 kb node 2045 usage 32 00 mb peak 32 00 mb op 2045 3 14 hashbuild usage 68 00 kb peak 68 00 kb op 2045 3 13 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 12 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 11 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 10 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 15 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 7 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 3 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 8 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 2 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 1 hashbuild usage 64 00 kb peak 64 00 kb op 2045 3 0 hashbuild usage 64 00 kb peak 64 00 kb op 2045 0 4 hashprobe usage 12 50 kb peak 14 75 kb op 2045 0 10 hashprobe usage 12 31 kb peak 14 75 kb op 2045 3 4 hashbuild usage 64 00 kb peak 64 00 kb op 2045 0 3 hashprobe usage 12 62 kb peak 14 75 kb op 2045 0 8 hashprobe usage 12 88 kb peak 14 75 kb op 2045 0 2 hashprobe usage 12 88 kb peak 14 75 kb op 2045 0 7 hashprobe usage 13 12 kb peak 14 75 kb op 2045 3 
6 hashbuild usage 64 00 kb peak 64 00 kb op 2045 0 12 hashprobe usage 12 88 kb peak 14 75 kb op 2045 0 0 hashprobe usage 7 12 kb peak 16 00 kb op 2045 0 14 hashprobe usage 12 88 kb peak 14 75 kb op 2045 3 5 hashbuild usage 64 00 kb peak 64 00 kb op 2045 0 1 hashprobe usage 12 62 kb peak 16 00 kb op 2045 3 9 hashbuild usage 64 00 kb peak 64 00 kb op 2045 0 15 hashprobe usage 12 38 kb peak 14 75 kb op 2045 0 6 hashprobe usage 12 50 kb peak 16 00 kb op 2045 0 9 hashprobe usage 12 62 kb peak 16 00 kb op 2045 0 5 hashprobe usage 12 50 kb peak 16 00 kb op 2045 0 11 hashprobe usage 12 50 kb peak 14 75 kb op 2045 0 13 hashprobe usage 12 62 kb peak 16 00 kb node 4 usage 7 55 gb peak 7 70 gb op 4 1 2 hashbuild usage 7 53 gb peak 7 70 gb op 4 0 2 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 10 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 6 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 0 hashprobe usage 18 25 kb peak 84 00 kb op 4 0 5 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 1 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 7 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 4 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 8 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 3 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 9 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 13 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 11 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 12 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 14 hashprobe usage 24 25 kb peak 84 00 kb op 4 0 15 hashprobe usage 24 25 kb peak 84 00 kb task 20231030 200214 00010 7ckus 3 0 7 0 usage 1 14 gb peak 1 54 gb node root usage 73 00 mb peak 178 00 mb op root 0 13 partitionedoutput usage 4 02 mb peak 15 56 mb op root 0 12 partitionedoutput usage 2 80 mb peak 17 61 mb op root 0 11 partitionedoutput usage 4 00 mb peak 19 95 mb op root 0 9 partitionedoutput usage 3 35 mb peak 16 04 mb op root 0 14 partitionedoutput usage 5 00 mb peak 18 16 mb op root 0 8 partitionedoutput usage 3 52 mb peak 16 13 mb op root 0 10 
partitionedoutput usage 5 00 mb peak 16 32 mb op root 0 7 partitionedoutput usage 3 28 mb peak 18 27 mb op root 0 6 partitionedoutput usage 1 98 mb peak 16 06 mb op root 0 1 partitionedoutput usage 6 47 mb peak 41 28 mb op root 0 15 partitionedoutput usage 3 77 mb peak 18 17 mb op root 0 2 partitionedoutput usage 6 37 mb peak 16 31 mb op root 0 3 partitionedoutput usage 3 39 mb peak 17 97 mb op root 0 0 partitionedoutput usage 2 41 mb peak 17 59 mb op root 0 4 partitionedoutput usage 3 56 mb peak 18 87 mb op root 0 5 partitionedoutput usage 7 21 mb peak 18 18 mb node 0 usage 1 07 gb peak 1 44 gb op 0 0 13 tablescan usage 86 41 mb peak 96 05 mb op 0 0 12 tablescan usage 48 41 mb peak 96 05 mb op 0 0 11 tablescan usage 95 08 mb peak 96 05 mb op 0 0 10 tablescan usage 95 08 mb peak 96 05 mb op 0 0 9 tablescan usage 47 08 mb peak 96 05 mb op 0 0 8 tablescan usage 86 41 mb peak 97 05 mb op 0 0 2 tablescan usage 40 41 mb peak 96 05 mb op 0 0 15 tablescan usage 26 41 mb peak 96 05 mb op 0 0 1 tablescan usage 47 32 mb peak 96 05 mb op 0 0 14 tablescan usage 95 08 mb peak 96 05 mb op 0 0 0 tablescan usage 70 41 mb peak 96 05 mb op 0 0 3 tablescan usage 93 57 mb peak 97 05 mb op 0 0 5 tablescan usage 46 41 mb peak 96 05 mb op 0 0 7 tablescan usage 95 08 mb peak 96 05 mb op 0 0 6 tablescan usage 49 44 mb peak 96 05 mb op 0 0 4 tablescan usage 46 41 mb peak 96 05 mb task 20231030 200214 00010 7ckus 1 0 2 0 usage 208 09 gb peak 208 09 gb node 12 usage 187 67 gb peak 187 67 gb op 12 0 2 aggregation usage 187 67 gb peak 187 67 gb node 2611 usage 2 00 mb peak 17 00 mb op 2611 1 10 localpartition usage 144 00 kb peak 1 56 mb op 2611 1 13 localpartition usage 32 00 kb peak 1 50 mb node 38 usage 6 84 gb peak 6 91 gb op 38 2 2 hashbuild usage 6 84 gb peak 6 91 gb node 59 usage 13 54 gb peak 13 69 gb op 59 4 2 hashbuild usage 13 53 gb peak 13 68 gb node 2617 usage 35 00 mb peak 135 00 mb op 2617 1 15 exchange usage 1 46 mb peak 19 20 mb op 2617 1 4 exchange usage 1 46 mb peak 7 05 mb 
```
op.2617.1.3.Exchange     usage 1.46MB    peak 5.35MB
op.2617.1.5.Exchange     usage 1.46MB    peak 18.35MB
op.2617.1.12.Exchange    usage 998.00KB  peak 13.61MB
op.2617.1.1.Exchange     usage 1.46MB    peak 10.71MB
op.2617.1.14.Exchange    usage 1.46MB    peak 20.18MB
op.2617.1.8.Exchange     usage 1.46MB    peak 26.02MB
op.2617.1.9.Exchange     usage 1.46MB    peak 10.21MB
exchange.client.2617.1   usage 2.00MB    peak 68.10MB
op.2617.1.0.Exchange     usage 1.46MB    peak 32.10MB
op.2617.1.7.Exchange     usage 1.46MB    peak 13.24MB
op.2617.1.6.Exchange     usage 998.00KB  peak 16.29MB
op.2617.1.13.Exchange    usage 1.87MB    peak 25.29MB
op.2617.1.10.Exchange    usage 4.13MB    peak 26.51MB
op.2617.1.2.Exchange     usage 1.46MB    peak 11.90MB
op.2617.1.11.Exchange    usage 1.46MB    peak 21.40MB

Top 10 leaf memory pool usages:
op.12.0.2.Aggregation    usage 187.67GB  peak 187.67GB
op.59.4.2.HashBuild      usage 13.53GB   peak 13.68GB
op.4.1.2.HashBuild       usage 7.53GB    peak 7.70GB
op.38.2.2.HashBuild      usage 6.84GB    peak 6.91GB
op.0.0.7.TableScan       usage 95.08MB   peak 96.05MB
op.0.0.11.TableScan      usage 95.08MB   peak 96.05MB
op.0.0.14.TableScan      usage 95.08MB   peak 96.05MB
op.0.0.10.TableScan      usage 95.08MB   peak 96.05MB
op.0.0.3.TableScan       usage 93.57MB   peak 97.05MB
op.0.0.8.TableScan       usage 86.41MB   peak 97.05MB
```

Failing split: `hive: s3a://presto-workload/tpcds-sf10000-parquet/store_sales/20231003_073445_00024_jvszt_1c1d1e06-be81-4da3-96ae-a42d22953301 106837311488 + 268435456`, task `20231030_200214_00010_7ckus.3.0.7.0`.

Stack trace (demangled, abbreviated):

```
facebook::velox::process::StackTrace::StackTrace(int)
facebook::velox::VeloxException::VeloxException(...)
facebook::velox::detail::veloxCheckFail<facebook::velox::VeloxRuntimeError, ...>(...)
facebook::velox::memory::MemoryPoolImpl::incrementReservationThreadSafe(MemoryPool*, unsigned long)  (x4)
facebook::velox::memory::MemoryPoolImpl::reserveThreadSafe(unsigned long, bool)
facebook::velox::memory::MemoryPoolImpl::reserve(unsigned long, bool)
facebook::velox::memory::MmapAllocator::growContiguousWithoutRetry(...)
facebook::velox::memory::MemoryAllocator::growContiguous(...)
facebook::velox::memory::MemoryPoolImpl::growContiguous(...)
facebook::velox::memory::AllocationPool::growLastAllocation()
facebook::velox::memory::AllocationPool::allocateFixed(unsigned long, int)
facebook::velox::dwio::common::BufferedInput::load(...)
facebook::velox::parquet::StructColumnReader::loadRowGroup(...)
facebook::velox::parquet::ReaderBase::scheduleRowGroups(...)
facebook::velox::parquet::ParquetRowReader::advanceToNextRowGroup()
facebook::velox::parquet::ParquetRowReader::ParquetRowReader(...)
facebook::velox::parquet::ParquetReader::createRowReader(...) const
facebook::velox::connector::hive::SplitReader::prepareSplit(...)
facebook::velox::connector::hive::HiveDataSource::addSplit(...)
facebook::velox::exec::TableScan::getOutput()
facebook::velox::exec::Driver::runInternal(...)
facebook::velox::exec::Driver::run(...)
folly::CPUThreadPoolExecutor::threadRun(...) and thread-pool frames
start_thread / clone
```
prestodbpresto
Bug in from_unixtime(double)
Bug
Your environment: Presto native query runner.

Steps to reproduce:

```sql
CREATE TABLE test_insert_event_time (name varchar, event_time timestamp, ds varchar)
COMMENT 'none' WITH (format = 'DWRF');

CREATE TABLE test_select_event_time (name varchar, event_time double, ds varchar)
COMMENT 'none' WITH (format = 'DWRF');

INSERT INTO test_select_event_time VALUES ('k1', 1698300098.9999363, '2121-11-11');

INSERT INTO test_insert_event_time (event_time)
SELECT from_unixtime(event_time) FROM test_select_event_time;
```

Expected behavior: the final INSERT query should succeed. Current behavior: the INSERT query fails.
prestodbpresto
More than 5 concurrent INSERTs fail via Iceberg connector
Bug
An earlier change aimed to fix concurrent inserts via the Iceberg connector in the case of the Hive metastore. However, the number of attempts to insert is controlled by the table property `commit.retry.num-retries`, which is 4 by default. To fix concurrent inserts to an Iceberg table there are two scenarios:

1. New table creation: this table property can be added with a custom value at the time of creating a new table; that is what the earlier change addresses.
2. Existing table: `ALTER TABLE ... SET TBLPROPERTIES` is not supported via Presto currently, unlike other engines such as Spark, which support `ALTER TABLE ... SET TBLPROPERTIES`.

Your environment:
- Presto version used: 0.284
- Storage (HDFS/S3/GCS): S3, HDFS
- Data source and connector used: Iceberg
- Deployment (cloud or on-prem): local

Expected behavior: any number of concurrent inserts should be able to go through without any error.

Current behavior: the exception below is thrown for more than 5 concurrent INSERT statements:

```
org.apache.iceberg.exceptions.CommitFailedException: metadata location hdfs://localhost:9000/user/hive/warehouse/iceberg_table1/metadata/00072-57523571-bc74-43f8-a677-aa2fceeb317e.metadata.json
is not same as the table metadata location hdfs://localhost:9000/user/hive/warehouse/iceberg_table1/metadata/00073-b9560ca4-b815-4de3-b4e8-bc9d6d7bb00d.metadata.json for default.iceberg_table1
    at com.facebook.presto.iceberg.HiveTableOperations.commit(HiveTableOperations.java:275)
    at org.apache.iceberg.BaseTransaction.lambda$commitSimpleTransaction$5(BaseTransaction.java:422)
    at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
    at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
    at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
    at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
    at org.apache.iceberg.BaseTransaction.commitSimpleTransaction(BaseTransaction.java:418)
    at org.apache.iceberg.BaseTransaction.commitTransaction(BaseTransaction.java:302)
    at com.facebook.presto.iceberg.IcebergAbstractMetadata.finishInsert(IcebergAbstractMetadata.java:270)
    at com.facebook.presto.spi.connector.classloader.ClassLoaderSafeConnectorMetadata.finishInsert(ClassLoaderSafeConnectorMetadata.java:452)
    at com.facebook.presto.metadata.MetadataManager.finishInsert(MetadataManager.java:858)
    at com.facebook.presto.sql.planner.LocalExecutionPlanner.lambda$createTableFinisher$3(LocalExecutionPlanner.java:3392)
    at com.facebook.presto.operator.TableFinishOperator.getOutput(TableFinishOperator.java:289)
    at com.facebook.presto.operator.Driver.processInternal(Driver.java:428)
    at com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:311)
    at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:732)
    at com.facebook.presto.operator.Driver.processFor(Driver.java:304)
    at com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1079)
    at com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:165)
    at com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:603)
    at com.facebook.presto.$gen.Presto_null_testversion____20231010_093550_1.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
```

Possible solution: already explained above.

Steps to reproduce: set up JMeter to connect to a local Presto server and run INSERT statements with more than 5 threads.

Context: real-world pipelines can have more than 5 concurrent inserts at a given point in time.
prestodbpresto
Update SnakeYAML to 2.2
Bug
prestodbpresto
Avoid using inheritance to implement IcebergParquetFileWriter
Bug
By removing `extends ParentClass`, `MyClass` is no longer a subclass of `ParentClass` and will not inherit any of its methods or fields. By adding `private DataType myVariable`, you are declaring a new private variable named `myVariable` of type `DataType` in `MyClass`. The `private` keyword means that `myVariable` can only be accessed within `MyClass`:

```java
public class MyClass {
    private DataType myVariable;
    // ... class contents ...
}
```

Removing `extends` and adding a private variable can have several benefits, depending on the context and design of your application:

- Encapsulation: by making a variable private, you're encapsulating it within the class. This means it can't be directly accessed from outside the class, which can prevent bugs and make your code easier to understand and maintain.
- Reduced complexity: by removing the `extends` keyword, you're removing inheritance from the class. This can make the class easier to understand and test, since you don't have to worry about the behavior of the parent class or about overriding or extending its methods.
- Increased modularity: by removing inheritance and adding private variables, you're making your class more self-contained. This can make it easier to reuse the class in different parts of your application, since the class has fewer dependencies on other classes.
- Data hiding: the private variable is not accessible directly from outside the class; it can only be accessed through methods (getters and setters) of the class. This provides control over what parts of the data can be accessed and how.
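The composition approach described above can be sketched as follows. This is a minimal illustration, not the actual Presto classes: the `FileWriter` interface and the method names are hypothetical, chosen only to show the delegation pattern.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal writer abstraction, for illustration only.
interface FileWriter {
    void append(String row);
    List<String> rows();
}

class ParquetFileWriter implements FileWriter {
    private final List<String> rows = new ArrayList<>();
    @Override public void append(String row) { rows.add(row); }
    @Override public List<String> rows() { return rows; }
}

class IcebergParquetFileWriter implements FileWriter {
    // Composition: the delegate is a private field, not a superclass,
    // so IcebergParquetFileWriter exposes only what it chooses to forward.
    private final FileWriter delegate = new ParquetFileWriter();

    @Override public void append(String row) { delegate.append(row); }
    @Override public List<String> rows() { return delegate.rows(); }
}
```

The Iceberg-specific class can now evolve independently of `ParquetFileWriter`'s internals, and callers depend only on the shared interface.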
prestodbpresto
Make ParquetFileWriter final
Bug
Making a class `final` in Java means that the class cannot be subclassed. This is particularly useful when you want to create a class with methods that should not be overridden, or to prevent inheritance for security or design reasons. Here's what it would look like to make the `ParquetFileWriter` class final:

```java
public final class ParquetFileWriter {
    // ... class contents ...
}
```

In this case no other class can extend `ParquetFileWriter`. This ensures that all instances of `ParquetFileWriter` have exactly the same behavior, which can make code easier to reason about and can prevent bugs that might be introduced by subclasses changing behavior. However, it also makes the class less flexible and can inhibit reuse of code through inheritance.
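A self-contained toy version of such a final class (the fields and methods here are illustrative, not the real `ParquetFileWriter` API):

```java
// A final class cannot be subclassed, so every instance behaves identically.
final class ParquetFileWriter {
    private long rowCount;

    void appendRow() {
        rowCount++;
    }

    long rowCount() {
        return rowCount;
    }
}

// The following would not compile:
// class CustomWriter extends ParquetFileWriter {}  // error: cannot inherit from final class
```

Tools and readers can also verify the property at runtime via reflection (`Modifier.isFinal(ParquetFileWriter.class.getModifiers())`).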
prestodbpresto
Fix typo in PageBuilder
Bug
The Javadoc in question reads (note the inconsistent method name, `newPageBuilderLike` vs. `createPageBuilderLike`):

> Create a PageBuilder with the given types. A PageBuilder instance created with this constructor has no estimation about bytes per entry, therefore it can resize frequently while appending new rows. This constructor should only be used to get the initial PageBuilder. Once the PageBuilder is full, use `reset()` or `createPageBuilderLike()` to create a new PageBuilder instance with its size estimated based on previous data.
prestodbpresto
Refactor testTimestampWithTimeZone tests
Bug
prestodbpresto
[native] Not returning all the rows if a timezone is part of the timestamp (against Velox)
Bug
Velox output:

```sql
SELECT x FROM (VALUES
    TIMESTAMP '1970-01-01 00:01:00 +00:00',
    TIMESTAMP '1970-01-01 08:01:00 +08:00',
    TIMESTAMP '1970-01-01 00:01:00 +08:00') t(x)
WHERE x IN (TIMESTAMP '1970-01-01 00:01:00 +00:00');
--            x
-- 1970-01-01 00:01:00.000 UTC
-- (1 row)
```

Java output:

```sql
SELECT x FROM (VALUES
    TIMESTAMP '1970-01-01 00:01:00 +00:00',
    TIMESTAMP '1970-01-01 08:01:00 +08:00',
    TIMESTAMP '1970-01-01 00:01:00 +08:00') t(x)
WHERE x IN (TIMESTAMP '1970-01-01 00:01:00 +00:00');
--            x
-- 1970-01-01 00:01:00.000 UTC
-- 1970-01-01 08:01:00.000 +08:00
-- (2 rows)
```

Seems like the issue is only there if a timezone is part of the timestamp.
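The Java behavior above follows from the fact that `1970-01-01 00:01:00 +00:00` and `1970-01-01 08:01:00 +08:00` denote the same instant, and `IN` for `TIMESTAMP WITH TIME ZONE` compares instants. A sketch of that semantics using `java.time` (not Presto's implementation):

```java
import java.time.OffsetDateTime;

class SameInstant {
    // isEqual() compares the underlying instant while ignoring the offset,
    // which is the comparison semantics the IN predicate applies here.
    static boolean sameInstant(String a, String b) {
        return OffsetDateTime.parse(a).isEqual(OffsetDateTime.parse(b));
    }
}
```

So both the `+00:00` and `+08:00` literals above match the `IN` list, and Java correctly returns two rows; Velox returns only one.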
prestodbpresto
[native] Query with reduce_agg fails with "state.isNullAt(i) Lambda expressions in reduce_agg should not return null for non-null inputs" (against Velox)
Bug
The following query currently fails if run against the C++ worker:

```sql
SELECT reduce_agg(x, 0, (x, y) -> TRY(x + y), (x, y) -> TRY(x + y))
FROM (VALUES 2817, 9223372036854775807) AS t(x);
```

Error returned: `VeloxUserError: state.isNullAt(i) Lambda expressions in reduce_agg should not return null for non-null inputs`.

This query is supposed to return NULL, as `TRY(x + y)` would have resulted in an overflow for BIGINT.
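The intended `TRY(x + y)` semantics for BIGINT can be sketched with a checked addition that converts overflow into NULL (a sketch of the SQL semantics, not Presto's code):

```java
class TryAdd {
    // TRY(x + y) for BIGINT: overflow yields NULL instead of an error.
    // A boxed Long is used so that null can be returned.
    static Long tryAdd(long x, long y) {
        try {
            return Math.addExact(x, y);
        } catch (ArithmeticException overflow) {
            return null;
        }
    }
}
```

With the issue's inputs, `tryAdd(2817, 9223372036854775807L)` overflows and yields null, which is why the query as a whole should return NULL rather than error out.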
prestodbpresto
[native] Query containing reduce_agg returns 0.0 where it should have returned NULL (against Velox)
Bug
The following query returns an incorrect result if run against the C++ worker:

```sql
SELECT reduce_agg(x, 0, (x, y) -> TRY(1/x + 1/y), (x, y) -> TRY(1/x + 1/y))
FROM (SELECT 0 UNION ALL SELECT 10) t(x);
```

Actual rows (up to 100 of 1 extra rows shown, 1 row in total): `0.0`

Expected rows (up to 100 of 1 missing rows shown, 1 row in total): `NULL`
prestodbpresto
[native] reduce_agg returns incorrect results
Bug
This query returns incorrect results if run against the C++ worker:

```sql
SELECT x, array_join(array_sort(reduce_agg(y, ARRAY['x'], (a, b) -> a || b, (a, b) -> a || b)), '')
FROM (VALUES (1, ARRAY['a']), (1, ARRAY['b']), (1, ARRAY['c']),
             (2, ARRAY['d']), (2, ARRAY['e']), (3, ARRAY['f'])) AS t(x, y)
GROUP BY x;
```

Presto:

```
 x | _col1
 2 | dex
 1 | abcx
 3 | fx
(3 rows)
```

Velox — actual rows (up to 100 of 2 extra rows shown, 3 rows in total): `2 | dexx`, `1 | abcxxx`. Expected rows (up to 100 of 2 missing rows shown, 3 rows in total): `1 | abcx`, `2 | dex`.

Reference: the reduce_agg documentation.
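The expected result can be reproduced with a plain per-group fold where the initial state (here the single element `"x"`) is seeded exactly once per group — the Velox output above looks like the seed is folded in once per input row instead. This is a sketch of the semantics, not Presto's implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class ReduceAggSketch {
    // For each group: seed the state once with ["x"], append each input
    // element, then sort and join -- mirroring
    // array_join(array_sort(reduce_agg(y, ARRAY['x'], concat, concat)), '').
    static Map<Integer, String> aggregate(int[] keys, String[] values) {
        Map<Integer, List<String>> state = new TreeMap<>();
        for (int i = 0; i < keys.length; i++) {
            state.computeIfAbsent(keys[i], k -> new ArrayList<>(List.of("x")))
                 .add(values[i]);
        }
        Map<Integer, String> result = new TreeMap<>();
        for (Map.Entry<Integer, List<String>> e : state.entrySet()) {
            List<String> sorted = new ArrayList<>(e.getValue());
            Collections.sort(sorted);
            result.put(e.getKey(), String.join("", sorted));
        }
        return result;
    }
}
```

On the issue's input this yields `{1=abcx, 2=dex, 3=fx}`, matching the Java Presto output.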
prestodbpresto
Missing plugins documentation section from the Presto index section
Bug
The index page in the documentation mentions plugin content, but it seems to be missing from the Presto documentation index UI.
prestodbpresto
Protobuf headers leak into Prestissimo: 'google/protobuf/port_def.inc' file not found
Bug
Your environment: base environment cloned from the Presto repo, using the Protobuf bundle from Velox. Protobuf is not a dependency of Prestissimo, only of Velox. The build succeeds if the system has the correct Protobuf version installed; if the bundled version is used, the build will fail.

Expected behavior: the build should succeed.

Current behavior: compiling `presto_cpp/main/tests/PrestoQueryRunner.cpp` fails (long compiler invocation elided):

```
In file included from .../presto_cpp/main/tests/PrestoQueryRunner.cpp:22:
In file included from .../velox/velox/dwio/dwrf/writer/Writer.h:24:
In file included from .../velox/velox/dwio/dwrf/common/Encryption.h:20:
In file included from .../velox/velox/dwio/dwrf/common/EncryptionCommon.h:21:
In file included from .../velox/velox/dwio/dwrf/common/EncryptionSpecification.h:22:
In file included from .../velox/velox/dwio/dwrf/common/wrap/dwrf-proto-wrapper.h:34:
.../velox/velox/dwio/dwrf/proto/dwrf_proto.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
#include <google/protobuf/port_def.inc>
1 error generated.
```

Possible solution: a temporary solution is to make sure the Protobuf headers are in the include path via a change to `CMakeLists.txt` in `presto-native-execution`. The actual path may vary depending on the platform; the path could also be set to the expected bundle build path.

```cmake
set(Protobuf_PREFIX_PATH
    "/opt/homebrew/opt/protobuf@21/include"
    "/opt/homebrew/opt/protobuf@21/lib"
    "/opt/homebrew/opt/protobuf@21/bin")
list(APPEND CMAKE_PREFIX_PATH ${Protobuf_PREFIX_PATH})
find_package(Protobuf REQUIRED)
include_directories(${Protobuf_INCLUDE_DIRS})
```

Thanks to Vivek Bharathan for the workaround.

Steps to reproduce:
1. Uninstall Protobuf from the system, or ensure the bundled version is used for Velox.
2. `make clean && make` in `presto-native-execution`.
prestodbpresto
Assertion failure in revocable memory in SpillableHashAggregationBuilder
Bug
After a recent change landed, we are seeing assertion failures in SpillableHashAggregationBuilder. The codepath containing the assertion is never triggered in any of the unit tests.

Current behavior — stack trace:

```
23/10/12 06:10:13 ERROR [tid 337] TaskExecutor: Error processing Split 20231012_122637_00000_urafm.7.0.82.0-48
(start 8.4884298582635E8, wall 273809 ms, cpu 57000 ms, wait 465 ms, calls 7143):
com.google.common.base.VerifyException
    at com.google.common.base.Verify.verify(Verify.java:100)
    at com.facebook.presto.operator.aggregation.builder.SpillableHashAggregationBuilder.startMemoryRevoke(SpillableHashAggregationBuilder.java:181)
    at com.facebook.presto.operator.HashAggregationOperator.startMemoryRevoke(HashAggregationOperator.java:412)
    at com.facebook.presto.operator.Driver.handleMemoryRevoke(Driver.java:541)
    at com.facebook.presto.operator.Driver.processInternal(Driver.java:386)
    at com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:311)
    at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:732)
    at com.facebook.presto.operator.Driver.processFor(Driver.java:304)
    at com.facebook.presto.spark.execution.task.PrestoSparkTaskExecution$DriverSplitRunner.processFor(PrestoSparkTaskExecution.java:480)
    at com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:165)
    at com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:621)
    at com.facebook.presto.$gen.Presto_0_285_SNAPSHOT_0512e06____20231012_130516_1.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```

Possible solution: one hypothesis is that under the rehash scenario we are not updating the correct memory context.

Steps to reproduce: a number of internal queries fail; no OSS repro yet — working on one.

Context: this bug blocks an internal rollout, as a number of queries which trigger spilling crash.
prestodbpresto
Concurrent insertions with Iceberg connector do not work
Bug
As per an earlier change, concurrent insertion with the Iceberg connector for the Hive metastore is supported. However, more than 5 concurrent queries are not executed properly, leading to data loss.

Your environment:
- Presto version used: latest
- Storage (HDFS/S3/GCS): HDFS
- Data source and connector used: Iceberg connector
- Deployment (cloud or on-prem): local setup

```
org.apache.iceberg.exceptions.CommitFailedException: metadata location hdfs://localhost:9000/user/hive/warehouse/iceberg_table1/metadata/00069-0c51fbbf-579c-4308-beeb-e1283a6feaac.metadata.json
is not same as the table metadata location hdfs://localhost:9000/user/hive/warehouse/iceberg_table1/metadata/00070-7df8fb1e-666d-4675-832e-b1f5e4cc28c6.metadata.json for default.iceberg_table1
    at com.facebook.presto.iceberg.HiveTableOperations.commit(HiveTableOperations.java:275)
    at org.apache.iceberg.BaseTransaction.lambda$commitSimpleTransaction$5(BaseTransaction.java:422)
    at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
    at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
    at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
    at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
    at org.apache.iceberg.BaseTransaction.commitSimpleTransaction(BaseTransaction.java:418)
    at org.apache.iceberg.BaseTransaction.commitTransaction(BaseTransaction.java:302)
    at com.facebook.presto.iceberg.IcebergAbstractMetadata.finishInsert(IcebergAbstractMetadata.java:270)
    at com.facebook.presto.spi.connector.classloader.ClassLoaderSafeConnectorMetadata.finishInsert(ClassLoaderSafeConnectorMetadata.java:452)
    at com.facebook.presto.metadata.MetadataManager.finishInsert(MetadataManager.java:858)
    at com.facebook.presto.sql.planner.LocalExecutionPlanner.lambda$createTableFinisher$3(LocalExecutionPlanner.java:3392)
    at com.facebook.presto.operator.TableFinishOperator.getOutput(TableFinishOperator.java:289)
    at com.facebook.presto.operator.Driver.processInternal(Driver.java:428)
    at com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:311)
    at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:732)
    at com.facebook.presto.operator.Driver.processFor(Driver.java:304)
    at com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1079)
    at com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:165)
    at com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:603)
    at com.facebook.presto.$gen.Presto_null_testversion____20231010_093550_1.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
```

Expected behavior: any number of concurrent queries should go through.

Current behavior: a maximum of 5 concurrent queries is supported. This is because the SnapshotProducer API of Iceberg supports a maximum of 5 attempts to commit the snapshot before failing the INSERT query.

Possible solution: n/a.

Steps to reproduce:
1. Set up JMeter for JDBC load testing and configure the Presto endpoint.
2. Fire simple INSERT queries with 5 + n threads (3 <= n <= 5); queries end up with the above exception.
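The failure mode is classic optimistic concurrency: each committer re-reads the latest metadata version and retries on conflict, so a fixed retry budget caps how many simultaneous writers can all eventually succeed. A self-contained sketch of this commit loop (names and the atomic-version model are illustrative, not Iceberg's actual code):

```java
import java.util.concurrent.atomic.AtomicLong;

class OptimisticCommit {
    // Sketch of Iceberg-style optimistic commit: read the current metadata
    // version, try to swap in the next one, and retry on conflict. With
    // maxRetries = 4 (the commit.retry.num-retries default), a burst of
    // simultaneous writers can exhaust every attempt.
    static boolean commit(AtomicLong metadataVersion, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            long base = metadataVersion.get();
            // compareAndSet fails if another writer committed in between,
            // which corresponds to the "metadata location is not same"
            // CommitFailedException in the stack trace above.
            if (metadataVersion.compareAndSet(base, base + 1)) {
                return true;
            }
        }
        return false;
    }
}
```

Raising the retry budget (or adding backoff between attempts) is what lets larger bursts of concurrent inserts converge.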
prestodbpresto
The Splits tab of a query shows empty content
Bug
When there are multiple tasks in a stage, the Splits tab shows empty content.

Your environment:
- Presto version used: almost the latest code from the master branch (this commit)
- Data source and connector used: TPC-H
- Deployment (cloud or on-prem): single server

Expected behavior: when there are multiple tasks in a stage, each task should have a bar to represent the task.

Current behavior: it shows empty content now.

Possible solution: should use the 3rd number of a task ID for the task number.

Steps to reproduce:
1. Use `TpchQueryRunner.java` to start the Presto server with 4 worker nodes.
2. Issue this query:

```sql
SELECT s.name, s.address
FROM supplier s, nation n
WHERE s.suppkey IN (
        SELECT ps.suppkey
        FROM partsupp ps
        WHERE ps.partkey IN (SELECT p.partkey FROM part p WHERE p.name LIKE 'forest%')
          AND ps.availqty > (
            SELECT 0.5 * sum(l.quantity)
            FROM lineitem l
            WHERE l.partkey = ps.partkey
              AND l.suppkey = ps.suppkey
              AND l.shipdate >= date '1994-01-01'
              AND l.shipdate < date '1994-01-01' + interval '1' year))
  AND s.nationkey = n.nationkey
  AND n.name = 'CANADA'
ORDER BY s.name;
```

3. Check the query details and click on the Splits tab.
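The suggested fix — using the 3rd number of a task ID — can be sketched as follows. The ID layout assumed here (query ID, then dot-separated stage / stage-execution / task numbers, as in the `20231030_200214_00010_7ckus.3.0.7.0` ID seen elsewhere in these logs) is an assumption for illustration, and `TaskIdParser` is a hypothetical helper, not the UI's actual code:

```java
class TaskIdParser {
    // For an id like "20231030_200214_00010_7ckus.3.0.7.0":
    // parts[0] = query id, parts[1] = stage, parts[2] = stage execution,
    // parts[3] = task number -- the 3rd number after the query id,
    // which is what the Splits tab should key each bar on.
    static int taskNumber(String fullTaskId) {
        String[] parts = fullTaskId.split("\\.");
        return Integer.parseInt(parts[3]);
    }
}
```

Reading an earlier component instead would collapse every task in a stage onto the same key, which matches the empty-content symptom.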
prestodbpresto
Properly ban javax.inject usage
Bug
`javax.inject` is a set of annotations for use with dependency injection frameworks. While it can be useful in some cases, there are several reasons why you might want to ban its usage:

- Interoperability: not all frameworks support `javax.inject`. If you are using a framework that doesn't support it, or if you plan to switch to one that doesn't, it can cause problems.
- Simplicity: using `javax.inject` can add complexity to your code. If you can achieve the same result without it, it may be better to avoid it.
- Performance: dependency injection can sometimes lead to performance issues, especially in large applications.

To ban `javax.inject` usage in your project, you can use a static analysis tool like Checkstyle, PMD, or FindBugs.
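As one concrete option among the tools listed above, a Checkstyle `IllegalImport` rule can flag any `javax.inject` import at build time. This is a sketch of a `checkstyle.xml` fragment, assuming the project runs Checkstyle as part of its build:

```xml
<!-- Hypothetical Checkstyle fragment: fail on any javax.inject import. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="IllegalImport">
      <property name="illegalPkgs" value="javax.inject"/>
    </module>
  </module>
</module>
```

Violations then surface as ordinary Checkstyle errors, so the ban is enforced in CI rather than by code review.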
prestodbpresto
Ban javax.annotation dependency
Bug
To ban a dependency in Maven, you can use the `<exclusions>` element in your dependency configuration. This allows you to exclude a transitive dependency.
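A sketch of what that looks like in a `pom.xml`; the dependency coordinates here are illustrative placeholders, not an actual Presto module:

```xml
<!-- Exclude the transitive javax.annotation artifact from a dependency. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>javax.annotation</groupId>
      <artifactId>javax.annotation-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- Optionally fail the build if the artifact ever reappears,
     via the maven-enforcer-plugin bannedDependencies rule: -->
<rules>
  <bannedDependencies>
    <excludes>
      <exclude>javax.annotation:javax.annotation-api</exclude>
    </excludes>
  </bannedDependencies>
</rules>
```

The `<exclusions>` block removes the artifact from one dependency's transitive graph; the enforcer rule guards the whole build against it coming back through another path.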
prestodbpresto
HiveExternalWorkerQueryRunner fails to run queries: "No nodes available to run query"
Bug
Following "Build from source" to start HiveExternalWorkerQueryRunner and the Presto server, as directed. HiveExternalWorkerQueryRunner and the Presto server start successfully and are able to run `SHOW SCHEMAS` or `SHOW TABLES`, but fail to run SELECT queries:

```
presto:tpch> SELECT * FROM tpch.sf1.lineitem;

Query 20231010_071426_00002_hmtb9 failed: No nodes available to run query
com.facebook.presto.spi.PrestoException: No nodes available to run query
    at com.facebook.presto.spi.NodeManager.getRequiredWorkerNodes(NodeManager.java:34)
    at com.facebook.presto.tpch.TpchNodePartitioningProvider.getBucketNodeMap(TpchNodePartitioningProvider.java:53)
    at com.facebook.presto.sql.planner.NodePartitioningManager.getConnectorBucketNodeMap(NodePartitioningManager.java:223)
    at com.facebook.presto.sql.planner.NodePartitioningManager.getBucketNodeMap(NodePartitioningManager.java:181)
    at com.facebook.presto.execution.scheduler.SectionExecutionFactory.createStageScheduler(SectionExecutionFactory.java:344)
    at com.facebook.presto.execution.scheduler.SectionExecutionFactory.createStreamingLinkedStageExecutions(SectionExecutionFactory.java:252)
    at com.facebook.presto.execution.scheduler.SectionExecutionFactory.createStreamingLinkedStageExecutions(SectionExecutionFactory.java:230)
    at com.facebook.presto.execution.scheduler.SectionExecutionFactory.createSectionExecutions(SectionExecutionFactory.java:176)
    at com.facebook.presto.execution.scheduler.LegacySqlQueryScheduler.createStageExecutions(LegacySqlQueryScheduler.java:355)
    at com.facebook.presto.execution.scheduler.LegacySqlQueryScheduler.<init>(LegacySqlQueryScheduler.java:244)
    at com.facebook.presto.execution.scheduler.LegacySqlQueryScheduler.createSqlQueryScheduler(LegacySqlQueryScheduler.java:173)
    at com.facebook.presto.execution.SqlQueryExecution.getScheduler(SqlQueryExecution.java:633)
    at com.facebook.presto.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:614)
    at com.facebook.presto.execution.SqlQueryExecution.start(SqlQueryExecution.java:457)
    at com.facebook.presto.$gen.Presto_null_testversion____20231010_070813_9.run(Unknown Source)
    at com.facebook.presto.execution.SqlQueryManager.createQuery(SqlQueryManager.java:306)
    at com.facebook.presto.dispatcher.LocalDispatchQuery.lambda$startExecution$8(LocalDispatchQuery.java:211)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```

Your environment:
- Presto version used: latest master as of today
- Storage (HDFS/S3/GCS): macOS
- Data source and connector used: TPC-H
prestodbpresto
presto-native-execution build failure: unknown pool name 'link_job_pool'
Bug
CMake error running: `/Applications/CLion.app/Contents/bin/ninja/mac/ninja -C /Users/yingsu/repos/presto3/presto/presto-native-execution/cmake-build-debug -t recompact` failed with:

```
ninja: error: build.ninja:17117: unknown pool name 'link_job_pool'
```

Your environment:
- Presto version used: latest master
- Storage (HDFS/S3/GCS): macOS file system

Expected behavior: build succeeds. Current behavior: build fails.

Steps to reproduce: follow "Build from source".
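One possible workaround, assuming the generated rules reference a pool that was never declared (the pool size of 2 below is an arbitrary illustrative choice): declare the pool via CMake's `JOB_POOLS` global property and route link jobs through it before the Ninja files are generated.

```cmake
# Hypothetical workaround: declare the missing pool and route link jobs
# through it. Must run before the targets that reference the pool.
set_property(GLOBAL PROPERTY JOB_POOLS link_job_pool=2)
set(CMAKE_JOB_POOL_LINK link_job_pool)
```

Re-running the CMake configure step after this change regenerates `build.ninja` with the pool defined, so Ninja no longer rejects it.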
prestodbpresto
HiveExternalWorkerQueryRunner fails to start with NullPointerException on DATA_DIR
Bug
The following error is shown when starting HiveExternalWorkerQueryRunner in IntelliJ with all required settings:

```
/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar com.facebook.presto.nativeworker.HiveExternalWorkerQueryRunner
Connected to the target VM, address: '127.0.0.1:53995', transport: 'socket'
2023-10-09T23:00:38.372-0500 INFO main com.facebook.airlift.log.Logging Logging to stderr
2023-10-09T23:02:37.831-0500 INFO main stderr Exception in thread "main"
2023-10-09T23:02:37.832-0500 INFO main stderr java.lang.NullPointerException
2023-10-09T23:02:37.832-0500 INFO main stderr     at sun.nio.fs.UnixPath.normalizeAndCheck(UnixPath.java:77)
2023-10-09T23:02:37.832-0500 INFO main stderr     at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
2023-10-09T23:02:37.832-0500 INFO main stderr     at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:280)
2023-10-09T23:02:37.832-0500 INFO main stderr     at java.nio.file.Paths.get(Paths.java:84)
2023-10-09T23:02:37.833-0500 INFO main stderr     at com.facebook.presto.nativeworker.PrestoNativeQueryRunnerUtils.createJavaQueryRunner(PrestoNativeQueryRunnerUtils.java:103)
2023-10-09T23:02:37.833-0500 INFO main stderr     at com.facebook.presto.nativeworker.PrestoNativeQueryRunnerUtils.createJavaQueryRunner(PrestoNativeQueryRunnerUtils.java:97)
2023-10-09T23:02:37.833-0500 INFO main stderr     at com.facebook.presto.nativeworker.HiveExternalWorkerQueryRunner.main(HiveExternalWorkerQueryRunner.java:32)
Disconnected from the target VM, address: '127.0.0.1:53995', transport: 'socket'

Process finished with exit code 1
```

Current behavior: HiveExternalWorkerQueryRunner fails to start.

Possible solution: should use `System.getenv`.

Steps to reproduce:
1. In "Edit Configurations" for HiveExternalWorkerQueryRunner, configure the environment variables: `PRESTO_SERVER=/Users/<user>/git/presto/presto-native-execution/cmake-build-debug/presto_cpp/main/presto_server`, `DATA_DIR=/Users/<user>/Desktop/data`, `WORKER_COUNT=0`.
2. Run HiveExternalWorkerQueryRunner in IntelliJ.
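A sketch of the suggested fix: read the environment variable defensively instead of passing a possibly-null value straight into `Paths.get()`. The helper name and default-directory behavior are illustrative, not the actual Presto code:

```java
import java.util.Map;

class EnvConfig {
    // Null-safe lookup of DATA_DIR. Passing the env map in (instead of
    // calling System.getenv() directly) keeps the helper testable.
    static String dataDir(Map<String, String> env, String defaultDir) {
        String value = env.get("DATA_DIR");
        return (value == null || value.isEmpty()) ? defaultDir : value;
    }
}
```

With this, a missing `DATA_DIR` yields a usable default (or could throw a descriptive error) rather than the opaque `NullPointerException` inside `UnixPath`.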
prestodbpresto
Query details UI missing task list
Bug
There is a recent regression in the Presto query UI where it does not print information for all tasks.

Your environment: Presto version used: 0.284-edge24.

Expected behavior: the query details UI should print a list of tasks for each stage. (Screenshot: example of how it works in previous Presto versions.)

Current behavior: no list of tasks is printed, making it impossible to debug queries in Presto. (Screenshot.)

Steps to reproduce: open the query details UI using a URL like `/ui/query.html?20231009_213130_04267_2vtkr`.

A couple of PRs might be related.
prestodbpresto
[ARM][Ubuntu] library not found: nativelib/Linux-aarch64/libhadoop.so
Bug
Your environment:
- Presto version used: 0.283
- Storage (HDFS/S3/GCS): HDFS (Hadoop 3.2)
- Data source and connector used: Hive, Hudi
- Deployment (cloud or on-prem): on-prem

Current behavior: when I use the `bin/launcher run` command to run the Presto server, it throws an exception.

Possible solution: I see there are some PRs to resolve this error, and they are merged into master, but I don't know how to package this and fix it.
prestodbpresto
[native] SELECT TIMESTAMP '1960-01-22 3:04:05.1' returns a wrong timestamp value if the input string has fractions of a second
Bug
Current behavior:

```
presto:tpch> SELECT TIMESTAMP '1960-01-22 3:04:05.1';
          _col0
-------------------------
 2544-08-11 02:38:38.809
```

Note: this is a wrong result. The problem seems to occur only when the fractional part of the timestamp is supplied:

```
presto:tpch> SELECT TIMESTAMP '1960-01-22 3:04:05';
          _col0
-------------------------
 1960-01-22 03:04:05.000
```

```
presto:tpch> EXPLAIN SELECT TIMESTAMP '1960-01-22 3:04:05.1';
- Output[PlanNodeId 5][_col0] => [expr:timestamp]
        Estimates: {source: CostBasedSourceInfo, rows: 1 (9B), cpu: 9.00, memory: 0.00, network: 0.00}
        _col0 := expr (1:16)
    - Project[PlanNodeId 1][projectLocality = LOCAL] => [expr:timestamp]
            Estimates: {source: CostBasedSourceInfo, rows: 1 (9B), cpu: 9.00, memory: 0.00, network: 0.00}
            expr := TIMESTAMP '1960-01-22 03:04:05.100'
        - LocalExchange[PlanNodeId 139][ROUND_ROBIN] () => []
                Estimates: {source: CostBasedSourceInfo, rows: 1 (9B), cpu: 0.00, memory: 0.00, network: 0.00}
            - Values[PlanNodeId 0] => []
                    Estimates: {source: CostBasedSourceInfo, rows: 1 (9B), cpu: 0.00, memory: 0.00, network: 0.00}
```
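The EXPLAIN output shows the planner constant-folds the literal to the correct `03:04:05.100`, so the corruption happens when the native side evaluates the fractional part. The key subtlety is that `.1` means one tenth of a second, not "1" of some fixed unit; a parser that reads the fraction digits as a raw integer produces garbage like the 2544-08-11 result. A `java.time` sketch of correct fraction handling (illustrative, not Presto's parser):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

class TimestampParse {
    // appendFraction scales the digits by their position: ".1" becomes
    // 100,000,000 nanoseconds (a tenth of a second), ".001" becomes
    // 1,000,000. The fraction (and its decimal point) is optional.
    private static final DateTimeFormatter FORMAT = new DateTimeFormatterBuilder()
            .appendPattern("yyyy-MM-dd H:mm:ss")
            .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
            .toFormatter();

    static LocalDateTime parse(String s) {
        return LocalDateTime.parse(s, FORMAT);
    }
}
```

Under this semantics `1960-01-22 3:04:05.1` parses to `1960-01-22T03:04:05.100`, matching the planner's constant-folded value.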
prestodbpresto
SingleStore tests fail with "Not enough disk space"
Bug
GitHub Actions CI is failing pretty frequently with an error like:

```
[ERROR] Tests run: 535, Failures: 1, Errors: 0, Skipped: 20, Time elapsed: 1,102.629 s <<< FAILURE! - in TestSuite
[ERROR] com.facebook.presto.plugin.singlestore.TestSingleStoreTypeMapping.init  Time elapsed: 17.298 s  <<< FAILURE!
java.sql.SQLException: (conn=517) Not enough disk space to create or attach database `tpch` on leaf 127.0.0.1:3307.
Estimated available: 11397 MB, estimated needed: 12736 MB.
    at com.singlestore.jdbc.export.ExceptionFactory.createException(ExceptionFactory.java:216)
    at com.singlestore.jdbc.export.ExceptionFactory.create(ExceptionFactory.java:250)
    at com.singlestore.jdbc.message.ClientMessage.readPacket(ClientMessage.java:85)
    at com.singlestore.jdbc.client.impl.StandardClient.readPacket(StandardClient.java:711)
    at com.singlestore.jdbc.client.impl.StandardClient.readResults(StandardClient.java:659)
    at com.singlestore.jdbc.client.impl.StandardClient.readResponse(StandardClient.java:582)
    at com.singlestore.jdbc.client.impl.StandardClient.execute(StandardClient.java:556)
```

Your environment: GitHub Actions CI runner.

Expected behavior: no failures.

Current behavior: CI fails to create the schema even with 10+ GB of disk space available.

Possible solution: investigate.
prestodbpresto
from_unixtime generates a timestamp that does not fit as int64_t milliseconds
Bug
I need help with something. select to_unixtime(from_unixtime(3.87111e+37)) gives me 9223372036854800 as output. This number, 9223372036854800, will overflow if we try to store it in an int64_t as milliseconds, and Presto shuffle serializes a timestamp as int64_t millis. Is that a bug in from_unixtime, or am I missing something? @mbasmanova
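The overflow is easy to confirm with plain integer arithmetic: 9223372036854800 seconds expressed as milliseconds exceeds the int64_t range, so any engine that serializes timestamps as int64 millis cannot represent the value from_unixtime produced. A quick check (Python, for illustration only):

```python
INT64_MAX = 2**63 - 1          # 9223372036854775807

seconds = 9223372036854800     # output of to_unixtime(from_unixtime(3.87111e+37))
millis = seconds * 1000

# the millisecond representation does not fit in a signed 64-bit integer
print(millis > INT64_MAX)  # True
```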
prestodbpresto
Product tests (product-tests-basic-environment) fail frequently with "no space left on device"
Bug
The product tests (product-tests-basic-environment) are failing frequently with:

failed to register layer: write /usr/share/ghostscript/8.70/Resource/CMap/UniCNS-UTF16-H: no space left on device

Your environment: Presto CI checks

Possible solution: it's possible that recent test additions to the CI checks have caused an increase in the required disk space. Either queue the CI checks to limit the number of PRs that can run CI checks simultaneously, or, assuming all tests are run on a shared cluster, reduce the number of parallel jobs. For example, the three product tests (product-tests-basic-environment, product-tests-specific-environment1, product-tests-specific-environment2) could be run sequentially, since they are resource-heavy.
prestodbpresto
TPC-DS SF-1k data mismatch (Velox vs Java): query 95, in both the original and Ahana-rewritten queries
Bug
TPC-DS SF-1k data mismatch while comparing Velox vs Java results for query 95. Attached screenshots: original TPC-DS q95 query (image); rewritten TPC-DS q95 query (image).

Your environment:
Presto version used: 0.284
Deployment (cloud or on-prem): cloud

Expected behavior: data should match.
Current behavior: data mismatch in all columns.
Possible solution:
Steps to reproduce: run query 95:

with ws_wh as
 (select ws1.ws_order_number, ws1.ws_warehouse_sk wh1, ws2.ws_warehouse_sk wh2
  from web_sales ws1, web_sales ws2
  where ws1.ws_order_number = ws2.ws_order_number
    and ws1.ws_warehouse_sk <> ws2.ws_warehouse_sk)
select count(distinct ws_order_number) as "order count",
       sum(ws_ext_ship_cost) as "total shipping cost",
       sum(ws_net_profit) as "total net profit"
from web_sales ws1, date_dim, customer_address, web_site
where d_date between cast('1999-2-01' as date) and cast('1999-2-01' as date) + interval '60' day
  and ws1.ws_ship_date_sk = d_date_sk
  and ws1.ws_ship_addr_sk = ca_address_sk
  and ca_state = 'IL'
  and ws1.ws_web_site_sk = web_site_sk
  and web_company_name = 'pri'
  and ws1.ws_order_number in (select ws_order_number from ws_wh)
  and ws1.ws_order_number in (select wr_order_number from web_returns, ws_wh
                              where wr_order_number = ws_wh.ws_order_number)
order by count(distinct ws_order_number)
fetch first 100 rows only;

Screenshots (if appropriate): original TPC-DS q95 query (image); rewritten TPC-DS q95 query (image).
Context:
prestodbpresto
TPC-DS SF-1k data mismatch (Velox vs Java): query 39, in both the original and Ahana-rewritten queries
Bug
TPC-DS SF-1k data mismatch while comparing Velox vs Java results for query 39. The issue is a floating-point precision mismatch. Attached screenshot (image).

Your environment:
Presto version used: 0.284
Deployment (cloud or on-prem): cloud

Expected behavior: results should match.
Current behavior: precision mismatch.
Possible solution:
Steps to reproduce: run the following query 39:

with inv as
 (select w_warehouse_name, w_warehouse_sk, i_item_sk, d_moy,
         stdev, mean, case mean when 0 then null else stdev/mean end cov
  from (select w_warehouse_name, w_warehouse_sk, i_item_sk, d_moy,
               stddev_samp(inv_quantity_on_hand) stdev,
               avg(cast(inv_quantity_on_hand as double)) mean
        from inventory, item, warehouse, date_dim
        where inv_item_sk = i_item_sk
          and inv_warehouse_sk = w_warehouse_sk
          and inv_date_sk = d_date_sk
          and d_year = 2001
        group by w_warehouse_name, w_warehouse_sk, i_item_sk, d_moy) foo
  where case mean when 0 then 0 else stdev/mean end > 1)
select inv1.w_warehouse_sk, inv1.i_item_sk, inv1.d_moy, inv1.mean, inv1.cov,
       inv2.w_warehouse_sk, inv2.i_item_sk, inv2.d_moy, inv2.mean, inv2.cov
from inv inv1, inv inv2
where inv1.i_item_sk = inv2.i_item_sk
  and inv1.w_warehouse_sk = inv2.w_warehouse_sk
  and inv1.d_moy = 1
  and inv2.d_moy = 1+1
order by inv1.w_warehouse_sk, inv1.i_item_sk, inv1.d_moy, inv1.mean, inv1.cov,
         inv2.d_moy, inv2.mean, inv2.cov;

followed by a second statement that is identical except for one additional predicate in the outer where clause: and inv1.cov > 1.5.

Screenshot (if appropriate): (image)
Context:
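The mismatch is consistent with ordinary floating-point non-determinism: stddev_samp and avg over doubles accumulate in whatever order rows arrive, and Velox and Java need not visit rows in the same order. A tiny illustration of order-dependent double summation (illustrative values, not TPC-DS data):

```python
# IEEE-754 double addition is not associative: at 1e16 the spacing between
# representable doubles is 2.0, so an added 1.0 can be absorbed and lost
xs = [1e16, 1.0, -1e16]

left_to_right = sum(xs)              # (1e16 + 1.0) loses the 1.0 -> 0.0
reordered = sum([1e16, -1e16, 1.0])  # cancellation happens first -> 1.0

print(left_to_right, reordered)
```

Small per-group differences like this are then amplified by the cov = stdev/mean division and the > 1 filter, which can flip borderline rows in or out of the result.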
prestodbpresto
set_union applied to all nulls returns an empty array, not null
Bug
I noticed that set_union applied to all-null inputs returns an empty array. I expected a null. It would be great to clarify whether this behavior is correct.

presto> select set_union(x) from (select cast(null as array(bigint)) x) t;
 _col0
-------
 []

Discovered by running the Velox AggregationFuzzer with Presto as the source of truth. cc: @kaikalur
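The two candidate semantics are easy to state side by side. A hedged Python model (the function name and shape are invented for illustration — this is not Presto's implementation): most aggregations return null when every input is null, so arguably set_union should too.

```python
def set_union(arrays):
    # ignore null inputs, as aggregation functions conventionally do
    non_null = [a for a in arrays if a is not None]
    if not non_null:
        # all-null input: return null (the behavior this report expects),
        # whereas Presto currently returns an empty array here
        return None
    out = set()
    for a in non_null:
        out.update(a)
    return sorted(out)

print(set_union([None, None]))            # None (expected) vs [] (observed)
print(set_union([[1, 2], None, [2, 3]]))  # [1, 2, 3]
```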
prestodbpresto
map_union_sum(x) on REAL fails with "UnsupportedOperationException: com.facebook.presto.common.type.RealType"
Bug
create table tmp(c0, g0, g1) as
select cast(null as map(tinyint, real)), cast(null as timestamp), cast(null as bigint);

select g0 as g0, g1 as g1, array_sort(map_keys(a0)) as p2
from (
  select g0, g1, map_union_sum(c0) as a0
  from tmp
  group by g0, g1
);

java.lang.UnsupportedOperationException: com.facebook.presto.common.type.RealType
    at com.facebook.presto.common.type.AbstractType.writeDouble(AbstractType.java:128)
    at com.facebook.presto.operator.aggregation.MapUnionSumResult.appendValue(MapUnionSumResult.java:82)
    at com.facebook.presto.operator.aggregation.MapUnionSumResult$SingleMapBlock.appendValue(MapUnionSumResult.java:202)
    at com.facebook.presto.operator.aggregation.MapUnionSumResult.unionSum(MapUnionSumResult.java:140)
    at com.facebook.presto.operator.aggregation.MapUnionSumResult.unionSum(MapUnionSumResult.java:165)
    at com.facebook.presto.operator.aggregation.MapUnionSumAggregation.input(MapUnionSumAggregation.java:135)

This issue was discovered by running the Velox AggregationFuzzer with Presto as the source of truth. cc: @kaikalur @tdcmeehan
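The failure is consistent with how REAL is laid out: a REAL value is a 32-bit float carried as its IEEE-754 bit pattern inside an integer slot (the Java write path is writeLong with floatToIntBits), so RealType inherits AbstractType.writeDouble, which simply throws — and the trace shows map_union_sum calling exactly that path. A Python sketch of the correct bit-pattern path, for illustration only (not the actual Presto code):

```python
import struct

def write_real_as_long(value: float) -> int:
    # correct path for REAL: store the float32 bit pattern in an integer slot,
    # analogous to Java's Float.floatToIntBits(value)
    return struct.unpack("<i", struct.pack("<f", value))[0]

def read_real_from_long(bits: int) -> float:
    # analogous to Java's Float.intBitsToFloat(bits)
    return struct.unpack("<f", struct.pack("<i", bits))[0]

# map_union_sum appears to use the writeDouble path instead, which RealType
# does not implement - hence the UnsupportedOperationException above
print(read_real_from_long(write_real_as_long(1.5)))  # 1.5 (exact in float32)
```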
prestodbpresto
Typo in bitwise_right_shift documentation: (value, shift, digits)
Bug
The docs describe the signature as bitwise_right_shift(value, shift, digits) — "same as value >> shift". There are only two inputs to bitwise_right_shift; digits is not an input.
prestodbpresto
Presto is not able to connect to S3
Bug
I have followed the documentation provided by Presto, but I couldn't establish the connection. I tried the below configuration (hive.properties is the file name):

connector.name=hive-hadoop2
hive.metastore.uri=thrift://jdbc:mariadb://md7wf1g369xf22.cluz8hwxjhb6.us-east-2.rds.amazonaws.com:3306/organization5791930186914171?useSSL=true&enabledSslProtocolSuites=TLSv1,TLSv1.1,TLSv1.2&serverSslCert=databricks_common/mysql-ssl-ca-cert.crt
hive.s3.endpoint=
hive.s3.aws-access-key=
hive.s3.aws-secret-key=

Error:

prestodb-arex-6 | ERROR ... Injection constructor ... java.lang.IllegalArgumentException: metastoreUri host is missing: thrift://jdbc:mariadb://md7wf1g369xf22.cluz8hwxjhb6.us-east-2.rds.amazonaws.com:3306/organization5791930186914171?useSSL=true&enabledSslProtocolSuites=TLSv1...

Can someone let me know the right way to connect S3 from Presto?
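For reference, hive.metastore.uri must point at a Hive Metastore Thrift endpoint, not a JDBC URL to the metastore's backing database — the MariaDB connection string above is why the URI parser reports a missing host. A minimal hive.properties sketch; the host is a placeholder, and 9083 is only the conventional metastore Thrift port:

```
connector.name=hive-hadoop2
hive.metastore.uri=thrift://<metastore-host>:9083
hive.s3.endpoint=<s3-endpoint>
hive.s3.aws-access-key=<access-key>
hive.s3.aws-secret-key=<secret-key>
```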
prestodbpresto
Error while parsing ColumnStatistics of type timestamp
Bug
The SPI allows ColumnStatistics of any type to be retrieved during SHOW STATS queries. This does not work for timestamp types, since toStringLiteral (L344-L359) throws an IllegalArgumentException.

Your environment: an Iceberg table with a timestamp column and table statistics up to date.

create table timestamp_test(t timestamp) with (format = 'PARQUET');
insert into timestamp_test values (timestamp '2001-08-22 03:04:05.321');
show stats for timestamp_test;

Expected behavior: SHOW STATS FOR ... should show min/max values for a timestamp column.

Current behavior: the query fails with the below error (copied from the query info JSON):

"failureInfo": {
  "type": "java.lang.IllegalArgumentException",
  "message": "Unexpected type: timestamp",
  "suppressed": [],
  "stack": [
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.toStringLiteral(ShowStatsRewrite.java:358)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.lambda$toStringLiteral$2(ShowStatsRewrite.java:341)",
    "java.base/java.util.Optional.map(Optional.java:265)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.toStringLiteral(ShowStatsRewrite.java:341)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.createColumnStatsRow(ShowStatsRewrite.java:300)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.buildStatisticsRows(ShowStatsRewrite.java:281)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.rewriteShowStats(ShowStatsRewrite.java:206)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.visitShowStats(ShowStatsRewrite.java:146)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite$Visitor.visitShowStats(ShowStatsRewrite.java:112)",
    "com.facebook.presto.sql.tree.ShowStats.accept(ShowStats.java:45)",
    "com.facebook.presto.sql.tree.AstVisitor.process(AstVisitor.java:27)",
    "com.facebook.presto.sql.rewrite.ShowStatsRewrite.rewrite(ShowStatsRewrite.java:109)",
    "com.facebook.presto.sql.rewrite.StatementRewrite.rewrite(StatementRewrite.java:58)",
    "com.facebook.presto.sql.analyzer.Analyzer.analyzeSemantic(Analyzer.java:90)",
    "com.facebook.presto.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:206)",
    "com.facebook.presto.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:103)",
    "com.facebook.presto.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:934)",
    "com.facebook.presto.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:168)",
    "com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)",
    "com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)",
    "com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)",
    "java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)",
    "java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)",
    "java.base/java.lang.Thread.run(Thread.java:829)"
  ]
},
"errorCode": { "code": 65536, "name": "GENERIC_INTERNAL_ERROR", "type": "INTERNAL_ERROR", "retriable": false },
"errorCause": "UNKNOWN"

Possible solution: fix the toStringLiteral method.
prestodbpresto
min_by(x, y, n) / max_by(x, y, n) do not check the consistency of the parameter n
Bug
Test case:

presto> select max_by(c0, c1, c2) from (values
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '0', bigint '10', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '1', bigint '9', bigint '3'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '2', bigint '8', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '3', bigint '7', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '4', bigint '6', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '5', bigint '5', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '6', bigint '4', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '7', bigint '3', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '8', bigint '2', bigint '2'),
    (cast(null as bigint), cast(null as bigint), cast(null as bigint)),
    (bigint '9', bigint '1', bigint '2')
  ) as tmp(c0, c1, c2);

 _col0
--------
 [0, 1]
(1 row)

Notice there is an inconsistent 3 among the 2s in column c2, yet the query succeeds. A similar issue was reported for another function.
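A fix would make the accumulator reject a non-constant n instead of silently using whichever value it saw. Below is a hedged Python model of max_by(x, y, n) with that check — the names and error message are invented for illustration, not taken from the Presto accumulator:

```python
def max_by_n(rows):
    # rows: iterable of (x, y, n) tuples; all-null rows are (None, None, None)
    n_seen = None
    pairs = []
    for x, y, n in rows:
        if n is None:
            continue  # skip null rows, as aggregations do
        if n_seen is None:
            n_seen = n
        elif n != n_seen:
            # the consistency check this report asks for
            raise ValueError(f"third argument of max_by must be constant: {n} != {n_seen}")
        pairs.append((y, x))
    pairs.sort(reverse=True)            # order by y descending
    return [x for _, x in pairs[:n_seen]]

print(max_by_n([(0, 10, 2), (1, 9, 2), (2, 8, 2)]))  # [0, 1]
```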
prestodbpresto
[native] Query with aliased columns in grouping sets produces incorrect results
Bug
Query:

select lna, lnb, sum(quantity)
from (select linenumber lna, linenumber lnb, cast(quantity as bigint) quantity from lineitem)
group by grouping sets ((lna, lnb), (lna), (lnb), ());

Expected behavior:

assertQuery(
    "select lna, lnb, sum(quantity) from (select linenumber lna, linenumber lnb, cast(quantity as bigint) quantity from lineitem) group by grouping sets ((lna, lnb), (lna), (lnb), ())",
    "select linenumber, linenumber, sum(cast(quantity as bigint)) from lineitem group by linenumber "
        + "union all select linenumber, null, sum(cast(quantity as bigint)) from lineitem group by linenumber "
        + "union all select null, linenumber, sum(cast(quantity as bigint)) from lineitem group by linenumber "
        + "union all select null, null, sum(cast(quantity as bigint)) from lineitem");

Current behavior:

java.lang.AssertionError: For query: select lna, lnb, sum(quantity) from ... group by grouping sets ((lna, lnb), (lna), (lnb), ()): not equal
Actual rows (up to 100 of 14 extra rows shown, 22 rows in total):
[null, 3, 274364], [null, 3, 274364], [null, 4, 219863], [null, 4, 219863], [null, 1, 385698], [null, 1, 385698], [null, 6, 109157], [null, 6, 109157], [null, 5, 161918], [null, 5, 161918], [null, 7, 54701], [null, 7, 54701], [null, 2, 330426], [null, 2, 330426]
Expected rows (up to 100 of 14 missing rows shown, 22 rows in total):
[1, 1, 385698], [2, 2, 330426], [3, 3, 274364], [4, 4, 219863], [5, 5, 161918], [6, 6, 109157], [7, 7, 54701], [1, null, 385698], [2, null, 330426], [3, null, 274364], [4, null, 219863], [5, null, 161918], [6, null, 109157], [7, null, 54701]
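GROUPING SETS should behave exactly as the union of the individual GROUP BYs, with null filling the output columns absent from each set — even when, as here, the grouping keys are two aliases of the same underlying column. A small Python model of that expansion (illustrative data and names; not the Presto planner):

```python
from collections import defaultdict

def grouping_sets(rows, keys, agg_col, sets):
    # rows: list of dicts; keys: output key columns; sets: tuples of key names
    out = []
    for s in sets:
        groups = defaultdict(int)
        for r in rows:
            groups[tuple(r[k] for k in s)] += r[agg_col]
        for key, total in groups.items():
            full = {k: None for k in keys}      # keys absent from this set are null
            full.update(dict(zip(s, key)))
            out.append((tuple(full[k] for k in keys), total))
    return out

rows = [{"lna": 1, "lnb": 1, "quantity": 5},
        {"lna": 1, "lnb": 1, "quantity": 7},
        {"lna": 2, "lnb": 2, "quantity": 3}]
print(grouping_sets(rows, ["lna", "lnb"], "quantity",
                    [("lna", "lnb"), ("lna",), ("lnb",)]))
```

Note that the set ("lna",) must still emit real lna values with lnb null — matching the expected rows like [1, null, 385698] — whereas the failing native run nulls out both aliased columns.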
prestodbpresto
[native] Incorrect results for empty grouping sets
Bug
Expected behavior:

select sum(cast(quantity as bigint)) from lineitem where quantity < 0 group by grouping sets ((), ());

Current behavior: the query fails with the following difference:

java.lang.AssertionError: For query: select sum(cast(quantity as bigint)) from lineitem where quantity < 0 group by grouping sets ((), ()): not equal
Actual rows (up to 100 of 0 extra rows shown, 0 rows in total):
Expected rows (up to 100 of 2 missing rows shown, 2 rows in total):
[null]
[null]
prestodbpresto
[native] sum(distinct) gives incorrect results
Bug
Expected behavior:

select max(orderstatus), count(orderkey), sum(distinct orderkey) from orders;

Current behavior:

java.lang.AssertionError: For query: select max(orderstatus), count(orderkey), sum(distinct orderkey) from orders: not equal
Actual rows (up to 100 of 1 extra rows shown, 1 row in total):
[P, 15000, null]
Expected rows (up to 100 of 1 missing rows shown, 1 row in total):
[P, 15000, 449872500]

The property optimize_mixed_distinct_aggregations is enabled in Presto to run this query.
prestodbpresto
min(x, n) / max(x, n) do not check the consistency of the parameter n
Bug
Test case:

select max(c0, c1) from (values
    (cast(null as bigint), cast(null as bigint)),
    (bigint '0', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '1', bigint '3'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '2', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '3', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '4', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '5', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '6', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '7', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '8', bigint '2'),
    (cast(null as bigint), cast(null as bigint)),
    (bigint '9', bigint '2')
  ) as tmp(c0, c1);

It returns:

 _col0
--------
 [9, 8]

which is the result of max(c0, 2) — but there is a 3 among the other 2s in the c1 column, which should have been rejected.
prestodbpresto
Template for opening a documentation request
Bug
Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:1.2.5.RELEASE:repackage (default) on project presto-benchto-benchmarks: Execution default of goal org.springframework.boot:spring-boot-maven-plugin:1.2.5.RELEASE:repackage failed: Plugin org.springframework.boot:spring-boot-maven-plugin:1.2.5.RELEASE or one of its dependencies could not be resolved
prestodbpresto
[native] TPC-DS SF-1k regression in q9
Bug
The TPC-DS 1k native-vs-Java runs show a regression only in the q9 numbers. The plans are clearly different (see the attached Java plan and native plan); investigate the underlying issue.

Your environment:
Presto version used: 0.284
Deployment (cloud or on-prem): cloud
prestodbpresto
[discussion] lead/lag function behavior when the offset is null
Bug
From TestLeadFunction.java:

assertWindowQuery("lead(orderkey, null, 1) OVER (PARTITION BY orderstatus ORDER BY orderkey)",
        resultBuilder(TEST_SESSION, INTEGER, VARCHAR, INTEGER)
                .row(3, "F", null)
                .row(5, "F", null)
                .row(6, "F", null)
                .row(33, "F", null)
                .row(1, "O", null)
                .row(2, "O", null)
                .row(4, "O", null)
                .row(7, "O", null)
                .row(32, "O", null)
                .row(34, "O", null)
                .build());

In this test case the offset is null and the default value is specified as 1, but the expected value is null rather than 1 — while the docs [1] say the default value should be returned: if the offset is null or larger than the window, the default value is returned, or if it is not specified, null is returned. The lag function has a similar issue. [1]
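The two interpretations are easiest to compare in a small executable model. Below is a hedged Python sketch of lead over a single partition (illustrative only, not Presto's window machinery), implementing the behavior the existing test encodes — a null offset propagates null regardless of the supplied default:

```python
def lead(values, index, offset=1, default=None):
    # behavior encoded in TestLeadFunction: a null offset yields null,
    # even when an explicit default value is supplied
    if offset is None:
        return None
    target = index + offset
    if target >= len(values):
        return default   # past the end of the partition: return the default
    return values[target]

part = [3, 5, 6, 33]
print([lead(part, i, None, 1) for i in range(len(part))])  # [None, None, None, None]
print([lead(part, i, 1) for i in range(len(part))])        # [5, 6, 33, None]
```

If the documented semantics are the intended ones, the `offset is None` branch would instead return `default` — which is exactly the discrepancy this issue asks to resolve.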
prestodbpresto
The timeline chart in the splits page doesn't work
Bug
When checking the splits detail of a query, the timeline chart doesn't work (screenshot: split timeline).

Your environment:
Presto version used: 0.284-SNAPSHOT
Storage (HDFS/S3/GCS..): n/a
Data source and connector used: TPC-H
Deployment (cloud or on-prem): local
Pastebin link to the complete debug logs:

Expected behavior:
1. The splits page should have the same layout as the overview, live plan, and stage performance pages, so users can switch between these pages more easily.
2. The timeline should depict the correct split information.

Current behavior: the splits page has its own layout, is not able to switch back to the other pages, and the timeline chart doesn't show any split information.

Possible solution:
Steps to reproduce:
1. Issue a simple query.
2. Click on the query to check the query details.
3. Click on the splits tab.
4. The splits page will contain an empty timeline chart.

Screenshots (if appropriate):
Context:
prestodbpresto
[native] Plan conversion fails: Constant vector cannot wrap constant vector
Bug
select case when date(from_unixtime(orderkey) at time zone 'UTC') = date '1970-01-01'
            then null
            else from_unixtime(orderkey) at time zone 'UTC'
       end
from orders;

causes:

VeloxRuntimeError: BaseVector::encoding() ... VectorEncoding::Simple::CONSTANT (CONSTANT vs. CONSTANT): Constant vector cannot wrap constant vector

cc: @amitkdutta @spershin @zacw7
prestodbpresto
Flaky test: com.facebook.presto.orc.metadata.statistics.TestMapColumnStatisticsBuilder.testAddMapStatisticsNoValues
Bug
ci run
prestodbpresto
cast of varchar containing UTF-8 characters to bigint yields unexpected results
Bug
Casting a varchar containing non-ASCII UTF-8 digit characters to a bigint can return a numeric result, e.g. cast('…' as bigint) yields 1982 for a string of non-ASCII Unicode digits.

Expected behavior: I would expect casts of such non-numeric-looking strings to fail.

Current behavior: I receive a numeric result. This seems to be inherited from Java's Long.parseLong.

Steps to reproduce: cast('…' as bigint) returns 1982 (the literal contains non-ASCII Unicode digit characters); I would expect it to fail. Other such examples return 35, 1410, and 2559.

Context: this is causing inconsistent results between Presto and other engines.
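The behavior matches java.lang.Long.parseLong, which accepts any Unicode decimal digit (via Character.digit), not only ASCII 0-9. Python's int() does the same, which makes the effect easy to reproduce — the Arabic-Indic digits below are an illustrative input, since the report's original characters were likewise non-ASCII digits:

```python
# U+0661 U+0669 U+0668 U+0662 are the Arabic-Indic digits 1, 9, 8, 2;
# both Java's Long.parseLong and Python's int() treat them as decimal digits
s = "\u0661\u0669\u0668\u0662"

print(int(s))        # 1982
print(s.isascii())   # False - there is no ASCII digit in the string
```

An engine that only accepts ASCII digits (as several other SQL engines do) would reject this string, hence the cross-engine inconsistency.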
prestodbpresto
[native] presto_protocol-json.hpp.mustache needs an update to refer to the new json.hpp location
Bug
PR #20619 moved json.hpp but didn't update presto_protocol-json.hpp.mustache. As a result, the presto_protocol generator command produces files that do not compile. cc: @majetideepak @czentgr @amitkdutta
prestodbpresto
Can Presto support skipping corrupted files rather than failing the query?
Bug
Your environment:
Presto version used: Presto 0.275
Storage (HDFS/S3/GCS..): S3, Parquet files
Data source and connector used: Hive
Deployment (cloud or on-prem): cloud
Pastebin link to the complete debug logs:

Expected behavior: when using Presto to query Parquet files on S3, the query fails if some files are corrupted, but this is not what we want. We expect the query to be able to skip the corrupted files and succeed, even if the result may not be complete.

Current behavior:
Possible solution:
Steps to reproduce:
Screenshots (if appropriate):
Context:
prestodbpresto
Flaky test: TestHiveDistributedQueriesWithOptimizedRepartitioning.testRunawayRegexAnalyzerTimeout
Bug
[ERROR] Failures:
[ERROR]   TestHiveDistributedQueriesWithOptimizedRepartitioning.testRunawayRegexAnalyzerTimeout: The exception was thrown with the wrong message. Expected "Regexp matching interrupted" but got "The query optimizer exceeded the timeout of 1.00..."
[INFO] [ERROR] Tests run: 496, Failures: 1, Errors: 0, Skipped: 1

cc: @tdcmeehan @aditi-pandit @rschlussel
prestodbpresto
Improve the comparison operators and functions documentation for the IN operator
Bug
(two screenshots of the documentation attached)
prestodbpresto
testBrutalShutdown / testQueryRetryOnShutdown flaky
Bug
A PR with no relevant changes had this test fail. CI run:
prestodbpresto
[native] e2e tests for probability functions cannot be run
Bug
Expected behaviour: e2e tests for probability functions are contained in the file AbstractTestNativeProbabilityFunctionQueries.java. Since this is an abstract class, a separate subclass implementation is required to run these tests.

Current behaviour: there is currently no subclass of AbstractTestNativeProbabilityFunctionQueries; therefore none of the tests inside the file can be run.

Possible solution: create a subclass of AbstractTestNativeProbabilityFunctionQueries and implement the abstract methods.

Steps to reproduce: this can be confirmed with a quick GitHub code search — the class is referenced only once; there are no subclasses.
prestodbpresto
HBO framework seems not to work with dynamic filtering
Bug
Tried TPC-DS q4 with HBO together with dynamic filtering; the wrong join type is chosen. On investigation, CanonicalPlanGenerator (L268) does not seem to have any handling for joins with runtime stats. When runtime stats are applied to one side, the post-execution statistics can become inaccurate. This is due to our reliance on a table-scan-node equivalent for statistics, since the information about runtime stats is not canonicalized at all.