| repository | issue title | labels | body |
|---|---|---|---|
baomidou/mybatis-plus | PluginUtils | Enhancement | In 3.3.1, PaginationInterceptor causes heavy JVM fillInStackTrace activity: com.baomidou.mybatisplus.core.toolkit.PluginUtils.realTarget(h.target) unwraps the BoundSql / ParameterHandler via org.apache.ibatis.reflection.MetaObject getValue/setValue, which internally raises IllegalAccessException (stack frames at org.apache.ibatis.reflection.invoker.GetFieldInvoker:37 and org.apache.ibatis.reflection.invoker.SetFieldInvoker:37). |
baomidou/mybatis-plus | MappedStatement | Enhancement | Request: let BaseMapper expose or reuse its MappedStatement (related to an upstream MyBatis issue). |
baomidou/mybatis-plus | MyBatis | Duplicate | MyBatis-Plus 3.4.0: question about the bundled MyBatis version (marked as duplicate). |
baomidou/mybatis-plus | selectCount NumberFormatException | Enhancement | mybatis-plus-boot-starter 3.4.0: selectCount returns Integer, so any count above Integer.MAX_VALUE (2147483647) fails with java.lang.NumberFormatException: For input string: "4178000785". Suggestion: have selectCount return Long, or add a selectCountLong that returns Long. |
baomidou/mybatis-plus | TableNameHandler MetaObject | Enhancement | 3.4.x: request that ITableNameHandler / TableNameHandler be given access to the MetaObject. |
baomidou/mybatis-plus | LambdaQueryChainWrapper func | Bug | 3.3.2: calling func on a LambdaQueryChainWrapper throws java.lang.ClassCastException: com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper cannot be cast to com.baomidou.mybatisplus.extension.conditions.query.LambdaQueryChainWrapper. Steps: (1) DemoService extends ServiceImpl; (2) demoService.lambdaQuery().func(e -> { if (input.getDemoTypeId().size() == 1) e.eq(DemoType::getDemoTypeId, input.getDemoTypeId().get(0)); else e.in(DemoType::getDemoTypeId, input.getDemoTypeId()); }).list() works; (3) the same func call on new LambdaQueryChainWrapper<>(demoTypeMapper) throws the ClassCastException above. |
baomidou/mybatis-plus | PackageConfig moduleName controller @RequestMapping | Duplicate | mybatis-plus-generator 3.3.2: when PackageConfig.moduleName is set, the generated controller's @RequestMapping("/user") does not reflect the module name. |
baomidou/mybatis-plus | size = -1 | Bug | v3.3.2: with an ORDER BY on the time column and page size set to -1 (no pagination), PaginationInterceptor.intercept returns early — if (null == page || page.getSize() < 0) return invocation.proceed(); — before String buildSql = concatOrderBy(originalSql, page) runs, so the order added via if (page.getOrders().isEmpty()) page.addOrder(OrderItem.desc("time")) on the QueryWrapper/Page is silently ignored. |
prestodb/presto | [native] intermittent failure in a query containing an empty window frame | Bug | This is an intermittent failure: sometimes the query succeeds and sometimes it fails, producing different actual output. Expected behavior: assertQuery("SELECT array_agg(a) OVER (ORDER BY a DESC NULLS LAST RANGE BETWEEN 10 FOLLOWING AND 1 FOLLOWING) FROM (VALUES 1, 2, 3, NULL, NULL, 2, 1, NULL, NULL) t(a)") should return CAST(NULL AS ARRAY(...)) for every row, since every frame is empty. Current behavior (against a C++ worker): java.lang.AssertionError for the query above — rows are not equal. Actual rows (5 of 5 extra rows shown, 9 rows in total) contain non-empty arrays built from the values 3, 2, 2, 1, 1, 2, 2, 1, 1, 2, 1, 1, 1, 1, 1; expected rows (5 of 5 missing rows shown, 9 rows in total) are all null. |
prestodb/presto | join query results in "large bytecode exceeding the limit imposed by JVM" error in logs | Bug | Create two tables using the Iceberg catalog, tab_1 and tab_2 (table creation script below); both have 1000 varchar columns and differ only in table name: CREATE TABLE iceberg.issjoin.tab_1 (column_1 varchar, column_2 varchar, ..., column_1000 varchar). Then select data from the tables using a join query: SELECT * FROM iceberg.issjoin.tab_1 AS a, iceberg.issjoin.tab_2 AS b WHERE a.column_1 = b.column_1. The attached error log appears in Presto (presto_log.txt). Questions: (1) What is the reason for such an error? (2) How can such an error be controlled? (3) What is the impact of such an error on query execution? |
prestodb/presto | prestissimo: very slow HashProbe for an expanding join | Bug | For the query below, which has an expanding join, we see poor performance against TPC-DS sf1000 (parquet, varchar): WITH ss AS (SELECT ss_net_paid, ss_addr_sk, ss_sold_date_sk, ss_item_sk FROM store_sales WHERE ss_sold_date_sk > 2451200) SELECT sum(s1.ss_item_sk) FROM ss s1, ss s2 WHERE s1.ss_item_sk = s2.ss_item_sk. For query 20240401_092330_00011_ikvfz, the CTE ss matches 567M rows and 4.22 GB is exchanged from stages 2 and 3. The join expands and produces 2.14T rows. The issue is that the HashProbe operator is extremely slow, netting a row transfer speed of only 2.61K rows/s; the net effect is that the query takes 40 minutes to finish. The full QueryInfo JSON is attached (simple agg on self join, 20240401_092330_00011_ikvfz.json). |
prestodb/presto | decimal multiplication fails with "DECIMAL scale must be in range [0, precision]" | Bug | Looks like there is a bug in how the result type for decimal multiplication is calculated: presto:di> SELECT cast(1.2 AS decimal(38,30)) * cast(1.2 AS decimal(38,30)); Query 20240401_120207_79326_2jn6c failed: DECIMAL scale must be in range [0, precision]. cc rui-mo majetideepak tdcmeehan |
prestodb/presto | "unexpected HTTP status code 500" when retrieving version | Bug | Not sure why the test talks to Azul; probably something deep in the setup. 2024-03-29T22:21:55.0412787Z maven test -B -Dair.check.skip-all -Dmaven.javadoc.skip=true -DLogTestDurationListener.enabled=true --no-transfer-progress --fail-at-end; retry: .github/bin/retry; 2024-03-29T22:21:57.7367308Z ERROR: unexpected HTTP status code 500 when retrieving version from |
prestodb/presto | Iceberg distinct value statistics are wrong when filters are applied | Bug | When calling getTableStatistics with a filter on a partition column, the distinct value count and some other statistics are wrong. Environment: Iceberg connector with a Hadoop catalog configured, Presto 0.286. Expected behavior: stats should be correct, or at least a close estimate; NDVs should never exceed the row count (see the test case below for the exact scenario). Current behavior: table statistics are incorrect when the filter is pushed down to the table scan, using the Iceberg connector with a Hadoop-configured catalog. Possible solution: for fixing min/max, make sure predicates are applied properly in the table scan. For fixing the distinct value count, we don't have a good way to merge distinct counts from separate partitions: Iceberg partition statistics offer no well-founded estimates through probabilistic data structures like HLL or theta sketches, so we likely need to resort to some heuristic. A simple starting heuristic could be to take the table-wide distinct count and multiply by the ratio of rows selected by the partition filter: for a table with R rows and D distinct values, if a query over a set of partitions returns R_p rows (summing manifest row counts), the returned distinct value count could be (R_p / R) * D. This assumes a uniform distribution of distinct values across partitions. Steps to reproduce (SQL setup): CREATE TABLE t (i int) WITH (partitioning = ARRAY['i']); INSERT INTO t VALUES 1, 2, 3, 4, 5, 6, 7, 7, 7, 7; ANALYZE t. SHOW STATS FOR t is correct for the whole table: distinct value count 7.0, null fraction 0.0, low 1, high 7, row count 10.0. Now apply filters on i. For i = 7 (bug): SHOW STATS FOR (SELECT * FROM t WHERE i = 7) should show distinct value count 1, min 7, max 7, row count 4, but instead returns the whole-table stats (distinct 7.0, low 1, high 7, row count 10.0). For i <> 7 (bug): distinct value count should be 6 but shows 7.0 (low 1, high 6, row count 6.0 are right). For i <= 7: correct, because this contains all rows. For i >= 7 (bug): distinct value count should be 1 but shows 7.0; the other statistics (low 7, high 7, row count 4.0) are correct. For i > 7: correct — no files should be scanned here, so we shouldn't have any statistics (all nulls, row count 0.0). Context: incorrect NDV counts affect query join planning and result in long execution times. |
prestodb/presto | bug: ORDER BY + LIMIT sort problem — is there any forced-sort configuration? | Bug | SELECT * FROM (SELECT dev_num FROM bi_dm.dm_dev_order_pay_month_df WHERE dt = '20240318' ORDER BY dev_num DESC) a LIMIT 100 — Spark executes this with the expected order, while Presto returns randomly ordered results. SELECT dev_num FROM bi_dm.dm_dev_order_pay_month_df WHERE dt = '20240318' ORDER BY dev_num DESC LIMIT 100 executes normally in both. |
prestodbpresto | native cast performance slower than java | Bug | Spawning a new issue for the comment at issuecomment-1996569271 (experiments with and without cast). I was looking at possible causes for the latency difference observed between the native and Java clusters for TPC-DS q23 (sf10k), and observed the following w.r.t. the performance of the cast operator.

On a native cluster, measuring the read speed of the integer column ss_quantity, read as-is:

presto:tpcds_sf10000_parquet_varchar> explain analyze select ss_quantity from store_sales;
Fragment 1 [SOURCE]: CPU 16.68m, Scheduled 1.19h, Input 28,799,864,615 rows (135.44GB), Output 28,799,864,615 rows (105.94GB), 16 tasks
  TableScan[hive, tpcds_sf10000_parquet_varchar.store_sales] ss_quantity := ss_quantity:int:10:REGULAR
Query 20240314_043226_00018_kvcg, FINISHED, 17 nodes, Splits: 7,771 total, 7,771 done (100.00%)
[Latency: client-side: 1:42, server-side: 1:42] [28.8B rows, 1.49GB] [283M rows/s, 14.9MB/s]

Compare this against the read speed when we are forced to cast the column to a decimal:

presto:tpcds_sf10000_parquet_varchar> explain analyze select cast(ss_quantity as decimal(10,0)) from store_sales;
Fragment 1 [SOURCE]: CPU 25.40m, Scheduled 1.31h, Input 28,799,864,615 rows (135.44GB), Output 28,799,864,615 rows (208.40GB), 16 tasks
  ScanProject[hive, tpcds_sf10000_parquet_varchar.store_sales] expr := cast(ss_quantity as decimal(10,0))
Query 20240314_043432_00019_kvcgs, FINISHED, 17 nodes, Splits: 7,770 total, 7,770 done (100.00%)
[Latency: client-side: 3:24, server-side: 3:23] [28.8B rows, 1.51GB] [142M rows/s, 7.61MB/s]

Latency increased ~2x; read speed decreased ~2x.

On a Java cluster, measuring the read speed of the integer column ss_quantity, read as-is:

presto:tpcds_sf10000_parquet_varchar> explain analyze select ss_quantity from store_sales;
Fragment 1 [SOURCE]: CPU 32.10m, Scheduled 3.06h, Input 28,799,864,615 rows (29.70GB), Output 28,799,864,615 rows (134.11GB), 16 tasks
  TableScan[hive, tpcds_sf10000_parquet_varchar.store_sales] ss_quantity := ss_quantity:int:10:REGULAR
Query 20240314_045541_00011_epfyg, FINISHED, 17 nodes, Splits: 7,771 total, 7,771 done (100.00%)
[Latency: client-side: 3:41, server-side: 3:41] [28.8B rows, 29.7GB] [131M rows/s, 138MB/s]

Compare this against the read speed when we are forced to cast the column to a decimal:

presto:tpcds_sf10000_parquet_varchar> explain analyze select cast(ss_quantity as decimal(10,0)) from store_sales;
Fragment 1 [SOURCE]: CPU 33.14m, Scheduled 1.91h, Input 28,799,864,615 rows (29.71GB), Output 28,799,864,615 rows (241.40GB), 16 tasks
  ScanProject[hive, tpcds_sf10000_parquet_varchar.store_sales] expr := cast(ss_quantity as decimal(10,0))
Query 20240314_045950_00012_epfyg, FINISHED, 17 nodes, Splits: 7,770 total, 7,770 done (100.00%)
[Latency: client-side: 3:29, server-side: 3:28] [28.8B rows, 29.7GB] [138M rows/s, 146MB/s]

No impact on latency or read speed for the query with the cast; the latency is similar to what we observed on the native cluster.

Possible cause(s): cast is slow on the native cluster. The native Parquet reader, compared against the Java reader, can read Parquet into blocks with far less overhead and so is much faster for a straight read. However, since we are forced to cast each value in the second query, we lose any benefit gained from the reader's better performance.
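As a sanity check on the figures quoted above, the throughput numbers and the ~2x slowdown can be re-derived from the row count and the server-side latencies alone. A small sketch (row count and latencies copied from the EXPLAIN ANALYZE summaries; no other data assumed):

```python
# Re-derive rows/s from the CLI summaries: 28.8B input rows divided by the
# server-side latency, and the native cast slowdown factor.

def to_seconds(mmss: str) -> int:
    """Convert an 'm:ss' latency string from the Presto CLI summary to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

ROWS = 28_799_864_615  # store_sales input rows at sf10k, from the plans above

native_plain = ROWS / to_seconds("1:42")  # ~282M rows/s, close to the reported 283M
native_cast = ROWS / to_seconds("3:23")   # ~142M rows/s

print(f"native plain read: {native_plain / 1e6:.0f}M rows/s")
print(f"native with cast:  {native_cast / 1e6:.0f}M rows/s")
print(f"slowdown: {native_plain / native_cast:.1f}x")
```

The same arithmetic on the Java latencies (3:41 vs 3:29) shows essentially no change, matching the report's conclusion that the regression is specific to the native cast.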
prestodbpresto | error while executing refreshResourceGroupRuntimeInfo: com.facebook.drift.client.UncheckedTTransportException: No hosts available | Bug | Not sure where the root cause is on this one yet, but attaching the logs. (The logs are 45MB and 25MB is the max, so let me cut them into two parts.)

2024-03-26T12:27:19.8767754Z 2024-03-26T06:27:19.708-0600 ERROR resource-group-manager-refresher-1-2 com.facebook.presto.execution.resourceGroups.InternalResourceGroupManager Error while executing refreshResourceGroupRuntimeInfo
2024-03-26T12:27:19.8770841Z com.facebook.drift.client.UncheckedTTransportException: No hosts available
2024-03-26T12:27:19.8772464Z     at com.facebook.drift.client.DriftInvocationHandler.invoke(DriftInvocationHandler.java:126)
2024-03-26T12:27:19.8774034Z     at com.sun.proxy.$Proxy159.getResourceGroupInfo(Unknown Source)
prestodbpresto | SQL grammar issue: rule queryNoWith contains an optional block with at least one alternative that can match an empty string | Bug | warning(154): com/facebook/presto/sql/parser/SqlBase.g4:231:0: rule queryNoWith contains an optional block with at least one alternative that can match an empty string
warning: /Users/elharo/presto/com/facebook/presto/sql/parser/SqlBase.g4 [231:0]: rule queryNoWith contains an optional block with at least one alternative that can match an empty string
prestodbpresto | where query with a column name with $ causes the query to fail | Bug | Your environment:
- Presto version used (select node_version from system.runtime.nodes): 0.281-amzn-2
- Storage: S3
- Data source and connector used: presto-python-client 0.8.3
- Deployment: AWS EMR
- Pastebin link to the complete debug logs:

Expected behavior: the query should work and show the filtered values.
Current behavior: it throws an exception.
Possible solution:
Steps to reproduce: run the following queries:

create table column_test ("win$" double, "name$st" varchar, "age$not" int);
insert into column_test values (12.13, 'bob', 19);
insert into column_test values (72.11, 'cindy', 12);

Now run the following query:

select * from column_test where "win$" > 10;

It throws an exception:

java.lang.IllegalArgumentException: Invalid JSON bytes for [simple type, class com.facebook.presto.sql.planner.PlanFragment]
    at com.facebook.airlift.json.JsonCodec.fromJson(JsonCodec.java:199)
    at com.facebook.airlift.json.JsonCodec.fromBytes(JsonCodec.java:230)
    at java.util.Optional.map(Optional.java:215)
    at com.facebook.presto.server.TaskResource.createOrUpdateTask(TaskResource.java:160)
    at sun.reflect.GeneratedMethodAccessor473.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
    ... (Jersey, airlift HTTP filter, and Jetty request-handling frames)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:750)
Caused by: com.fasterxml.jackson.databind.exc.ValueInstantiationException: Cannot construct instance of `com.facebook.presto.common.Subfield`, problem: Invalid subfield path: "win$"
 at [Source: (byte[])"{"id":"1","root":{"@type":"LimitNode","sourceLocation":"20:15","id":"209","source":{"@type":"FilterNode","id":"208","source":{"@type":"TableScanNode","sourceLocation":"20:15","id":"0","table":{"connectorId":"hive","connectorHandle":{"@type":"hive-hadoop2","schemaName":"eng_dev","tableName":"column_test2","transaction":{"@type":"hive-hadoop2","uuid":"ce51d3e0-06a6-4068-9c48-d33e53ab3cbb"},"connectorTableLayout":{"@type":"hive-hadoop2","schemaTableName":{"schema"[truncated 5683 bytes]; line: 1, column: 956]
 (through reference chain: com.facebook.presto.sql.planner.PlanFragment["root"]->com.facebook.presto.spi.plan.LimitNode["source"]->com.facebook.presto.spi.plan.FilterNode["source"]->com.facebook.presto.spi.plan.TableScanNode["table"]->com.facebook.presto.spi.TableHandle["connectorTableLayout"]->com.facebook.presto.hive.HiveTableLayoutHandle["domainPredicate"]->com.facebook.presto.common.predicate.TupleDomain["columnDomains"]->java.util.ArrayList[0]->com.facebook.presto.common.predicate.TupleDomain$ColumnDomain["column"])
    at com.fasterxml.jackson.databind.exc.ValueInstantiationException.from(ValueInstantiationException.java:47)
    at com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:1907)
    at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.wrapAsJsonMappingException(StdValueInstantiator.java:587)
    at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.rewrapCtorProblem(StdValueInstantiator.java:610)
    ... (Jackson databind deserialization frames)
    at com.facebook.airlift.json.JsonCodec.fromJson(JsonCodec.java:196)
    ... 65 more
Caused by: com.facebook.presto.common.InvalidFunctionArgumentException: Invalid subfield path: "win$"
    at com.facebook.presto.common.SubfieldTokenizer.invalidSubfieldPath(SubfieldTokenizer.java:261)
    at com.facebook.presto.common.SubfieldTokenizer.computeNext(SubfieldTokenizer.java:121)
    at com.facebook.presto.common.SubfieldTokenizer.tryToComputeNext(SubfieldTokenizer.java:68)
    at com.facebook.presto.common.SubfieldTokenizer.hasNext(SubfieldTokenizer.java:62)
    at java.util.Iterator.forEachRemaining(Iterator.java:115)
    at com.facebook.presto.common.Subfield.<init>(Subfield.java:225)
    at sun.reflect.GeneratedConstructorAccessor236.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call1(AnnotatedConstructor.java:129)
    at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:332)
    ... 143 more

Screenshots: (if appropriate)
Context: this is just a normal query with a filter; it should work, period.
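Independent of the server-side fix to the Subfield tokenizer, clients that assemble SQL from column names (as a Python client in this report's setup might) can avoid some of these pitfalls by always quoting identifiers. A minimal sketch of ANSI-style identifier quoting; the helper name is an assumption, not part of presto-python-client:

```python
def quote_identifier(name: str) -> str:
    """Wrap an identifier in double quotes, escaping any embedded quotes
    by doubling them (ANSI SQL rule)."""
    return '"' + name.replace('"', '""') + '"'

# Column with a special character, as in the reproduction above.
col = "win$"
sql = f"select * from column_test where {quote_identifier(col)} > 10"
print(sql)  # select * from column_test where "win$" > 10
```

Note this only guards the SQL text itself; the failure in this report happens later, when the coordinator serializes the plan's subfield paths, so quoting alone does not work around the bug.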
prestodbpresto | JDBC driver returns inconsistent capitalization of vendor type names | Bug | The vendor type names returned by DatabaseMetaData.getTypeInfo and the type names returned in ResultSetMetaData do not use consistent capitalization.

create or replace view as select * from (values
  (cast(0 as integer), null),
  (cast(1 as integer), interval '0-00' year to month),
  (cast(2 as integer), interval '1-01' year to month),
  (cast(3 as integer), interval '1-01' year to month),
  (cast(4 as integer), interval '10-10' year to month)
) as t (rnum, ciyrmo);
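Until the driver reports the two names with consistent capitalization, client code that matches the vendor type name from getTypeInfo against the one from ResultSetMetaData.getColumnTypeName needs a case-insensitive comparison. A small sketch of such a workaround (the function name is illustrative, not a driver API):

```python
def same_type_name(a: str, b: str) -> bool:
    """Compare two vendor type names, ignoring case and surrounding
    whitespace, to tolerate the driver's inconsistent capitalization."""
    return a.strip().casefold() == b.strip().casefold()

# The mismatch reported here: metadata vs. result-set spellings of the
# same interval type.
print(same_type_name("INTERVAL YEAR TO MONTH", "interval year to month"))  # True
```

This is a client-side mitigation only; the underlying fix is for the driver to emit one canonical spelling in both places.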
prestodbpresto | native plan conversion error: Unexpected token char at 16 | Bug |
presto:tpcds_sf1_parquet_partitioned> select 1 from store_sales, date_dim where ss_sold_date_sk = d_date_sk;

Query 20240315_090432_00030_wpvtm failed: Unexpected token char at 16
VeloxRuntimeError: Unexpected token char at 16

Stack trace (demangled, key frames; the folly future/executor/thread-pool tail collapsed):
    facebook::velox::process::StackTrace::StackTrace(int)
    facebook::velox::VeloxException::VeloxException(...)
    facebook::velox::detail::veloxCheckFail<facebook::velox::VeloxRuntimeError, ...>(...)
    facebook::velox::type::fbhive::HiveTypeParser::parseType()
    facebook::velox::type::fbhive::HiveTypeParser::parse(std::string const&)
    facebook::presto::(anonymous namespace)::toHiveTableHandle(...)
    facebook::presto::VeloxQueryPlanConverterBase::toVeloxQueryPlan(std::shared_ptr<protocol::TableScanNode const> const&, ...)
    facebook::presto::VeloxQueryPlanConverterBase::toVeloxQueryPlan(std::shared_ptr<protocol::PlanNode const> const&, ...)
    facebook::presto::VeloxQueryPlanConverterBase::toVeloxQueryPlan(protocol::PlanFragment const&, ...)
    facebook::presto::TaskResource::createOrUpdateTask(proxygen::HTTPMessage*, ...)
    ... folly future/executor and thread-pool frames ...
    start_thread
    clone
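The frames above point at HiveTypeParser rejecting a type keyword (here `char`) while walking a Hive type signature, reporting the token and its character offset. A toy sketch of that failure mode; this is an illustration with an assumed supported-type set, not the actual HiveTypeParser logic:

```python
import re

# Hypothetical set of type keywords the parser accepts; `char` is absent,
# mirroring the unsupported token in the report.
SUPPORTED = {"boolean", "tinyint", "smallint", "int", "bigint", "real",
             "double", "varchar", "varbinary", "timestamp", "date",
             "decimal", "array", "map", "row", "struct"}

def parse_hive_type(type_string: str) -> None:
    """Walk the alphabetic tokens of a Hive type signature and fail on the
    first keyword outside the supported set, reporting its offset."""
    for m in re.finditer(r"[a-z]+", type_string):
        if m.group() not in SUPPORTED:
            raise ValueError(f"Unexpected token {m.group()} at {m.start()}")

parse_hive_type("map<varchar,int>")  # accepted silently
try:
    parse_hive_type("map<varchar,char(16)>")
except ValueError as e:
    print(e)  # Unexpected token char at 12
```

The offset in the real error ("at 16") depends on the actual column's type string; the sketch only shows why the message carries a token name plus a position.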
prestodbpresto | unexpected results from map_normalize | Bug | The map_normalize function doesn't seem to have any special handling for the case when the total sum of the values is zero, thus we get the following results:

presto:di> select map_normalize(map(array[1, 2], array[1, -1]));
 {1=Infinity, 2=-Infinity}
presto:di> select map_normalize(map(array[1, 2], array[null, 0]));
 {1=null, 2=NaN}
presto:di> select map_normalize(map(array[1, 2], array[0, 0]));
 {1=NaN, 2=NaN}
presto:di> select map_normalize(map(array[1, 2, 3], array[0, 1, -1]));
 {1=NaN, 2=Infinity, 3=-Infinity}

I wonder if this is intentional; the docs don't clarify that case: "map_normalize(x) — returns the map with the same keys but all non-null values scaled proportionally so that the sum of values becomes 1. Map entries with null values remain unchanged." Yet the sums of the values in the results above are clearly not 1. cc: tdcmeehan, rschlussel, aditi-pandit, steveburnett, kaikalur
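The results above fall straight out of IEEE-754 division by a zero sum. A hypothetical re-implementation of the documented behavior, for illustration only (Python raises on float division by zero, so a helper reproduces the IEEE semantics the SQL engine gets for free):

```python
import math

def ieee_div(x: float, y: float) -> float:
    """Division with IEEE-754 semantics: nonzero/0 -> +-Infinity, 0/0 -> NaN."""
    try:
        return x / y
    except ZeroDivisionError:
        if x == 0.0 or math.isnan(x):
            return math.nan
        same_sign = (x > 0) == (math.copysign(1.0, y) > 0)
        return math.inf if same_sign else -math.inf

def map_normalize(m):
    """Scale non-null values so they sum to 1; null entries pass through."""
    total = sum(v for v in m.values() if v is not None)
    return {k: (None if v is None else ieee_div(v, total)) for k, v in m.items()}

print(map_normalize({1: 1.0, 2: -1.0}))  # {1: inf, 2: -inf}
```

So with a zero sum the function is well-defined but arguably useless; the open question in the report is whether the engine should instead return null or raise for that case.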
prestodbpresto | Iceberg variable-width column data sizes are generally wrong | Bug | Your environment: any Iceberg table.

Expected behavior: data size statistics should accurately reflect the size in memory when operating in Presto.

Current behavior: Iceberg's TableStatisticsMaker tries to use the Iceberg manifest file information to calculate the data size for each column. This information is used in the optimizer for things like determining the join distribution type based on row size. However, in the Iceberg spec this data is actually the on-disk data size, not necessarily the in-memory size, which is what we care about. It turns out that most, if not all, of the data size statistics Iceberg reports are incorrect by an order of 3-5x. This amount could change depending on disk storage format, compression, encryption, etc.

Possible solution: there are two dimensions to the solution:
1. Do we want to allow ANALYZE on Iceberg tables to generate stats for data size?
2. Can we still utilize the manifest information even when stats don't exist?

                           | without analyze                              | with analyze
don't change manifest info | don't change the code at all                 | just add data size collection support to Iceberg
do change manifest info    | add some heuristics to TableStatisticsMaker  | add support for correct data sizes with analyze and try to improve TableStatisticsMaker with heuristics

Options:
1. Don't return data size stats at all, except after an ANALYZE.
3. Return incorrect stats, but allow ANALYZE to overwrite and improve them.
4. Keep incorrect stats, but apply some adjustment factor based on file type/configuration.

Steps to reproduce: use the IcebergQueryRunner and execute SHOW STATS on any TPC-H/DS table:

presto:tpch> show stats for (select comment from orders);
 column_name | data_size | distinct_values_count | nulls_fraction | row_count | low_value | high_value
 comment     | 167815.0  | NULL                  | 0.0            | NULL      | NULL      | NULL
 NULL        | NULL      | NULL                  | NULL           | 15000.0   | NULL      | NULL
(2 rows)

Query 20240314_192652_00012_nmth9, FINISHED, 1 node, Splits: 1 total, 1 done (100.00%)
[Latency: client-side: 124ms, server-side: 83ms] [0 rows, 0B] [0 rows/s, 0B/s]

Check the value returned by the aggregation function used to calculate data size, sum_data_size_for_stats:

presto:tpch> select sum_data_size_for_stats(comment) from orders;
 _col0
 727364
(1 row)

Query 20240314_192726_00013_nmth9, FINISHED, 4 nodes, Splits: 9 total, 9 done (100.00%)
[Latency: client-side: 295ms, server-side: 244ms] [15K rows, 228KB] [61.5K rows/s, 934KB/s]

727364 vs 167815: off by a factor of about 4.5.

Context: can cause query slowdowns if an incorrect join distribution type is chosen.
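Option 4 above suggests scaling the manifest's on-disk sizes by a per-format adjustment factor. A sketch of what such a heuristic could look like; the multiplier table is a made-up placeholder, with only the ~4.5x Parquet discrepancy taken from the numbers in this report (727364 / 167815):

```python
# Hypothetical on-disk -> in-memory multipliers, keyed by file format.
# Real factors would need to account for compression, encoding, etc.
ON_DISK_TO_IN_MEMORY = {
    "PARQUET": 4.5,  # roughly matches the 727364 / 167815 ratio observed above
    "ORC": 4.0,      # placeholder
}

def adjusted_data_size(on_disk_bytes: float, file_format: str) -> float:
    """Scale the manifest's on-disk column size toward an in-memory estimate,
    falling back to the raw size for unknown formats."""
    return on_disk_bytes * ON_DISK_TO_IN_MEMORY.get(file_format, 1.0)

print(adjusted_data_size(167815.0, "PARQUET"))  # 755167.5, near the measured 727364
```

A fixed factor can never be exact (it ignores per-column compressibility), which is why the table above pairs it with ANALYZE-collected stats that can overwrite the estimate.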
prestodbpresto | incorrect stats computed/stored for partitioned hive tables | Bug | In #22149 (issuecomment-1988362783) we observed that the stats stored after running ANALYZE on a partitioned table do not match expected values: row counts are off, nulls fractions are off. The correct stats are those from the unpartitioned version of the same data.

Partitioned stats:

presto> show stats for hive.tpcds_sf1000_parquet_varchar_part.store_sales;
 column_name           | data_size | distinct_values_count | nulls_fraction        | row_count           | low_value | high_value
 ss_sold_time_sk       | NULL      | 47961.0               | 0.23922516057106902   | NULL                | 28800     | 75599
 ss_item_sk            | NULL      | 297612.0              | 0.0                   | NULL                | 1         | 300000
 ss_customer_sk        | NULL      | 1.2124495E7           | 0.23924228921378904   | NULL                | 1         | 12000000
 ss_cdemo_sk           | NULL      | 1890006.0             | 0.23931612089546253   | NULL                | 1         | 1920800
 ss_hdemo_sk           | NULL      | 7082.0                | 0.23927428620311655   | NULL                | 1         | 7200
 ss_addr_sk            | NULL      | 5947530.0             | 0.23927779717312136   | NULL                | 1         | 6000000
 ss_store_sk           | NULL      | 513.0                 | 0.23923616161041744   | NULL                | 1         | 1000
 ss_promo_sk           | NULL      | 1483.0                | 0.2392977450723527    | NULL                | 1         | 1500
 ss_ticket_number      | NULL      | 1.0071624E8           | 0.0                   | NULL                | 2         | 240000000
 ss_quantity           | NULL      | 100.0                 | 0.23922890211223832   | NULL                | 1         | 100
 ss_wholesale_cost     | NULL      | 10091.0               | 0.2392495696729831    | NULL                | 1.0       | 100.0
 ss_list_price         | NULL      | 19495.0               | 0.23928140945469847   | NULL                | 1.0       | 200.0
 ss_sales_price        | NULL      | 19348.0               | 0.2392575802741981    | NULL                | 0.0       | 200.0
 ss_ext_discount_amt   | NULL      | 529048.0              | 0.23929047160016365   | NULL                | 0.0       | 19336.68
 ss_ext_sales_price    | NULL      | 600147.0              | 0.23928937114687854   | NULL                | 0.0       | 19800.0
 ss_ext_wholesale_cost | NULL      | 388752.0              | 0.23927765743302168   | NULL                | 1.0       | 10000.0
 ss_ext_list_price     | NULL      | 731384.0              | 0.2392458316253163    | NULL                | 1.0       | 19998.0
 ss_ext_tax            | NULL      | 115731.0              | 0.23925412520023315   | NULL                | 0.0       | 1758.24
 ss_coupon_amt         | NULL      | 529048.0              | 0.23929047160016365   | NULL                | 0.0       | 19336.68
 ss_net_paid           | NULL      | 786675.0              | 0.23930163683412922   | NULL                | 0.0       | 19800.0
 ss_net_paid_inc_tax   | NULL      | 1098165.0             | 0.2393256861052866    | NULL                | 0.0       | 21294.24
 ss_net_profit         | NULL      | 1023699.0             | 0.2392519068261505    | NULL                | -10000.0  | 9900.0
 ss_sold_date_sk       | NULL      | 1823.0                | 0.024822416010531874  | NULL                | 2450816   | 2452642
 NULL                  | NULL      | NULL                  | NULL                  | 5.221121221440001E9 | NULL      | NULL
(24 rows)

Unpartitioned stats:

presto> show stats for hive.tpcds_sf1000_parquet_varchar.store_sales;
 column_name           | data_size | distinct_values_count | nulls_fraction        | row_count     | low_value | high_value
 ss_sold_date_sk       | NULL      | 1820.0                | 0.04500048022595944   | NULL          | 2450816   | 2452642
 ss_sold_time_sk       | NULL      | 47961.0               | 0.04499776111740666   | NULL          | 28800     | 75599
 ss_item_sk            | NULL      | 297612.0              | 0.0                   | NULL          | 1         | 300000
 ss_customer_sk        | NULL      | 1.2236563E7           | 0.04499698125304584   | NULL          | 1         | 12000000
 ss_cdemo_sk           | NULL      | 1890006.0             | 0.04500093578341331   | NULL          | 1         | 1920800
 ss_hdemo_sk           | NULL      | 7082.0                | 0.044998298272422764  | NULL          | 1         | 7200
 ss_addr_sk            | NULL      | 5947530.0             | 0.044996645487757815  | NULL          | 1         | 6000000
 ss_store_sk           | NULL      | 513.0                 | 0.04499048261485481   | NULL          | 1         | 1000
 ss_promo_sk           | NULL      | 1483.0                | 0.04499917917887129   | NULL          | 1         | 1500
 ss_ticket_number      | NULL      | 2.43256717E8          | 0.0                   | NULL          | 1         | 240000000
 ss_quantity           | NULL      | 100.0                 | 0.04499472152140729   | NULL          | 1         | 100
 ss_wholesale_cost     | NULL      | 10091.0               | 0.04499681007177697   | NULL          | 1.0       | 100.0
 ss_list_price         | NULL      | 19495.0               | 0.04499915278987244   | NULL          | 1.0       | 200.0
 ss_sales_price        | NULL      | 19536.0               | 0.044999514249711985  | NULL          | 0.0       | 200.0
 ss_ext_discount_amt   | NULL      | 1149239.0             | 0.04500334759901894   | NULL          | 0.0       | 19778.0
 ss_ext_sales_price    | NULL      | 738120.0              | 0.04499642951463563   | NULL          | 0.0       | 19972.0
 ss_ext_wholesale_cost | NULL      | 388752.0              | 0.04499845764808689   | NULL          | 1.0       | 10000.0
 ss_ext_list_price     | NULL      | 752801.0              | 0.04499803472965792   | NULL          | 1.0       | 20000.0
 ss_ext_tax            | NULL      | 150267.0              | 0.04499627500010287   | NULL          | 0.0       | 1797.48
 ss_coupon_amt         | NULL      | 1149239.0             | 0.04500334759901894   | NULL          | 0.0       | 19778.0
 ss_net_paid           | NULL      | 1274594.0             | 0.0449999816127706    | NULL          | 0.0       | 19972.0
 ss_net_paid_inc_tax   | NULL      | 1697156.0             | 0.04500332989061181   | NULL          | 0.0       | 21769.48
 ss_net_profit         | NULL      | 1499246.0             | 0.044990789213354636  | NULL          | -10000.0  | 9986.0
 NULL                  | NULL      | NULL                  | NULL                  | 2.879987999E9 | NULL      | NULL
(24 rows)
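The partitioned table reports nulls fractions around 0.239 and a row count of ~5.22B, while the unpartitioned truth is ~0.045 and ~2.88B, which points at how per-partition stats are merged into table-level stats. The correct merge weights each partition's nulls fraction by its row count. A sketch of that formula, with made-up partition numbers (this is the expected aggregation, not Presto's current code):

```python
def merge_nulls_fraction(partitions):
    """Merge per-partition stats into table-level stats.
    partitions: list of (row_count, nulls_fraction) tuples, one per partition.
    Returns (total_row_count, merged_nulls_fraction)."""
    total_rows = sum(rows for rows, _ in partitions)
    total_nulls = sum(rows * frac for rows, frac in partitions)
    return total_rows, total_nulls / total_rows

# Two hypothetical partitions with similar per-partition nulls fractions:
rows, frac = merge_nulls_fraction([(1_000_000, 0.05), (3_000_000, 0.045)])
print(rows, frac)  # 4,000,000 rows, merged fraction ~0.04625
```

Any merge that deviates from this row-weighted average (e.g. averaging fractions unweighted, or double-counting overlapping partitions, which would also inflate the row count) produces exactly the kind of drift shown in the two outputs above.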
prestodb/presto | Native seems to generate larger data sizes than Java | Bug | Environment: these tests were performed against Presto native and Java on 0.287, with Hive and TPC-DS SF10K on a 16-node cluster.

Issue: we originally found this issue using TPC-DS q23. We noticed that the Presto native execution took significantly longer to complete than the Java baseline (here the base is native while the target is Java); wall time (ms) for the native query was over 5x longer. [image]

This discrepancy was traced in the q23 query plan to a join in native which had significantly more data input than the corresponding Java execution. Notice that in the native execution the data generated by Fragment 10 is small (519GB), but when read by the RemoteSource as the input to the InnerJoin, the input is 5TB in size. Java, in comparison, has Fragment 10 output 734GB and accounts RemoteSource[10] as having an 858GB input.

Java execution:

    InnerJoin[PlanNodeId 6008][("ss_customer_sk_72", "c_customer_sk")][$hashvalue_590, $hashvalue_592] => [ss_quantity_79:integer, ss_sales_price_82:decimal(7,2), c_customer_sk:bigint]
        Estimates: {source: CostBasedSourceInfo, rows: 27,651,160,233 (463.54GB), cpu: 3,949,818,895,340.49, memory: 1,170,000,000.00, network: 896,846,758,984.00}
        CPU: 1.59h (2.47%), Scheduled: 2.28h (1.50%), Output: 27,503,916,103 rows (601.96GB)
        Left (probe) Input avg.: 112,499,471.15 rows, Input std.dev.: 89.31%
        Right (build) Input avg.: 253,906.25 rows, Input std.dev.: 0.20%
        Distribution: PARTITIONED
        RemoteSource[10] => [ss_customer_sk_72:bigint, ss_quantity_79:integer, ss_sales_price_82:decimal(7,2), $hashvalue_590:bigint]
            CPU: 10.91m (0.28%), Scheduled: 18.53m (0.20%), Output: 28,799,864,615 rows (858.30GB)
            Input avg.: 112,499,471.15 rows, Input std.dev.: 89.31%
        LocalExchange[PlanNodeId 7205][HASH][$hashvalue_592] (c_customer_sk) => [c_customer_sk:bigint, $hashvalue_592:bigint]
            Estimates: {source: CostBasedSourceInfo, rows: 65,000,000 (1.09GB), cpu: 4,095,000,000.00, memory: 0.00, network: 1,170,000,000.00}
            CPU: 2.44s (0.00%), Scheduled: 2.57s (0.00%), Output: 65,000,000 rows (1.09GB)
            Input avg.: 253,906.25 rows, Input std.dev.: 153.41%
            RemoteSource[11] => [c_customer_sk:bigint, $hashvalue_593:bigint]
                CPU: 323.00ms (0.00%), Scheduled: 340.00ms (0.00%), Output: 65,000,000 rows (1.09GB)
                Input avg.: 253,906.25 rows, Input std.dev.: 153.41%

    Fragment 10 [SOURCE]
        CPU: 3.00h, Scheduled: 7.09h, Input: 28,799,864,615 rows (200.62GB); per task: avg.: 1,799,991,538.44, std.dev.: 1,339,212,806.25, Output: 28,799,864,615 rows (733.99GB), 16 tasks
        Output layout: [ss_customer_sk_72, ss_quantity_79, ss_sales_price_82, $hashvalue_591]
        Output partitioning: HASH [ss_customer_sk_72][$hashvalue_591]
        Stage Execution Strategy: UNGROUPED_EXECUTION
        ScanProject[PlanNodeId 27,7993][table = TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpcds_sf10000_parquet_varchar, tableName=store_sales, analyzePartitionValues=Optional.empty}', layout='Optional[tpcds_sf10000_parquet_varchar.store_sales{}]'}, grouped = false, projectLocality = LOCAL] => [ss_customer_sk_72:bigint, ss_quantity_79:integer, ss_sales_price_82:decimal(7,2), $hashvalue_591:bigint]
            Estimates: {source: CostBasedSourceInfo, rows: 28,799,864,615 (834.16GB), cpu: 636,477,977,449.00, memory: 0.00, network: 0.00}/{source: CostBasedSourceInfo, rows: 28,799,864,615 (834.16GB), cpu: 1,532,154,736,433.00, memory: 0.00, network: 0.00}
            CPU: 3.00h (4.66%), Scheduled: 9.52h (6.25%), Output: 28,799,864,615 rows (734.00GB)
            Input avg.: 3,714,673.63 rows, Input std.dev.: 19.08%
            $hashvalue_591 := combine_hash(BIGINT'0', COALESCE($operator$hash_code(ss_customer_sk_72), BIGINT'0')) (24:16)
            LAYOUT: tpcds_sf10000_parquet_varchar.store_sales{}
            ss_customer_sk_72 := ss_customer_sk:bigint:3:REGULAR (24:16)
            ss_sales_price_82 := ss_sales_price:decimal(7,2):13:REGULAR (24:16)
            ss_quantity_79 := ss_quantity:int:10:REGULAR (24:16)
            Input: 28,799,864,615 rows (200.58GB), Filtered: 0.00%

Native execution:

    InnerJoin[PlanNodeId 5900][("ss_customer_sk_72", "c_customer_sk")] => [ss_quantity_79:integer, ss_sales_price_82:decimal(7,2), c_customer_sk:bigint]
        Estimates: {source: CostBasedSourceInfo, rows: 27,651,160,233 (231.77GB), cpu: 2,532,819,573,286.49, memory: 585,000,000.00, network: 637,062,977,449.00}
        CPU: 12.64m (0.71%), Scheduled: 12.71m (0.07%), Output: 27,503,916,103 rows (3.32TB)
        Distribution: PARTITIONED
        RemoteSource[10] => [ss_customer_sk_72:bigint, ss_quantity_79:integer, ss_sales_price_82:decimal(7,2)]
            CPU: 8.64m (0.48%), Scheduled: 16.17m (0.09%), Output: 28,799,864,615 rows (5.02TB)
            Input avg.: 56,249,735.58 rows, Input std.dev.: 565.42%
        LocalExchange[PlanNodeId 6953][HASH] (c_customer_sk) => [c_customer_sk:bigint]
            Estimates: {source: CostBasedSourceInfo, rows: 65,000,000 (557.90MB), cpu: 1,755,000,000.00, memory: 0.00, network: 585,000,000.00}
            CPU: 4.56s (0.00%), Scheduled: 6.17s (0.00%), Output: 65,000,000 rows (929.50MB)
            Input avg.: 126,953.13 rows, Input std.dev.: 556.78%
            RemoteSource[11] => [c_customer_sk:bigint]
                CPU: 690.00ms (0.00%), Scheduled: 733.00ms (0.00%), Output: 65,000,000 rows (929.50MB)

    Fragment 10 [SOURCE]
        CPU: 1.07h, Scheduled: 3.01h, Input: 28,799,864,615 rows (669.99GB); per task: avg.: 1,799,991,538.44, std.dev.: 1,332,162,619.44, Output: 28,799,864,615 rows (519.62GB), 16 tasks
        Output layout: [ss_customer_sk_72, ss_quantity_79, ss_sales_price_82]
        Output partitioning: HASH [ss_customer_sk_72]
        Stage Execution Strategy: UNGROUPED_EXECUTION
        TableScan[PlanNodeId 27][TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpcds_sf10000_parquet_varchar, tableName=store_sales, analyzePartitionValues=Optional.empty}', layout='Optional[tpcds_sf10000_parquet_varchar.store_sales{}]'}, grouped = false] => [ss_customer_sk_72:bigint, ss_quantity_79:integer, ss_sales_price_82:decimal(7,2)]
            Estimates: {source: CostBasedSourceInfo, rows: 28,799,864,615 (592.77GB), cpu: 636,477,977,449.00, memory: 0.00, network: 0.00}
            CPU: 1.07h (3.57%), Scheduled: 3.01h (1.02%), Output: 28,799,864,615 rows (519.62GB)
            Input avg.: 56,249,735.58 rows, Input std.dev.: 696.62%
            LAYOUT: tpcds_sf10000_parquet_varchar.store_sales{}
            ss_quantity_79 := ss_quantity:int:10:REGULAR (24:16)
            ss_customer_sk_72 := ss_customer_sk:bigint:3:REGULAR (24:16)
            ss_sales_price_82 := ss_sales_price:decimal(7,2):13:REGULAR (24:16)
            Input: 28,799,864,615 rows (669.99GB), Filtered: 0.00%

Our suspicion is that the severe increase in data is likely the cause of the query slowdown. I was able to extract the relevant query from the plan and confirm that this issue persists even in a simpler query:

    SELECT CAST(ss_quantity AS DECIMAL(10,0)) * ss_sales_price, ss_customer_sk, ss_sales_price, ss_quantity
    FROM store_sales, customer
    WHERE ss_customer_sk = c_customer_sk

I ran this query on SF1K with a 2-node cluster and found that there is still an issue with large data transfer sizes. You'll see here that the Java execution has an input size of 86.03GB for Fragment 1, while native has an input of 335.08GB.

Java EXPLAIN ANALYZE:

    Fragment 1 [HASH]
        CPU: 19.63m, Scheduled: 35.02m, Input: 2,891,987,999 rows (86.03GB); per task: avg.: 1,445,993,999.50, std.dev.: 64,709,792.50, Output: 2,750,397,233 rows (97.34GB), 2 tasks
        Output layout: [expr, ss_customer_sk, ss_sales_price, ss_quantity]
        Output partitioning: SINGLE []
        Stage Execution Strategy: UNGROUPED_EXECUTION
        Project[PlanNodeId 4][projectLocality = LOCAL] => [expr:decimal(17,2), ss_customer_sk:bigint, ss_sales_price:decimal(7,2), ss_quantity:integer]
            Estimates: {source: CostBasedSourceInfo, rows: 2,750,397,233 (78.78GB), cpu: 479,689,129,163.18, memory: 216,000,000.00, network: 89,783,768,320.00}
            CPU: 9.34m (24.71%), Scheduled: 16.64m (12.53%), Output: 2,750,397,233 rows (97.34GB)
            Input avg.: 85,949,913.53 rows, Input std.dev.: 58.48%
            expr := CAST(ss_quantity AS decimal(10,0)) * ss_sales_price (1:129)
            InnerJoin[PlanNodeId 359][("ss_customer_sk", "c_customer_sk")][$hashvalue, $hashvalue_19] => [ss_customer_sk:bigint, ss_quantity:integer, ss_sales_price:decimal(7,2)]
                Estimates: {source: CostBasedSourceInfo, rows: 2,750,397,233 (80.59GB), cpu: 395,097,171,901.88, memory: 216,000,000.00, network: 89,783,768,320.00}
                CPU: 8.84m (23.36%), Scheduled: 15.79m (11.89%), Output: 2,750,397,233 rows (74.28GB)
                Left (probe) Input avg.: 89,999,624.97 rows, Input std.dev.: 57.99%
                Right (build) Input avg.: 375,000.00 rows, Input std.dev.: 0.16%
                Collisions avg.: 235,810.06 (100.23% est.), Collisions std.dev.: 100.01%
                Distribution: PARTITIONED
                RemoteSource[2] => [ss_customer_sk:bigint, ss_quantity:integer, ss_sales_price:decimal(7,2), $hashvalue:bigint]
                    CPU: 1.43m (3.78%), Scheduled: 2.59m (1.95%), Output: 2,879,987,999 rows (85.83GB)
                    Input avg.: 89,999,624.97 rows, Input std.dev.: 57.99%
                LocalExchange[PlanNodeId 435][HASH][$hashvalue_19] (c_customer_sk) => [c_customer_sk:bigint, $hashvalue_19:bigint]
                    Estimates: {source: CostBasedSourceInfo, rows: 12,000,000 (366.21MB), cpu: 756,000,000.00, memory: 0.00, network: 216,000,000.00}
                    CPU: 558.00ms (0.02%), Scheduled: 709.00ms (0.01%), Output: 12,000,000 rows (206MB)
                    Input avg.: 375,000.00 rows, Input std.dev.: 268.30%
                    RemoteSource[3] => [c_customer_sk:bigint, $hashvalue_20:bigint]
                        CPU: 59.00ms (0.00%), Scheduled: 83.00ms (0.00%), Output: 12,000,000 rows (206MB)
                        Input avg.: 375,000.00 rows, Input std.dev.: 268.30%

    Fragment 2 [SOURCE]
        CPU: 18.15m, Scheduled: 1.04h, Input: 2,879,987,999 rows (20.00GB); per task: avg.: 1,439,993,999.50, std.dev.: 82,255,679.50, Output: 2,879,987,999 rows (73.54GB), 2 tasks
        Output layout: [ss_customer_sk, ss_quantity, ss_sales_price, $hashvalue_18]
        Output partitioning: HASH [ss_customer_sk][$hashvalue_18]
        Stage Execution Strategy: UNGROUPED_EXECUTION
        ScanProject[PlanNodeId 0,477][table = TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpcds_sf1000_parquet_varchar, tableName=store_sales, analyzePartitionValues=Optional.empty}', layout='Optional[tpcd_sf1
            Estimates: {source: CostBasedSourceInfo, rows: 2,879,987,999 (83.42GB), cpu: 63,647,876,329.00, memory: 0.00, network: 0.00}/{source: CostBasedSourceInfo, rows: 2,879,987,999 (83.42GB), cpu: 153,215,644,649.00, memory: 0.00, netw
            CPU: 18.15m (47.98%), Scheduled: 1.63h (73.46%), Output: 2,879,987,999 rows (73.54GB)
            Input avg.: 2,742,845.71 rows, Input std.dev.: 37.70%
            $hashvalue_18 := combine_hash(BIGINT'0', COALESCE($operator$hash_code(ss_customer_sk), BIGINT'0')) (1:128)
            LAYOUT: tpcds_sf1000_parquet_varchar.store_sales{}
            ss_sales_price := ss_sales_price:decimal(7,2):13:REGULAR (1:128)
            ss_quantity := ss_quantity:int:10:REGULAR (1:128)
            ss_customer_sk := ss_customer_sk:bigint:3:REGULAR (1:128)
            Input: 2,879,987,999 rows (20.00GB), Filtered: 0.00%

    Fragment 3 [SOURCE]
        CPU: 3.28s, Scheduled: 9.10s, Input: 12,000,000 rows (91.70MB); per task: avg.: 6,000,000.00, std.dev.: 2,952,680.00, Output: 12,000,000 rows (183.11MB), 2 tasks
        Output layout: [c_customer_sk, $hashvalue_21]
        Output partitioning: HASH [c_customer_sk][$hashvalue_21]
        Stage Execution Strategy: UNGROUPED_EXECUTION
        ScanProject[PlanNodeId 1,478][table = TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpcds_sf1000_parquet_varchar, tableName=customer, analyzePartitionValues=Optional.empty}', layout='Optional[tpcds_sf1000
            Estimates: {source: CostBasedSourceInfo, rows: 12,000,000 (205.99MB), cpu: 108,000,000.00, memory: 0.00, network: 0.00}/{source: CostBasedSourceInfo, rows: 12,000,000 (205.99MB), cpu: 324,000,000.00, memory: 0.00, network: 0.00}
            CPU: 3.28s (0.14%), Scheduled: 12.69s (0.16%), Output: 12,000,000 rows (183.11MB)
            Input avg.: 1,090,909.09 rows, Input std.dev.: 60.68%
            $hashvalue_21 := combine_hash(BIGINT'0', COALESCE($operator$hash_code(c_customer_sk), BIGINT'0')) (1:141)
            LAYOUT: tpcds_sf1000_parquet_varchar.customer{}
            c_customer_sk := c_customer_sk:bigint:0:REGULAR (1:141)
            Input: 12,000,000 rows (91.70MB), Filtered: 0.00%

Native EXPLAIN ANALYZE:

    Fragment 1 [HASH]
        CPU: 56.06m, Scheduled: 1.47h, Input: 2,891,987,999 rows (335.08GB); per task: avg.: 1,445,993,999.50, std.dev.: 64,361,414.50, Output: 2,750,397,233 rows (71.26GB), 2 tasks
        Output layout: [expr, ss_customer_sk, ss_sales_price, ss_quantity]
        Output partitioning: SINGLE []
        Stage Execution Strategy: UNGROUPED_EXECUTION
        Project[PlanNodeId 4][projectLocality = LOCAL] => [expr:decimal(17,2), ss_customer_sk:bigint, ss_sales_price:decimal(7,2), ss_quantity:integer]
            Estimates: {source: CostBasedSourceInfo, rows: 2,750,397,233 (78.78GB), cpu: 337,741,576,861.18, memory: 108,000,000.00, network: 63,755,876,329.00}
            CPU: 53.25m (87.27%), Scheduled: 1.42h (78.92%), Output: 2,750,397,233 rows (71.26GB)
            Input avg.: 343,799,654.13 rows, Input std.dev.: 173.21%
            expr := CAST(ss_quantity AS decimal(10,0)) * ss_sales_price (1:129)
            InnerJoin[PlanNodeId 359][("ss_customer_sk", "c_customer_sk")] => [ss_customer_sk:bigint, ss_quantity:integer, ss_sales_price:decimal(7,2)]
                Estimates: {source: CostBasedSourceInfo, rows: 2,750,397,233 (80.59GB), cpu: 253,149,619,599.88, memory: 108,000,000.00, network: 63,755,876,329.00}
                CPU: 1.48m (2.42%), Scheduled: 1.59m (1.47%), Output: 2,750,397,233 rows (329.85GB)
                Distribution: PARTITIONED
                RemoteSource[2] => [ss_customer_sk:bigint, ss_quantity:integer, ss_sales_price:decimal(7,2)]
                    CPU: 1.32m (2.17%), Scheduled: 1.51m (1.40%), Output: 2,879,987,999 rows (334.90GB)
                    Input avg.: 359,998,499.88 rows, Input std.dev.: 173.44%
                LocalExchange[PlanNodeId 435][HASH] (c_customer_sk) => [c_customer_sk:bigint]
                    Estimates: {source: CostBasedSourceInfo, rows: 12,000,000 (366.21MB), cpu: 324,000,000.00, memory: 0.00, network: 108,000,000.00}
                    CPU: 295.00ms (0.01%), Scheduled: 953.00ms (0.01%), Output: 12,000,000 rows (181.48MB)
                    Input avg.: 1,500,000.00 rows, Input std.dev.: 173.21%
                    RemoteSource[3] => [c_customer_sk:bigint]
                        CPU: 87.00ms (0.00%), Scheduled: 92.00ms (0.00%), Output: 12,000,000 rows (181.48MB)
                        Input avg.: 1,500,000.00 rows, Input std.dev.: 173.21%

    Fragment 2 [SOURCE]
        CPU: 4.95m, Scheduled: 19.48m, Input: 2,879,987,999 rows (67.00GB); per task: avg.: 1,439,993,999.50, std.dev.: 145,805,473.50, Output: 2,879,987,999 rows (52.10GB), 2 tasks
        Output layout: [ss_customer_sk, ss_quantity, ss_sales_price]
        Output partitioning: HASH [ss_customer_sk]
        Stage Execution Strategy: UNGROUPED_EXECUTION
        TableScan[PlanNodeId 0][TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpcds_sf1000_parquet_varchar, tableName=store_sales, analyzePartitionValues=Optional.empty}', layout='Optional[tpcds_sf1000_parquet_va
            Estimates: {source: CostBasedSourceInfo, rows: 2,879,987,999 (59.28GB), cpu: 63,647,876,329.00, memory: 0.00, network: 0.00}
            CPU: 4.95m (8.11%), Scheduled: 19.48m (18.03%), Output: 2,879,987,999 rows (52.10GB)
            Input avg.: 359,998,499.88 rows, Input std.dev.: 174.38%
            LAYOUT: tpcds_sf1000_parquet_varchar.store_sales{}
            ss_customer_sk := ss_customer_sk:bigint:3:REGULAR (1:128)
            ss_sales_price := ss_sales_price:decimal(7,2):13:REGULAR (1:128)
            ss_quantity := ss_quantity:int:10:REGULAR (1:128)
            Input: 2,879,987,999 rows (67.00GB), Filtered: 0.00%

    Fragment 3 [SOURCE]
        CPU: 838.71ms, Scheduled: 10.24s, Input: 12,000,000 rows (112.39MB); per task: avg.: 6,000,000.00, std.dev.: 2,952,680.00, Output: 12,000,000 rows (91.61MB), 2 tasks
        Output layout: [c_customer_sk]
        Output partitioning: HASH [c_customer_sk]
        Stage Execution Strategy: UNGROUPED_EXECUTION
        TableScan[PlanNodeId 1][TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpcds_sf1000_parquet_varchar, tableName=customer, analyzePartitionValues=Optional.empty}', layout='Optional[tpcds_sf1000_parquet_varch
            Estimates: {source: CostBasedSourceInfo, rows: 12,000,000 (103.00MB), cpu: 108,000,000.00, memory: 0.00, network: 0.00}
            CPU: 838.00ms (0.02%), Scheduled: 10.24s (0.16%), Output: 12,000,000 rows (91.61MB)
            Input avg.: 1,500,000.00 rows, Input std.dev.: 199.22%
            LAYOUT: tpcds_sf1000_parquet_varchar.customer{}
            c_customer_sk := c_customer_sk:bigint:0:REGULAR (1:141)
            Input: 12,000,000 rows (112.39MB), Filtered: 0.00%

My first thought is that there's something wonky with the block encoding, since the query results are correct, but I haven't been able to confirm. cc @aditi-pandit @majetideepak @yingsu00 |
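The gap between the bytes one side reports writing and the bytes the other side reports reading suggests the two engines account for (or choose) block encodings differently, which supports the block-encoding theory above. As a rough, hypothetical illustration only (not Presto or Velox code), encoding choice alone can move a "data size" number by nearly an order of magnitude on a repetitive column:

```python
import struct

values = [7] * 100_000  # a highly repetitive bigint column

# plain encoding: 8 bytes per value
plain = struct.pack(f"<{len(values)}q", *values)

# dictionary encoding: distinct values stored once, plus 1-byte ids here
distinct = sorted(set(values))
ids = bytes(distinct.index(v) for v in values)
dictionary = struct.pack(f"<{len(distinct)}q", *distinct) + ids

ratio = len(plain) / len(dictionary)  # roughly 8x for this column
```

The point is only that "output size" and "input size" for the same logical rows can legitimately differ if one side flattens (or measures) a dictionary/RLE-encoded page as plain values; a 5x-8x factor matches the discrepancy seen in these plans.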
prestodb/presto | approx_set does not support as many types as approx_distinct | Bug | Expected behavior: approx_set should support as many types as approx_distinct. Current behavior: while trying to validate Velox behavior, we see that approx_set only seems to support a few types, whereas approx_distinct supports any type.

presto:di> SHOW FUNCTIONS LIKE 'approx_set';

 function   | return_type | argument_types     | function_type | deterministic | description | variable_arity | built_in | temporary | language
 approx_set | HyperLogLog | bigint             | aggregate     | true          |             | false          | true     | false     |
 approx_set | HyperLogLog | bigint, double     | aggregate     | true          |             | false          | true     | false     |
 approx_set | HyperLogLog | double             | aggregate     | true          |             | false          | true     | false     |
 approx_set | HyperLogLog | double, double     | aggregate     | true          |             | false          | true     | false     |
 approx_set | HyperLogLog | varchar(x)         | aggregate     | true          |             | false          | true     | false     |
 approx_set | HyperLogLog | varchar(x), double | aggregate     | true          |             | false          | true     | false     |
(6 rows)

presto:di> SHOW FUNCTIONS LIKE 'approx_distinct';

 function        | return_type | argument_types | function_type | deterministic | description | variable_arity | built_in | temporary | language
 approx_distinct | bigint      | T              | aggregate     | true          |             | false          | true     | false     |
 approx_distinct | bigint      | T, double      | aggregate     | true          |             | false          | true     | false     |
(2 rows)

Is there a reason for this discrepancy? Possible solution: make approx_set generic. Context: this was discovered while testing Velox to ensure compatibility with Presto Java. |
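Making approx_set generic is plausible because an HLL-style sketch never inspects the value itself, only a fixed-width hash of it. A minimal sketch of that idea (hypothetical code, using repr() as a stand-in for a typed serializer; this is not the Presto or Airlift HyperLogLog implementation):

```python
import hashlib

def h64(value) -> int:
    """64-bit hash of an arbitrary value: any type works once serialized."""
    data = repr(value).encode("utf-8")  # stand-in for a real typed serializer
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def exact_distinct(values):
    """Exact stand-in for the sketch: the set of hashes an HLL approximates."""
    return len({h64(v) for v in values})

# mixed types in one column, as a generic approx_set(T) would allow
n = exact_distinct([1, 1, 2.5, "a", "a", ("row", 3)])
```

Since the aggregation only ever sees hashes, restricting approx_set to bigint/double/varchar is an API limitation rather than an algorithmic one, which is the discrepancy the issue points out.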
prestodb/presto | TPC-DS SF-1K: 13 Prestissimo query results mismatch when data is partitioned | Bug | On this dashboard view, filtering on failed = 0 and mismatched = 1: q04, q11, q31, q34, q37, q39, q42, q69, q71, q73, q75, q82, q98. Query output: s3 presto workload output tpc ds c0w1 native oss save output power ds sf1k par 240311 031741. Query output baseline: s3 presto workload output tpc ds c0w1 java oss save output power ds sf1k 240311 001500 |
prestodb/presto | Incorrect results after CTE common filter pushdown due to trimmed filter | Bug | During our verifier tests I observed a bug where filters added by the CteProjectionAndFilterPushdownRule get trimmed in the CTE producer tree. We have the filter (x = 'a' OR x = 'b' OR x = 'c' OR x = 'd') pushed down by the CteProjectionAndFilterPushdownRule, and after this rule Presto converts it to (x = 'a' OR x = 'b'), dropping (x = 'c' OR x = 'd'). In our production the filter gets dropped in the FilterRowExpressionRewriteRule; here in the repro, in the PredicatePushdown rule.

Possible solution: this seems to be a major issue in Presto itself and not the pushdown rule, since the pushdown rule properly applies the filter.

Steps to reproduce: the repro uses this test case (a unit test in TestCteExecution). It fails with a result mismatch: expected 1024 rows, got 52 rows ("expected [1024] but found [52]").

    @Test
    public void testCommonFilterPushdown()
    {
        QueryRunner queryRunner = getQueryRunner();
        String testQuery1 =
            "WITH order_platform_data AS (" +
            "  SELECT o.orderkey AS order_key, o.orderdate AS datestr, o.orderpriority AS event_type" +
            "  FROM orders o" +
            "  WHERE o.orderdate BETWEEN DATE '1995-01-01' AND DATE '1995-01-31'" +
            "    AND o.orderpriority IN ('1-URGENT', '3-MEDIUM')" +
            "  UNION ALL" +
            "  SELECT l.orderkey AS order_key, o.orderdate AS datestr, o.orderpriority AS event_type" +
            "  FROM lineitem l" +
            "  JOIN orders o ON l.orderkey = o.orderkey" +
            "  WHERE o.orderdate BETWEEN DATE '1995-01-01' AND DATE '1995-01-31'" +
            "    AND o.orderpriority IN ('2-HIGH', '5-LOW')" +
            "), " +
            "urgent AS (SELECT order_key, datestr FROM order_platform_data WHERE event_type = '1-URGENT'), " +
            "medium AS (SELECT order_key, datestr FROM order_platform_data WHERE event_type = '3-MEDIUM'), " +
            "high AS (SELECT order_key, datestr FROM order_platform_data WHERE event_type = '2-HIGH'), " +
            "low AS (SELECT order_key, datestr FROM order_platform_data WHERE event_type = '5-LOW') " +
            "SELECT ofin.order_key AS order_key, ofin.datestr AS order_date " +
            "FROM urgent ofin " +
            "LEFT JOIN medium oproc ON ofin.datestr = oproc.datestr " +
            "LEFT JOIN low ON oproc.datestr = low.datestr " +
            "LEFT JOIN high ON low.datestr = high.datestr " +
            "ORDER BY ofin.order_key";
        compareResults(
            queryRunner.execute(
                Session.builder(super.getSession())
                    .setSystemProperty("pushdown_subfields_enabled", "true")
                    .setSystemProperty("cte_materialization_strategy", "HEURISTIC_COMPLEX_QUERIES_ONLY")
                    .setSystemProperty("cte_filter_and_projection_pushdown_enabled", "true")
                    .setSystemProperty("verbose_optimizer_results", "ALL")
                    .setSystemProperty("verbose_optimizer_info_enabled", "true")
                    .build(),
                testQuery1),
            queryRunner.execute(getSession(), testQuery1));
    }

Context: the filter

    Filter[PlanNodeId 2162][filterPredicate = (expr_32 = VARCHAR'1-URGENT') OR (expr_32 = VARCHAR'3-MEDIUM') OR (expr_32 = VARCHAR'5-LOW') OR (expr_32 = VARCHAR'2-HIGH')] => [expr_30:bigint, expr_31:date, expr_32:varchar(15)]

gets trimmed. |
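The invariant this repro violates: when several consumers of a materialized CTE each push a filter down, the producer may only be filtered by the OR (union) of all consumer predicates; dropping any disjunct (here '5-LOW'/'2-HIGH') loses rows that some consumer still needs. A toy model of the correct combination (hypothetical names, not the actual Presto rule):

```python
def combined_producer_filter(consumer_predicates):
    """Union the per-consumer IN-lists for the CTE producer.

    The producer must keep every row that *any* consumer can accept,
    i.e. the OR of the consumer predicates; intersecting or truncating
    the list (as in the bug) silently drops rows. Hypothetical sketch.
    """
    allowed = set()
    for values in consumer_predicates:
        allowed |= set(values)
    return allowed

# the four single-branch consumers from the repro query
consumers = [{"1-URGENT"}, {"3-MEDIUM"}, {"2-HIGH"}, {"5-LOW"}]
producer_filter = combined_producer_filter(consumers)
```

With the trimmed filter from the bug, only rows for '1-URGENT' and '3-MEDIUM' survive materialization, which is exactly why the left joins against the high/low CTEs come back short (52 rows instead of 1024).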
prestodb/presto | construct_tdigest crashes if the array sizes and count do not match | Bug | Fix construct_tdigest to explicitly check for a mismatch between the count and the centroid means/weights array sizes, or fix it up if they can legitimately differ. It looks like there is some assumption that they are all the same size. Simple example:

    presto:oculus> SELECT construct_tdigest(ARRAY[1], ARRAY[1], 2, 1, 1, 1, 1, 10);

    Query 20240310_162224_36022_xr3hz failed: end index (80) must not be greater than size (16) |
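The error reads like the declared centroid count is trusted while the arrays are shorter, so deserialization runs off the end (end index 80 vs size 16, i.e. 10 doubles expected where only 2 are present). A hedged sketch of the up-front check the issue asks for (hypothetical validation function, not the actual construct_tdigest code):

```python
def validate_centroids(means, weights, count):
    """Reject mismatched centroid arrays up front, instead of failing
    later with an out-of-bounds read during deserialization.
    Hypothetical sketch, not Presto code."""
    if len(means) != len(weights) or len(means) != count:
        raise ValueError(
            f"centroid mismatch: {len(means)} means, "
            f"{len(weights)} weights, declared count {count}")
    return True

validate_centroids([1.0], [1.0], 1)   # consistent: accepted
```

An explicit ValueError at construction time turns the opaque "end index must not be greater than size" failure into a message that names the inconsistent arguments.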
prestodb/presto | Upgrade CircleCI builds to at least Maven 3.6.3 | Bug | I coaxed the CircleCI builds into telling me what version of Maven they are using, and as I suspected from indirect evidence, it is pre-3.6.1:

    Error reading historical timing: data file does not exist. Requested weighting by historical based timing, but it is not present; falling back to weighting by name.
    Apache Maven 3.5.4 (Red Hat 3.5.4-5)
    Maven home: /usr/share/maven
    Java version: 1.8.0_362, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-3.el8.x86_64/jre
    Default locale: en_US, platform encoding: ANSI_X3.4-1968
    OS name: "linux", version: "5.15.0-1053-aws", arch: "amd64", family: "unix"
    [INFO] Scanning for projects...

The GitHub Actions builds are using, I think, 3.6.3 or maybe something later. This needs to be upgraded. |
prestodb/presto | When node-scheduler.network-topology=flat, using the ClickHouse connector reports null | Bug | Your environment: Presto version used: 0.283; storage: HDFS; data source and connector used: ClickHouse; deployment: on-prem.

When node-scheduler.network-topology=flat, using the ClickHouse connector reports null. The method com.facebook.presto.metadata.Split#getPreferredNodes should return an empty list, not null.

Current:

    public List<HostAddress> getPreferredNodes(NodeProvider nodeProvider)
    {
        return null;
    }

Should be:

    public List<HostAddress> getPreferredNodes(NodeProvider nodeProvider)
    {
        return ImmutableList.of();
    } |
prestodb/presto | OrcBatchPageSourceFactory catches raw java.lang.Exception | Bug | This is recommended against by Effective Java. See if this can be cleaned up to catch more specific exceptions. This class might also be missing closing some resources along some paths, which could be managed with more careful exception handling and try-with-resources. |
prestodb/presto | Hide runtime stats from Presto CLI in debug mode | Bug | Your environment: not relevant.

Expected behavior: when I use the CLI in debug mode, I see a large amount of runtime stats once the query finishes, which impedes my ability to see the query results:

    s0.driverCountPerTask: sum=1, count=1, min=1, max=1
    s0.taskBlockedTimeNanos: sum=0.01, count=1, min=0.01, max=0.01
    s0.taskElapsedTimeNanos: sum=0.02, count=1, min=0.02, max=0.02
    s0.taskQueuedTimeNanos: sum=0.01, count=1, min=0.01, max=0.01
    s0.taskScheduledTimeNanos: sum=70ms, count=1, min=70ms, max=70ms
    s1.driverCountPerTask: sum=20, count=4, min=5, max=5
    s1.taskBlockedTimeNanos: sum=0.08, count=4, min=0.02, max=0.02
    s1.taskElapsedTimeNanos: sum=0.05, count=4, min=0.01, max=0.01
    s1.taskQueuedTimeNanos: sum=0.02, count=4, min=380ms, max=396ms
    s1.taskScheduledTimeNanos: sum=0.03, count=4, min=0.01, max=0.01
    s2.driverCountPerTask: sum=1, count=1, min=1, max=1
    s2.getSplitsTimeNanos: sum=0ms, count=1, min=0ms, max=0ms
    s2.taskElapsedTimeNanos: sum=376ms, count=1, min=376ms, max=376ms
    s2.taskQueuedTimeNanos: sum=58ms, count=1, min=58ms, max=58ms
    s2.taskScheduledTimeNanos: sum=35ms, count=1, min=35ms, max=35ms
    fragmentPlanTimeNanos: sum=67ms, count=1, min=67ms, max=67ms
    getCanonicalInfoTimeNanos: sum=0ms, count=1, min=0ms, max=0ms
    getColumnHandleTimeNanos: sum=1ms, count=1, min=1ms, max=1ms
    getColumnMetadataTimeNanos: sum=0ms, count=1, min=0ms, max=0ms
    getLayoutTimeNanos: sum=13ms, count=5, min=0ms, max=7ms
    getMaterializedViewTimeNanos: sum=0ms, count=1, min=0ms, max=0ms
    getTableHandleTimeNanos: sum=0ms, count=1, min=0ms, max=0ms
    getViewTimeNanos: sum=0ms, count=1, min=0ms, max=0ms
    logicalPlannerTimeNanos: sum=49ms, count=1, min=49ms, max=49ms
    optimizerTimeNanos: sum=133ms, count=1, min=133ms, max=133ms
    Latency: client-side: 0.03, server-side: 0.03
    8 rows, 166B [2 rows/s, 50B/s]

Current behavior: runtime stats are returned with the query results for every query in debug mode. Possible solution: this should be an opt-in feature, as it is only relevant for Presto developers and highly technical users. Steps to reproduce: run any query in the CLI in debug mode. Screenshots: see the example above. Context: runtime stats were recently hidden from the UI; they should be hidden here as well. |
prestodb/presto | map_top_n returns wrong results if NaN appears in the input | Bug | This issue was discovered as part of an audit of all the comparison and ordering behaviors for NaN across Presto functions. While there is a lot of inconsistency in how NaN is handled that needs to be addressed, map_top_n can produce definitively wrong results when NaN values show up in the map. According to the documentation, map_top_n "truncates map items, keeping only the top N elements by value. N must be a non-negative integer". In the presence of NaN values, NaN seems to reset the search for the top-N entries:

    SELECT map_top_n(MAP(ARRAY['a', 'b', 'c'], ARRAY[nan(), 3, 2]), 1);
      _col0
    ---------
     {b=3.0}
    (1 row)

Bug: regardless of the interpretation of NaN, 2 is always less than 3, so this result is definitely incorrect:

    SELECT map_top_n(MAP(ARRAY['a', 'b', 'c'], ARRAY[3, nan(), 2]), 1);
      _col0
    ---------
     {c=2.0}
    (1 row)

    SELECT map_top_n(MAP(ARRAY['a', 'b', 'c'], ARRAY[3, 2, nan()]), 1);
      _col0
    ---------
     {c=NaN}
    (1 row) |
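One way to make a top-n-by-value selection robust is to give NaN a fixed position in a total order, instead of letting IEEE comparisons (which are all false for NaN) steer the scan. A minimal sketch of that convention in Python (NaN deliberately ordered below every number here; Presto may choose a different convention, this is only an illustration):

```python
import math

def map_top_n(m, n):
    """Top-n entries of m by value, with NaN given a fixed sort position."""
    # Key (not isnan, value): NaN entries get False and sort below all
    # numbers, so ordinary values are never compared against NaN directly.
    order = lambda item: (not math.isnan(item[1]), item[1])
    return dict(sorted(m.items(), key=order, reverse=True)[:n])

top1 = map_top_n({"a": 3.0, "b": 2.0, "c": math.nan}, 1)   # {'a': 3.0}
```

Under this convention all three queries from the report would return {a=3.0} for n=1, since the position of the NaN entry in the input can no longer influence which finite value wins.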
prestodb/presto | Inserting values into a table with only one column of ROW type fails | Bug | When inserting values into a table whose only column is of ROW type, like row(b int, c varchar), it fails with the following message:

    Query 20240228_131628_00001_7gfyh failed: line 1:37: Insert query has 2 expression(s) but expected 1 target column(s). Mismatch at column 1: 'a' is of type row(b integer, c varchar) but expression is of type integer
    INSERT INTO test_insert VALUES row(1, '1001')

Steps to reproduce:
1. Create a table test_insert with only one column of ROW type:
    CREATE TABLE test_insert (a row(b int, c varchar));
2. Insert values into table test_insert:
    INSERT INTO test_insert VALUES row(1, '1001');
    INSERT INTO test_insert VALUES (row(1, '1001'));
    INSERT INTO test_insert VALUES (1, '1001');
    INSERT INTO test_insert VALUES ((1, '1001'));

Expected behavior: the insertion should succeed. Current behavior: all the statements above fail. Possible solution: PR. Context: the INSERT statement fails because of incorrect unfolding of the ROW type. |
prestodb/presto | Exclude OmniGraffle files from license check | Bug | So as to cure these warnings in the build:

    [INFO] --- license-maven-plugin:2.3:check (default) @ presto-main ---
    [INFO] Checking licenses...
    [WARN] Unknown file extension: /Users/elharo/presto/presto-main/src/main/java/com/facebook/presto/tdigest/doc/tdigest.graffle
    [WARN] Unknown file extension: /Users/elharo/presto/presto-main/src/main/java/com/facebook/presto/type/khyperloglog/docs/khll.graffle |
prestodb/presto | TestHudiIntegration test failures in pipeline | Bug | Your environment: n/a.

Current behavior:

    [ERROR] TestHudiIntegration.testDemoQuery1:94->AbstractTestQueryFramework.assertQuery:159 Execution of 'actual' query failed: SELECT symbol, max(ts) FROM stock_ticks_mor_rt GROUP BY symbol HAVING symbol = 'GOOG'
    [ERROR] TestHudiIntegration.testDemoQuery2:117->AbstractTestQueryFramework.assertQuery:159 Execution of 'actual' query failed: SELECT "_hoodie_record_key", symbol, ts, volume, open, close FROM stock_ticks_mor_rt WHERE symbol = 'GOOG'
    [ERROR] TestHudiIntegration.testQueryWithPartitionColumn:137->AbstractTestQueryFramework.assertQuery:159 Execution of 'actual' query failed: SELECT symbol, ts, dt FROM stock_ticks_mor_rt WHERE symbol = 'GOOG' AND dt = '2018-08-31'

The main cause appears to be:

    java.lang.UnsupportedOperationException: Not implemented by the RawLocalFileSystem FileSystem implementation
        at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:219)
        at org.apache.hadoop.fs.HadoopExtendedFileSystem.getScheme(HadoopExtendedFileSystem.java:71)
        at org.apache.hudi.common.fs.FSUtils.isGCSFileSystem(FSUtils.java:649)
        at org.apache.hudi.common.table.log.HoodieLogFileReader.getFSDataInputStream(HoodieLogFileReader.java:501)
        at org.apache.hudi.common.table.log.HoodieLogFileReader.<init>(HoodieLogFileReader.java:120)
        at org.apache.hudi.common.table.log.HoodieLogFormatReader.<init>(HoodieLogFormatReader.java:70)
        at org.apache.hudi.common.table.log.AbstractHoodieLogRecordReader.scanInternalV1(AbstractHoodieLogRecordReader.java:245)
        ... 32 more

Possible solution / Steps to reproduce: n/a. |
prestodb/presto | Presto Alluxio SDK cache issue when files change under the same S3 URI | Bug | We have used the Presto SDK cache for some time on version 0.275 with Alluxio 2.9.3. The cache might become invalid and unqueryable about once every 1-2 months, and everything would be fine after manually clearing all the cache. So we decided to upgrade Presto/Alluxio to the latest releases (Presto 0.285.1, Alluxio 304) for new features and bug fixes, but things seem to be worse.

We have some Hive tables with no partitions; the content of a table might be updated hourly or daily, as we only care about the latest data. Queries and the cache work fine for the first version of the files. After the file content changes for the same file (same S3 URI), the table can't be queried anymore and throws exceptions; queries can be resumed after manually emptying the cache files. The error types differ, seemingly related to different files being read: first "Don't know what type: 15", then "not valid Parquet file", and sometimes java.lang.ArrayIndexOutOfBoundsException. Our previous Presto version (0.275 with Alluxio 2.9.3) doesn't have this issue, and changed files could be read successfully most of the time. We have currently disabled the cache on our 0.285.1 deployment.

Your environment: Presto version used: 0.285.1 with Alluxio version 304; storage: S3; data source and connector used: Hive (Parquet); deployment: deployed natively on AWS EC2; Java version: 1.8.0_181. Complete debug logs: see the files attached.

    Error 1: not valid Parquet file - coordinator error1.log, worker error1.log
    Error 2: java.lang.ArrayIndexOutOfBoundsException - coordinator error2.log, worker error2.log
    Error 3: Don't know what type: 15 - coordinator error3.log, worker error3.log

Expected behavior: changed files should be read successfully and the cache should be updated accordingly. Current behavior: after the file content changes for the same file (same S3 URI), the table can't be queried anymore and throws exceptions; queries can be resumed after manually emptying the cache files.

Steps to reproduce:
1. Start Presto with an empty cache.
2. Query a table with Parquet storage and a fixed location (no partitions) so the data gets cached, e.g. SELECT * FROM db.table LIMIT 100.
3. Update the table content keeping the same S3 file names (same URIs), e.g. by an INSERT OVERWRITE (Hive or Spark will normally produce the same file names under the fixed location, such as 000001_0).
4. Rerun the previous query and get the error.

Context: this looks very much like the cache metadata didn't match the new files. I tried upgrading Alluxio to 307 (as 0.286 prepares to); it didn't work. Presto setup and config are attached (weird that GitHub doesn't support .properties files): hive.properties.txt, coordinator config.properties.txt, worker config.properties.txt, jvm.config.txt, full server start log server.log |
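The symptoms (stale or torn bytes served after an INSERT OVERWRITE rewrites the same S3 key) are consistent with a cache that keys entries on the URI alone. As a hedged sketch of the usual fix, keying on URI plus file metadata so a rewritten object misses the old entry (hypothetical toy code, not the Alluxio SDK's actual key scheme):

```python
class FileCache:
    """Toy page cache keyed on (uri, mtime, length), not on uri alone."""

    def __init__(self):
        self._pages = {}

    def get(self, uri, mtime, length, loader):
        key = (uri, mtime, length)
        if key not in self._pages:        # rewritten file -> new key -> miss
            self._pages[key] = loader()
        return self._pages[key]

cache = FileCache()
# same S3 key, but the overwrite changed mtime/length, so no stale hit
v1 = cache.get("s3://bucket/tbl/000001_0", 1000, 64, lambda: b"old bytes")
v2 = cache.get("s3://bucket/tbl/000001_0", 2000, 72, lambda: b"new bytes")
```

Including the file's modification time and length in the cache key is one common way caching layers invalidate entries when an object is replaced in place; whether the 0.285.1/304 combination regressed exactly this is what the issue asks to be investigated.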
prestodb/presto | UT failures in presto-kafka module when upgrading org.apache.zookeeper from 3.4.14 to 3.7.2 | Bug | Unable to upgrade the ZooKeeper version. Kafka has a dependency on ZooKeeper, so upgrading ZooKeeper leads to UT failures in Kafka. Upgrading Kafka to a version compatible with that ZooKeeper version leads to compilation breaks, as many classes are not found. PR associated with the same.

Errors faced:

    com.facebook.presto.kafka.TestMinimalFunctionality.startKafka -- Time elapsed: 5.491 s <<< FAILURE!
    java.lang.NoClassDefFoundError: com/codahale/metrics/Reservoir
    com.facebook.presto.kafka.TestMinimalFunctionality.tearDown -- Time elapsed: 0.002 s <<< FAILURE!
    java.lang.NullPointerException

While upgrading Kafka, the errors below are shown:

    presto-kafka/src/test/java/com/facebook/presto/kafka/util/EmbeddedKafka.java: cannot find symbol: class ZkUtils, location: package kafka.utils
    presto-kafka/src/test/java/com/facebook/presto/kafka/util/EmbeddedKafka.java:[127,9] cannot find symbol: class ZkUtils, location: class com.facebook.presto.kafka.util.EmbeddedKafka |
prestodb/presto | Clarification on comparisons with NaN | Bug | Current behavior: currently in Presto, NaN seems to behave as if it were +Infinity:

    SELECT array_sort(a) FROM (VALUES ARRAY[nan(), infinity(), -infinity(), 1]) AS t(a);
    -- [-Infinity, 1.0, Infinity, NaN]

    SELECT array_sort_desc(a) FROM (VALUES ARRAY[nan(), infinity(), -infinity(), 1]) AS t(a);
    -- [NaN, Infinity, 1.0, -Infinity]

This is not just limited to array_sort; you can see this behavior in ORDER BY too:

    SELECT x, y FROM (VALUES (1, 1.0), (2, nan()), (3, nan()), (4, infinity()), (5, -infinity())) AS t(x, y) ORDER BY y;
     5 | -Infinity
     1 |       1.0
     4 |  Infinity
     2 |       NaN
     3 |       NaN

However, running SELECT nan() > infinity() gives false, a contradictory answer.

Expected behavior: comparisons with NaN are not well defined, so we should confirm the intended behavior of comparisons with NaN. Context: Velox is trying to ensure correctness against Presto Java, but we are seeing inconsistencies in behavior and want clarification on these. |
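For reference, IEEE 754 doubles (which both engines use) make every ordered comparison involving NaN false, which is exactly why `nan() > infinity()` returning false contradicts the sorts above placing NaN beyond Infinity: a sort needs a total order, so an engine has to pick an explicit NaN convention on top of IEEE semantics. A quick demonstration with plain Python floats (same IEEE behavior; the "NaN last" convention below is just one possible choice):

```python
import math

nan, inf = math.nan, math.inf

# every ordered comparison with NaN is false, including equality with itself
comparisons = [nan > inf, nan < inf, nan == nan, nan <= nan]

# a total order therefore needs an explicit convention, e.g. NaN sorts last
nan_last = lambda x: (math.isnan(x), x)
ordered = sorted([nan, inf, -inf, 1.0], key=nan_last)
# -> [-inf, 1.0, inf, nan]
```

Whatever convention is chosen (NaN largest, as Presto's sorts currently behave, or something else), the point of the issue is that the scalar comparison operators and the ordering used by sorts should agree.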
prestodbpresto | ClickHouse tests fail when enabled | Bug | ClickHouse tests are failing on master. They are currently disabled, but the comment given for disabling them references the wrong issue. The tests failing currently:
- TestClickHouseDistributedQueries > AbstractTestQueries.testCorrelatedExistsSubqueries:4121 > AbstractTestQueryFramework.assertQuery:159 — execution of actual query failed: SELECT max(o.orderdate), o.orderkey, EXISTS(SELECT 1 FROM orders i WHERE o.orderkey = i.orderkey AND i.orderkey % 10000 = 0) FROM orders o GROUP BY o.orderkey ORDER BY o.orderkey LIMIT 1
- testCorrelatedScalarSubqueriesWithScalarAggregation:3903 > assertQuery:159 — execution of actual query failed: SELECT max(o.orderdate), o.orderkey, (SELECT avg(i.orderkey) FROM orders i WHERE o.orderkey = i.orderkey AND i.orderkey % 10000 = 0) FROM orders o GROUP BY o.orderkey ORDER BY o.orderkey LIMIT 1
- AbstractTestDistributedQueries.testPayloadJoinApplicability:1226 > assertUpdate:249/254 — RuntimeException: Unsupported column type: map(integer, integer)
- testPayloadJoinCorrectness:1286 — RuntimeException: Unsupported column type: map(integer, integer)
- AbstractTestQueries.testPreProcessMetastoreCall:6886 > computeActual:134 — RuntimeException: For input string: "1997-07-29"
- testRemoveRedundantCastToVarcharInJoinClause:1339 — RuntimeException: Unsupported column type: map(integer, integer)
- testStringFilters:1352 > assertQuery:159 — for query SELECT count(*) FROM test_charn_filter WHERE shipmode = 'AIR': not equal; actual rows (1 row in total, 1 extra row shown): 0; expected rows (1 row in total, 1 missing row shown): 8491
- testSubfieldAccessControl:152 > assertUpdate:254 — RuntimeException: Unsupported column type: row(f1 integer, f2 integer, f3 array(row(ff1 integer, ff2 integer)))
Environment: latest master.
Expected behavior: the tests should pass; they stay disabled for now, and enabling can be decided later. Current behavior: tests are failing (sample run available).
Possible solution: either the tests should be fixed, or they should not be triggered for ClickHouse if the support is not there at present in the connector.
Context: all the tests on the master branch should pass.
prestodbpresto | Linux presto e2e tests fail in GH all the time: "Error occurred in starting fork" | Bug | The "linux presto e2e tests" fail all the time in GH with "Error occurred in starting fork". The tests are split into 5 runs to run in parallel, and one of them fails pretty much all the time — seemingly always the same one, run 3. Following the link at the end of the log says: "MojoExecutionException: Unlike many other errors, this exception is not generated by the Maven core itself but by a plugin. As a rule of thumb, plugins use this error to signal a problem in their configuration or the information they retrieved from the POM."
Observation 1: there were also 615 "major GC" messages in one log during 1 hr, and 95 of these during 7 min for another PR — maybe memory usage is an issue.
Observation 2: "linux spark e2e tests" is a similar suite whose run 4 often times out after 5 hrs; in one instance run 4 had 2610 major GC events, versus only 6 such events in another run which completed within 5 min. Those tests are very similar to the ones mentioned in the title of this issue.
Some info from the log:
[INFO] Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Corrupted channel by directly writing to native stream in forked JVM 1. See FAQ web page and the dump file /root/project/presto-native-execution/target/surefire-reports/2024-02-13T11-56-51_519-jvmRun1.dumpstream
[INFO] BUILD FAILURE
[INFO] Total time: 53:33 min (Wall Clock)
[INFO] Finished at: 2024-02-13T12:50:21-06:00
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M7:test (default-test) on project presto-native-execution
[ERROR] Please refer to /root/project/presto-native-execution/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist): [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] ExecutionException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /root/project/presto-native-execution && /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-3.el8.x86_64/jre/bin/java -Dfile.encoding=UTF-8 -Xmx4g -Xms4g -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -jar /root/project/presto-native-execution/target/surefire/surefirebooter…jar …
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 3
[ERROR] Crashed tests:
[ERROR] com.facebook.presto.nativeworker.TestPrestoNativeWindowQueries
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called? (same command and crashed test as above)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:513)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:460)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:327)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:269)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1334)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1167)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:931)
[ERROR] (remaining frames through MojoExecutor, LifecycleModuleBuilder, MultiThreadedBuilder, FutureTask, ThreadPoolExecutor, Thread.run)
[ERROR] Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye ... at ForkStarter.fork(ForkStarter.java:714), at ForkStarter.lambda$runSuitesForkPerTestSet$7(ForkStarter.java:449), ... 4 more
Expected behavior: tests pass or fail in the actual code, not the framework. Current behavior: tests seem to fail in the framework. Steps to reproduce: submit a PR and observe the tests failing.
prestodbpresto | UnsupportedOperationException: node sql.planner.plan.DeleteNode does not have a graphviz visitor | Bug | Description: while performing a DELETE operation, the plan of the delete fails to print in the console. It happens because the graphviz printer implementation in Presto doesn't implement the default visitDelete method for the DeleteNode type. A similar bug is also present for sql.planner.plan.MetadataDeleteNode, filed under a follow-up issue.
2024-01-16T11:51:58.771+0530 WARN dispatcher-query-30 com.facebook.presto.event.QueryMonitor Error creating graphviz plan for query 20240116_062137_00010_v52e2: java.lang.UnsupportedOperationException: node com.facebook.presto.sql.planner.plan.DeleteNode does not have a graphviz visitor
java.lang.UnsupportedOperationException: node com.facebook.presto.sql.planner.plan.DeleteNode does not have a graphviz visitor
at com.facebook.presto.util.GraphvizPrinter$NodePrinter.visitPlan(GraphvizPrinter.java:268)
at com.facebook.presto.util.GraphvizPrinter$NodePrinter.visitPlan(GraphvizPrinter.java:246)
at com.facebook.presto.sql.planner.plan.InternalPlanVisitor.visitDelete(InternalPlanVisitor.java:97)
at com.facebook.presto.sql.planner.plan.DeleteNode.accept(DeleteNode.java:94)
at com.facebook.presto.sql.planner.plan.InternalPlanNode.accept(InternalPlanNode.java:36)
at com.facebook.presto.util.GraphvizPrinter.printFragmentNodes(GraphvizPrinter.java:240)
at com.facebook.presto.util.GraphvizPrinter.printDistributedFromFragments(GraphvizPrinter.java:203)
at com.facebook.presto.sql.planner.planPrinter.PlanPrinter.graphvizDistributedPlan(PlanPrinter.java:451)
at com.facebook.presto.event.QueryMonitor.createGraphvizQueryPlan(QueryMonitor.java:448)
at com.facebook.presto.event.QueryMonitor.createQueryMetadata(QueryMonitor.java:287)
at com.facebook.presto.event.QueryMonitor.queryCompletedEvent(QueryMonitor.java:244)
at com.facebook.presto.execution.SqlQueryManager.lambda$createQuery$5(SqlQueryManager.java:294)
at com.facebook.presto.execution.QueryStateMachine.lambda$addQueryInfoStateChangeListener$18(QueryStateMachine.java:971)
at com.facebook.presto.execution.StateMachine.fireStateChangedListener(StateMachine.java:229)
at com.facebook.presto.execution.StateMachine.lambda$fireStateChanged$0(StateMachine.java:221)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:839)
To reproduce: it was identified during development of MongoDB DELETE operation functionality.
Expected behavior: no error in the console after the DELETE operation completes.
prestodbpresto | [native] testcase testZeroOffset fails against C++ worker | Bug | Testcase testZeroOffset is currently failing against the C++ worker.
Expected behavior:
assertQuery("SELECT array_agg(a) OVER(ORDER BY a ASC NULLS LAST RANGE BETWEEN 0 PRECEDING AND 0 FOLLOWING) FROM (VALUES 1, 2, 1, null) T(a)", "VALUES ARRAY[1, 1], ARRAY[1, 1], ARRAY[2], ARRAY[null]")
Current behavior:
java.lang.AssertionError for the query above: not equal. Actual rows (4 rows in total, 1 extra row shown): [2, null]. Expected rows (4 rows in total, 1 missing row shown): [2].
Query plan against the C++ worker (presto:tpch> EXPLAIN of the query above):
Output [PlanNodeId 15] (col0 := array_agg) → Project [PlanNodeId 11, projectLocality LOCAL] → Window [PlanNodeId 10] (ORDER BY field ASC NULLS LAST; frame bounds via subtract(field, 0) and add(field, 0); array_agg := array_agg(field) RANGE 0 PRECEDING 0 FOLLOWING) → LocalExchange [PlanNodeId 463, SINGLE] → Project [PlanNodeId 186] (computes subtract(field, 0), add(field, 0)) → LocalExchange [PlanNodeId 462, ROUND ROBIN] → Values [PlanNodeId 0] (INTEGER 1, INTEGER 2, INTEGER 1, null).
prestodbpresto | [native] intermittent failure in testcase testAllPartitionSameValues against C++ worker | Bug | This is an intermittent failure: the query doesn't always produce this output; sometimes the testcase succeeds and sometimes it fails, producing different actual output.
Expected behavior:
assertQuery("SELECT array_agg(a) OVER(ORDER BY a RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM (VALUES 1, 1, 1) T(a)", "VALUES ARRAY[1, 1, 1], ARRAY[1, 1, 1], ARRAY[1, 1, 1]")
Current behavior (against the C++ worker):
presto:tpch> SELECT array_agg(a) OVER(ORDER BY a RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM (VALUES 1, 1, 1) T(a) returns [1, 1, 1], [1, 1, 1], NULL (3 rows).
Query plan against the C++ worker (presto:tpch> EXPLAIN of the query above):
Output [PlanNodeId 15] (col0 := array_agg) → Project [PlanNodeId 11, projectLocality LOCAL] → Window [PlanNodeId 10] (ORDER BY field ASC NULLS LAST; frame bounds via add(field, 1) and subtract(field, 1); array_agg := array_agg(field) RANGE 1 PRECEDING 1 FOLLOWING) → LocalExchange [PlanNodeId 463, SINGLE] → Project [PlanNodeId 186] (computes add(field, 1), subtract(field, 1)) → LocalExchange [PlanNodeId 462, ROUND ROBIN] → Values [PlanNodeId 0] (INTEGER 1, INTEGER 1, INTEGER 1).
prestodbpresto | Aggregate functions MAX/MIN return different results based on order of NaN for floating point types | Bug | The aggregate functions MAX/MIN return different results depending on when NaN is encountered in the input, for floating point types. If NaN is the first value, then irrespective of what the other values are, the result is NaN. This seems wrong.
Expected behavior: MAX/MIN should not be sensitive to the order of NaN and should return the same result.
Current behavior:
presto:di> SELECT max(x) FROM (VALUES 4.0, nan(), null) t(x); → 4.0 (1 row)
presto:di> SELECT max(x) FROM (VALUES nan(), 4.0, null) t(x); → NaN (1 row)
Possible solution: the bug is likely in the accumulator, where the state is initially null, gets set to NaN, and then subsequently all comparisons against it fail.
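The order sensitivity described above falls out of any fold that compares the running state with `>`: once the state is NaN, every comparison returns false and the state never changes. A minimal Python sketch of that failure mode (illustrative only, not Presto's actual accumulator code):

```python
nan = float("nan")

def naive_max(values):
    # Fold with ">" like a naive MAX accumulator: the state starts at the
    # first value and is replaced only when a later value compares greater.
    state = values[0]
    for v in values[1:]:
        if v > state:   # always false once state is NaN
            state = v
    return state

print(naive_max([4.0, nan]))  # 4.0  (NaN seen later never wins)
print(naive_max([nan, 4.0]))  # nan  (NaN as the first value sticks forever)
```

An order-insensitive MAX would need to special-case NaN explicitly (e.g. skip NaN inputs, or use a total ordering where NaN has a fixed rank) rather than rely on IEEE-754 comparisons.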
prestodbpresto | CircleCI: redundant header check | Bug | Currently there is a CircleCI check named "ci/circleci: header-check" which uses a Python script from the Velox repo (check.py header) to verify that license headers exist on a variety of file types. This check runs indiscriminately on all files, regardless of whether they are within presto-native-execution (it seems to only check files which are modified by the current PR). There are a number of files on which we explicitly ignore header checks in mvn validate, such as the query-plan test resources; however, if these files are modified in a PR, we have no way to ignore them, so this specific CI check always fails. Adding logic to strip the headers for the tests is quite annoying. It would be better if this check either didn't run on the files covered by the Maven check, or if we could update some kind of ignore list; the tool used doesn't seem to have one.
prestodbpresto | [native] add missing stats to enable all HBO optimizations | Bug | Some HBO (history-based optimization) optimizations do not apply to Prestissimo because native workers do not produce the necessary stats. For example, the randomize-null-key-in-outer-join optimization relies on the nullJoinBuildKeyCount and joinBuildKeyCount stats on the join's build side. It would be nice to document all HBO optimizations and clarify what stats are needed to enable them, so that these can be added to Prestissimo. cc: tdcmeehan, majetideepak, aditi-pandit, feilong-liu, kaikalur, pranjalssh
prestodbpresto | Code breaks when upgrading org.elasticsearch:elasticsearch from 6.0.0 to 7.17.13 in presto-elasticsearch | Bug | Unable to upgrade the Elasticsearch version due to breaks in code compilation; we also have to exclude several conflicting versions of other packages due to the Elasticsearch upgrade, which alters the functionality of the program. UT failures as well. A PR is associated with the same. Errors faced:
presto-elasticsearch/src/main/java/com/facebook/presto/elasticsearch/client/ElasticsearchClient.java: some input files use or override a deprecated API. Recompile with -Xlint:deprecation for details.
presto-elasticsearch/src/main/java/com/facebook/presto/elasticsearch/decoders/ArrayDecoder.java: some input files use unchecked or unsafe operations. Recompile with -Xlint:unchecked for details.
prestodbpresto | CTE materialization internal error: fails with 0-length varchar | Bug | This is because Presto allows 0-length varchar but Hive doesn't. When we materialize this CTE it fails, but without materialization it succeeds.
@Test
public void testCteWithZeroLengthVarchar()
{
    String testQuery = "WITH temp AS (SELECT * FROM (VALUES (CAST('' AS VARCHAR(0)), 9)) AS t(text_column, number_column)) SELECT * FROM temp";
    QueryRunner queryRunner = getQueryRunner();
    compareResults(queryRunner.execute(getMaterializedSession(), testQuery), queryRunner.execute(getSession(), testQuery));
}
Stack trace:
Caused by: java.lang.RuntimeException: Varchar length 0 out of allowed range [1, 65535]
at org.apache.hadoop.hive.serde2.typeinfo.BaseCharUtils.validateVarcharParameter(BaseCharUtils.java:32)
at org.apache.hadoop.hive.serde2.typeinfo.VarcharTypeInfo.<init>(VarcharTypeInfo.java:33)
at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:159)
at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:117)
at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getVarcharTypeInfo(TypeInfoFactory.java:183)
at com.facebook.presto.hive.HiveTypeTranslator.translate(HiveTypeTranslator.java:98)
at com.facebook.presto.hive.HiveType.toHiveType(HiveType.java:218)
at com.facebook.presto.hive.HiveMetadata.getColumnHandle(HiveMetadata.java:3584)
at com.facebook.presto.hive.HiveMetadata.createTemporaryTable(HiveMetadata.java:1107)
So we might need a check for the 0-length varchar, because writes with 0-length varchar fail (see reference), while Presto supports varchar of length 0, as discussed elsewhere.
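The check that fails here is a simple range validation on the declared varchar length: Hive only accepts lengths in [1, 65535], while Presto's type system has no lower bound. A small Python sketch of the mismatch, mirroring what `BaseCharUtils.validateVarcharParameter` enforces (function names here are illustrative, not Hive's actual code):

```python
HIVE_MIN_VARCHAR_LENGTH = 1
HIVE_MAX_VARCHAR_LENGTH = 65535

def validate_hive_varchar(length: int) -> None:
    # Hive rejects any declared varchar length outside [1, 65535];
    # Presto happily plans VARCHAR(0), so materializing a CTE whose
    # output includes VARCHAR(0) into a Hive temporary table blows up.
    if not (HIVE_MIN_VARCHAR_LENGTH <= length <= HIVE_MAX_VARCHAR_LENGTH):
        raise ValueError(
            f"Varchar length {length} out of allowed range "
            f"[{HIVE_MIN_VARCHAR_LENGTH}, {HIVE_MAX_VARCHAR_LENGTH}]")

validate_hive_varchar(9)      # fine
try:
    validate_hive_varchar(0)  # Presto allows VARCHAR(0); Hive rejects it
except ValueError as e:
    print(e)
```

Any fix on the Presto side would need to either reject materialization of such types up front or widen them before handing the schema to Hive.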
prestodbpresto | Aggregation function ARBITRARY should be order-sensitive | Bug | Hi, we have found that the arbitrary() aggregate function is order-sensitive, in that the resulting value can change across runs. This happens despite setting ORDER BY, because the function does not honor the order-sensitivity flag. Aggregation functions that are marked order-sensitive allow users to use ORDER BY within the aggregation function call to specify an ordering of the input.
Your environment: n/a.
Expected behavior: SELECT arbitrary(x ORDER BY x) FROM some_large_table (where the table has column x) should return the same result across multiple runs.
Current behavior: the above should ideally always return the same result; however, the value returned is indeterminate.
Possible solution: the arbitrary() aggregate function can be marked as order-sensitive, as shown at the referenced code location (line 171).
Context: the Velox team is trying to ensure correctness of its aggregate functions. Because Presto's arbitrary() is not marked order-sensitive, it is hard to validate its Velox equivalent.
prestodbpresto | ORC reader does not support TIME type: "Unsupported Hive type: time" error | Bug | Description: create an Iceberg table with a column of TIME datatype and ORC format. Inserts into the table work fine, whereas SELECT * from it fails; it is unable to retrieve the data, as the query throws the error "Unsupported Hive type: time".
Presto version used: v0.284. Data source and connector used: Iceberg.
Expected behavior: if the user is able to create a table and insert values of type TIME with format ORC, then the user should be able to query that data.
Current behavior: an "Unsupported Hive type: time" error is thrown when running the SELECT query on the table.
Possible solution: providing support for TimeType in the column readers would fix this issue.
Steps to reproduce: 1. Create a table having a column (view_time) where the data type is TIME, with format ORC. 2. Insert values into the table. 3. SELECT * to fetch the entire data from the table throws the error "Unsupported Hive type: time".
prestodbpresto | [native] fix /v1/operation endpoint | Bug | The /v1/operation endpoint is described in #21745. Here we document a few known issues with that endpoint that would be nice to fix.
- Invalid values of parameters produce cryptic errors. For example, an invalid value of limit for task listAll returns 500 "stoi": cmd: task listAll limit=abc → HTTP ERROR 500: stoi
- The task listAll command returns invalid JSON. Sample output: "20240122-231941-13733-9yxym-21.0.251.0": {id: 20240122-231941-13733-9yxym-21.0.251.0, numFinishedDrivers: 0, numRunningDrivers: 5, numThreads: 1, numTotalDrivers: 5, pauseRequested: 0, shortId: tk.52221, state: Running, terminateRequested: 0}, "20240122-231941-13733-9yxym-19.0.243.0": {id: 20240122-231941-13733-9yxym-19.0.243.0, numFinishedDrivers: 0, numRunningDrivers: 5, numThreads: 0, numTotalDrivers: 5, pauseRequested: 0, shortId: tk.52221, state: Running, terminateRequested: 0} ... 50 more tasks ... parse error on line 1.
- systemConfig getProperty doesn't reject non-existent properties: cmd: systemConfig getProperty name=foo
- systemConfig setProperty doesn't reject invalid values for a property: cmd: systemConfig setProperty name=task.max-drivers-per-task value=abc → "Have set system property value task.max-drivers-per-task to abc. Old value was 123."
- Canonical REST uses GET for read-only operations and PUT/POST/DELETE for modifications. It would be more natural to define GET /v1/operation/systemConfig?name=xxx to fetch the current value and PUT /v1/operation/systemConfig?name=xxx&value=yyy to update the value.
cc: xiaoxmeng, tanjialiang, tdcmeehan, majetideepak
prestodbpresto | [bug] presto-benchto-benchmarks driver not starting | Bug | presto-benchto-benchmarks does not start after the spring-core version update made to fix a vulnerability. Reverting that fix makes the presto-benchto-benchmarks driver work fine.
Your environment: 0.286-SNAPSHOT version; Hive connector with TPCH benchmark.
Expected behavior: the presto-benchto-benchmarks driver should be up and the Presto driver loaded.
Current behavior: com.teradata.benchto.driver.DriverApp does not start.
Steps to reproduce:
1. Build the master branch (0.286-SNAPSHOT).
2. Launch Presto and benchto-service with the TPCH benchmark.
3. Launch the presto-benchto-benchmarks driver with the command as provided in the README.md of presto-benchto-benchmarks: java -Xmx1g -jar presto-benchto-benchmarks/target/presto-benchto-benchmarks-executable.jar --sql presto-benchto-benchmarks/src/main/resources/sql --benchmarks presto-benchto-benchmarks/src/main/resources/benchmarks --activeBenchmarks presto/tpch --profile presto-devenv
4. No logs are printed on the console to show benchto is starting.
5. After reverting the vulnerability fix, we can see benchto start with logs as shown below; the macro-benchmarking framework works:
16:24:16.081 INFO main c.teradata.benchto.driver.DriverApp - Starting DriverApp v0.4 on ip-10-0-43-33 with PID 34269, started by ubuntu in /home/ubuntu/presto286
16:24:16.695 INFO main o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final
16:24:17.349 INFO main c.teradata.benchto.driver.DriverApp - Started DriverApp in 1.457 seconds (JVM running for 1.754)
16:24:17.349 INFO main c.t.b.d.e.BenchmarkExecutionDriver - Running benchmarks(executionSequenceId=2024-01-17T16:24:17.349) with properties: BenchmarkProperties{sqlDir=resources/sql, benchmarksDir=resources/benchmarks, executionSequenceId=null, environmentName=presto-devenv, graphiteProperties=GraphiteProperties{cpuGraphiteExpr=null, memoryGraphiteExpr=null, networkGraphiteExpr=null, graphiteResolutionSeconds=0, graphiteMetricsDelaySeconds=0, graphiteMetricsCollectionEnabled=false}, frequencyCheckEnabled=true, activeBenchmarks=presto/tpch}
16:24:17.351 INFO main c.t.b.driver.loader.BenchmarkLoader - Searching for benchmarks in classpath
16:24:17.355 INFO main c.t.b.driver.loader.BenchmarkLoader - Benchmarks found: resources/benchmarks/presto/kafka.yaml, resources/benchmarks/presto/distributed_sort.yaml, resources/benchmarks/presto/tpcds.yaml, resources/benchmarks/presto/tpch.yaml
16:24:17.361 INFO main c.t.b.driver.loader.BenchmarkLoader - Excluded benchmarks: (none; header "benchmark name | data source | runs | prewarms | concurrency" with no rows)
16:24:17.399 INFO main c.t.b.driver.loader.BenchmarkLoader - Recently tested benchmarks: (none)
16:24:17.399 INFO main c.t.b.driver.loader.BenchmarkLoader - Selected benchmarks: (none)
16:24:17.399 INFO main c.t.b.d.e.BenchmarkExecutionDriver - Loaded 0 benchmarks
16:24:17.399 WARN main c.t.b.d.e.BenchmarkExecutionDriver - No benchmarks selected, exiting
Context: this issue is a blocker to using benchto-service and benchto-driver to do macro benchmarking of supported benchmarks like TPCH etc.
prestodbpresto | Presto JDBC driver needs to upgrade the Jackson library to 2.16.0 due to various CVEs | Bug | The latest Presto JDBC driver (0.285) appears to still be using Jackson 2.10, which is old. There are several well-publicized CVEs against this version of Jackson, notably: 1. the com.fasterxml.jackson.core:jackson-core package before version 2.15.0 is vulnerable to denial of service (DoS): PRISMA-2023-0067, PRISMA-2023-0068, PRISMA-2023-0069; 2. CVE-2023-35116: jackson-databind is vulnerable to denial of service, fixed in Jackson 2.16.0.
prestodbpresto | kCounterSpillPeakMemoryBytes defined as a histogram type but recorded as a regular metric | Bug | The kCounterSpillPeakMemoryBytes metric is of histogram type, defined here (L205–L210): a histogram metric with bucket width 1L * 512 * 1024 * 1024, minimum 0, and maximum bucket value 20L * 1024 * 1024 * 1024 (20 GB), percentile 100. It is, however, recorded as a regular metric (L620); histogram metrics must be recorded through the histogram recording path instead.
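The distinction matters because a histogram bucketizes every recorded value, whereas a regular counter/gauge keeps only a single number; recording spill peak memory through the regular path throws the distribution away. A toy Python model of histogram-style recording with the bucket parameters from the definition above (illustrative only; the real code uses Velox's metric macros):

```python
BUCKET_WIDTH = 512 * 1024 * 1024          # 512 MB bucket width, per the definition
MAX_VALUE = 20 * 1024 * 1024 * 1024       # 20 GB maximum bucket value

def record_histogram(hist, value):
    # Clamp to the configured range, then count into a fixed-width bucket.
    value = min(max(value, 0), MAX_VALUE)
    bucket = value // BUCKET_WIDTH
    hist[bucket] = hist.get(bucket, 0) + 1

hist = {}
for spill_bytes in (100 << 20, 600 << 20, 3 << 30):   # 100 MB, 600 MB, 3 GB
    record_histogram(hist, spill_bytes)

# Buckets 0 (100 MB), 1 (600 MB), and 6 (3 GB) each saw one value.
print(hist)  # {0: 1, 1: 1, 6: 1}
```

A regular metric recorded at the same call sites would have retained only the most recent value, which is exactly the information loss the issue describes.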
prestodbpresto | Resolve the /tmp dependency of prestodb on RHEL 8 machines | Bug | Resolve the /tmp dependency of prestodb on RHEL 8 machines.
Your environment: all Presto versions; storage: HDFS; data source and connector: Hive connector; deployment: on-prem.
Expected behavior: the server starts without any error.
Possible solution: try adding a property in jvm.config to change the tmp directory, like below, and retry to see if the error still persists: -Djava.io.tmpdir=/path/to/tmpdir
Steps to reproduce: 1. RHEL 8 setup. 2. Presto starts (JVM starts newly).
Context: Presto code is not directly dependent on the /tmp directory, but the JVM is, and the JVM creates a file there ("ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, with debug_info, not stripped"). To mitigate this problem, one solution is to use the -Djava.io.tmpdir setting in jvm.config, which helps in configuring a custom tmp directory.
prestodbpresto | [native] client gets error formatting array results | Bug | SQL: SELECT split('1,2,3,4,5,6', ',') gets an error (screenshot). Expected data (screenshot). Real data (screenshot). Maybe the native result JSON format has some error. I tested more cases and found that all columns which are complex types may have this problem, for example: SELECT (1, 2); SELECT ARRAY[ROW(1, 'x'), ROW(2, 'y')]; SELECT MAP(ARRAY[1, 2], ARRAY['x', 'y']).
prestodbpresto | Incorrect results of MIN_BY/MAX_BY(x, y, n) in window operations | Bug | Hi community, I noticed that MIN_BY/MAX_BY(x, y, n) produces incorrect results when used in a window operation. As an example, for the query below with MIN_BY(x, y, n): since the window frame is UNBOUNDED PRECEDING to CURRENT ROW, when the second row is processed the function should aggregate both the first and the second input rows, and hence there should be two values in the result array, i.e. [1, 2]. However, the current result for the second row is [2] only.
SELECT min_by(c0, c0, 2) OVER (PARTITION BY c2 ORDER BY c3 ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM (VALUES (1, 10, false, 0), (2, 10, false, 1)) AS t(c0, c1, c2, c3)
This error is caused by AbstractMinMaxByNAggregationFunction.output() breaking the assumption that AggregateWindowFunction.processRow() makes for expanding frames. AggregateWindowFunction.processRow() has an optimization branch for frames that are the same as, or expand, the previously computed frame, where it only adds the additional input rows of the new frame to the accumulator (L67–L71). This draws an implicit assumption that the contents of the accumulator from the previous frame remain in the accumulator when processing the current frame. This, however, is not true with MIN_BY, because AbstractMinMaxByNAggregationFunction.output() clears the accumulator (L147).
Your environment: Presto version 0.286.
Expected behavior: expected results are [1], [1, 2].
Current behavior: current results are [1], [2].
Possible solution: add the drained elements back (heap.addAll from the reversed block builder) at the end of AbstractMinMaxByNAggregationFunction.output(), before out.closeEntry().
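The interaction can be reproduced with a toy model: an expanding-frame window loop that feeds the accumulator only the rows that are new to each frame, paired with an accumulator whose output step destroys its state. All names here are illustrative; this is a sketch of the assumption described above, not Presto's actual classes:

```python
class MinByNAccumulator:
    """Toy min_by(x, y, n): keeps the x values for the n smallest y."""
    def __init__(self, n):
        self.n = n
        self.pairs = []          # list of (y, x)

    def add(self, x, y):
        self.pairs.append((y, x))
        self.pairs.sort()
        del self.pairs[self.n:]  # keep only the n smallest y

    def output(self, clear_state):
        result = [x for _, x in self.pairs]
        if clear_state:          # mimics an output() that drains the heap
            self.pairs.clear()
        return result

def window_expanding_frames(rows, clear_state):
    # Mimics the expanding-frame optimization: each frame extends the
    # previous one, so only the newly added row is fed to the accumulator.
    acc = MinByNAccumulator(2)
    results = []
    for x, y in rows:
        acc.add(x, y)
        results.append(acc.output(clear_state))
    return results

rows = [(1, 1), (2, 2)]
print(window_expanding_frames(rows, clear_state=False))  # [[1], [1, 2]]
print(window_expanding_frames(rows, clear_state=True))   # [[1], [2]]
```

With `clear_state=True` the second frame's result silently drops the first row, which is exactly the [1], [2] instead of [1], [1, 2] behavior reported; restoring the state after output (the suggested fix) makes the optimization's assumption hold again.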
prestodbpresto | ArbitraryOutputBuffer.isOverutilized() reporting affects scaled writer scheduling | Bug | In Meta's Prestissimo (native) shadow tests, we found that ArbitraryOutputBuffer.isOverutilized() in the Presto Java worker reports true if the producer has finished producing data. This seems strange: the coordinator leverages this as a signal to schedule more writers, but if an output buffer has already finished, then we shouldn't schedule more writers. This apparently causes more writers to be scheduled when some output buffers have finished producing data during query execution. After making Velox align with Presto Java in isOverutilized() reporting, Prestissimo queries got more writers scheduled by the coordinator and ran much faster. We are not sure what the original design logic behind the Java implementation of ArbitraryOutputBuffer.isOverutilized() is. mbasmanova thinks it could be due to the new writer scheduling logic in the coordinator (ScaledWriterScheduler.getNewTaskCount()), which requires more than half of the producers to be over-utilized. Given that, if we don't report over-utilization for a finished producer, then the coordinator might not be able to schedule new writers once a non-trivial number of producers have finished, as we can't reach the 50% over-utilized condition. If that is the case, mbasmanova suggests excluding the finished producers when deciding whether to schedule new writers; then the worker doesn't need to report over-utilization for a finished producer, which seems to be a hack on the worker side. cc: mbasmanova, kewang1024
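The suggested coordinator-side fix can be illustrated with a small model of the "more than half of producers over-utilized" condition: if finished producers stay in the denominator while honestly reporting not-over-utilized, scaling stalls; excluding them restores the signal. This is a sketch of the logic as described in the issue, not ScaledWriterScheduler's actual code:

```python
def should_add_writers(producers, exclude_finished):
    """producers: list of (finished, over_utilized) flag pairs."""
    if exclude_finished:
        producers = [p for p in producers if not p[0]]
    if not producers:
        return False
    over = sum(1 for finished, over_utilized in producers if over_utilized)
    return over * 2 > len(producers)   # more than half over-utilized

# 3 of 5 producers have already finished; both remaining ones are backed up.
producers = [(True, False)] * 3 + [(False, True)] * 2

print(should_add_writers(producers, exclude_finished=False))  # False: 2/5 <= 50%
print(should_add_writers(producers, exclude_finished=True))   # True:  2/2 > 50%
```

This shows why the Java worker's "finished means over-utilized" report acts as a workaround: without it, the unmodified threshold cannot be reached once most producers finish, while removing finished producers from the count achieves the same effect without the worker-side hack.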
prestodbpresto | cte materialization plan error in LateralJoinNode when reference is not explicitly mentioned | Bug | Observed on a test case: select (with test_cte as (select testcol as testcol) select testcol from test_cte) from (values 1) as test_table(testcol). The error is: java.lang.IllegalArgumentException: Invalid node. Expression dependencies (field(1)) not in source plan output. Adding an explicit reference to the table inside seems to work: select (with test_cte as (select testcol as testcol from (values 1) as test_table(testcol)) select testcol from test_cte) from (values 1) as test_table(testcol). Need to deep dive.
prestodbpresto | apache pinot select * query issue after enabling ssl | Bug | Query 20231206_065307_00006_qsx57 failed: java.io.UncheckedIOException: java.io.EOFException: HttpConnectionOverHTTP@450b0c95::DecryptedEndPoint@16d34a8e{...48-43-f096.ip4.static.sl-reverse.com/150.240.67.72:8099<->/172.17.127.59:56016,OPEN,fill=-,flush=P,to=30989/300000}. Your environment: Presto version used: 1.82. Data source and connector used: Apache Pinot. We have enabled SSL in Apache Pinot and tried to query from Presto. When trying to query, all queries work except select * queries; without SSL, all queries work fine. Properties used (names reconstructed from the flattened record): connector.name=pinot, pinot.controller-urls=150.240.67.72:8443, pinot.secure-connection=true, pinot.grpc.tls-key-store-path=/home/asha/pinotssl/keystore.jks, pinot.grpc.tls-key-store-password=***, pinot.grpc.tls-trust-store-path=/home/asha/pinotssl/truststore.jks, pinot.grpc.tls-trust-store-password=***. Current behavior: presto:default> select * from transcript → Query 20231206_065307_00006_qsx57 failed with the java.io.EOFException / java.io.UncheckedIOException shown above.
prestodbpresto | presto allows parsing of invalid timezone names | Bug | Presto's timezone parsing is too permissive and allows timezone names that are not valid according to IANA's timezone database. In addition to other bugs, the following are examples of non-compliant timezone names that are allowed: Etc/+06:00, Etc/+06, Etc/+6, Etc/UTC+6, Etc/UT+6. The only official format is Etc/GMT+1 and similar (EST is not supported, but it should be). Timezones in the format Etc/GMT+10:00, besides not being official, return the wrong result: they don't flip the sign as they should. +1:01 officially also doesn't exist, only +01:00. Expected behavior: these should fail to parse, though it might break backwards compatibility. Current behavior: they are parsed successfully/incorrectly. Steps to reproduce: presto> select from_unixtime(1698528090, 'Etc/+06:00') as col0 → 2023-10-29 03:21:30.000 +06:00 (and other variations described above). cc @mbasmanova @zacw7
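For reference, the set of offset-style zone ids that actually exist under `Etc/` in the IANA tz database is small and whole-hour only; a quick sketch (the membership set below is hand-built from the tz database's `etcetera` file, and the checked names are illustrative reconstructions of the issue's examples):

```python
# Official Etc/ zone ids per the IANA tz database: Etc/GMT, Etc/GMT0,
# Etc/GMT+0 .. Etc/GMT+12 and Etc/GMT-0 .. Etc/GMT-14 (whole hours only,
# with POSIX-style inverted signs: Etc/GMT+6 means UTC-6), plus a few
# aliases such as Etc/UTC. Nothing with minutes or bare offsets exists.
OFFICIAL_ETC = (
    {"Etc/GMT", "Etc/GMT0", "Etc/GMT+0", "Etc/GMT-0",
     "Etc/UTC", "Etc/UCT", "Etc/Universal", "Etc/Zulu", "Etc/Greenwich"}
    | {f"Etc/GMT+{h}" for h in range(1, 13)}
    | {f"Etc/GMT-{h}" for h in range(1, 15)}
)

def is_official_etc(name):
    """True only for zone ids the IANA tz database actually defines under Etc/."""
    return name in OFFICIAL_ETC

print(is_official_etc("Etc/GMT+1"))      # → True
print(is_official_etc("Etc/GMT-14"))     # → True
print(is_official_etc("Etc/GMT+10:00"))  # → False (minutes never appear)
print(is_official_etc("Etc/+06:00"))     # → False (bare offsets aren't ids)
```

A validator like this would reject every name in the issue's list while accepting the official Etc/GMT±N forms.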
prestodbpresto | hbo may cause similar subplans to not use stats | Bug | HBO uses the default PlanNodeIdAllocator while creating the canonical plan. The id allocator auto-increments, effectively making the subplan hash unavailable for use by other queries. Sample canonicalized JSON: {"type": "com.facebook.presto.sql.planner.CanonicalTableScanNode", "id": "82", "table": {"connectorId": "hive", "tableHandle": {"schemaName": "tpch", "tableName": "nation", "layoutIdentifier": "{bucketFilter: null, constraint: {columnDomains: [...]}, domainPredicate: {columnDomains: [...]}, remainingPredicate: {valueBlock: cgaaaejzvevfqvjsqvkbaaaaaae, type: boolean}, schemaTableName: {schema: tpch, table: nation}}"}}, "outputVariables": [{"@type": "variable", "name": "name_varchar(25)_1_regular_438", "type": "varchar(25)"}, {"@type": "variable", "name": "nationkey_bigint_0_regular_439", "type": "bigint"}, {"@type": "variable", "name": "regionkey_bigint_2_regular_440", "type": "bigint"}], "assignments": {"name_varchar(25)_1_regular_438": {"name": "name", "hiveType": "varchar(25)", "typeSignature": "varchar(25)", "hiveColumnIndex": 1, "columnType": "REGULAR", "requiredSubfields": []}, "nationkey_bigint_0_regular_439": {"name": "nationkey", "hiveType": "bigint", "typeSignature": "bigint", "hiveColumnIndex": 0, "columnType": "REGULAR", "requiredSubfields": []}, "regionkey_bigint_2_regular_440": {"name": "regionkey", "hiveType": "bigint", "typeSignature": "bigint", "hiveColumnIndex": 2, "columnType": "REGULAR", "requiredSubfields": []}}}. Here this table scan has the id 82 in its canonicalized JSON, which is hashed. This can prevent other subplans from reusing this run's statistics.
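The failure mode described here can be shown in miniature. This is an illustrative sketch (not Presto's actual classes): if the canonical form of a plan embeds ids from a shared auto-incrementing allocator, two structurally identical subplans canonicalized at different times hash differently, so one query's history can't be matched by another.

```python
# Illustrative sketch: a shared auto-incrementing id allocator leaks into the
# canonical (hashed) representation, breaking cross-query plan matching.
import hashlib
import itertools

_allocator = itertools.count()  # shared, auto-incrementing id allocator

def canonicalize(table, columns, fresh_ids=True):
    # fresh_ids=True models the reported bug; fresh_ids=False models assigning
    # ids deterministically per plan (e.g., always starting from 0).
    node_id = next(_allocator) if fresh_ids else 0
    payload = f'{{"id": {node_id}, "table": "{table}", "columns": {sorted(columns)}}}'
    return hashlib.sha256(payload.encode()).hexdigest()

a = canonicalize("tpch.nation", ["name", "nationkey"])
b = canonicalize("tpch.nation", ["name", "nationkey"])
print(a != b)  # → True: same subplan, different hash, HBO stats not reusable

a = canonicalize("tpch.nation", ["name", "nationkey"], fresh_ids=False)
b = canonicalize("tpch.nation", ["name", "nationkey"], fresh_ids=False)
print(a == b)  # → True: deterministic ids make identical subplans collide
```

Any scheme that assigns ids deterministically from the plan's own structure (rather than from cross-plan allocator state) restores hash equality for identical subplans.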
prestodbpresto | cost calculator overlooks current node hbo statistics for certain queries | Bug | I observed an instance where, in a filter-over-scan scenario, historical statistics were available for the filter node but not for the scan. This is notable because, post execution, this constitutes a single phase, and execution statistics within a phase can be inaccurately associated by HBO. However, the CostCalculator code for filter is: @Override public PlanCostEstimate visitFilter(FilterNode node, Void context) { LocalCostEstimate localCost = LocalCostEstimate.ofCpu(getStats(node.getSource()).getOutputSizeInBytes(node.getSource())); return costForStreaming(node, localCost); } In this setup, the filter statistics are not utilized. That would only matter if a node higher up in the hierarchy used the source stats, but in this case the node above is a project node, which typically does not use these stats. It might be beneficial to attempt estimating the CPU cost based on the current node's statistics for similar nodes.
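The suggested improvement amounts to a small fallback rule, sketched here in Python. The function name and the fallback behavior are assumptions for illustration, not Presto's actual CostCalculator:

```python
# Hypothetical sketch: when HBO has historical statistics for the filter node
# itself, use them for the CPU cost estimate instead of always deriving it
# from the source's output size (current behavior described in the issue).
def filter_cpu_cost(filter_output_bytes, source_output_bytes):
    """Prefer the filter's own (historical) output size; fall back to source."""
    if filter_output_bytes is not None:
        return filter_output_bytes
    return source_output_bytes

# Filter stats available from history -> they drive the estimate:
print(filter_cpu_cost(1_000, 50_000))   # → 1000
# No history for the filter -> source-based estimate is kept:
print(filter_cpu_cost(None, 50_000))    # → 50000
```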
prestodbpresto | plan time and execution time are too long | Bug | Presto version: 0.284. Table DDL: create external table od_eport.od_eport_aed_contrast_city_df (area_code string comment '...', area_name string comment '...', city_name string comment '...') partitioned by (ingestion_time string) row format serde 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' stored as inputformat 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' outputformat 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' location 'hdfs://zwy/warehouse/tablespace/external/hive/od_eport.db/od_eport_aed_contrast_city_df' tblproperties ('translated_to_external'='true', 'bucketing_version'='2', 'external.table.purge'='true', 'spark.sql.create.version'='2.2 or prior', 'spark.sql.sources.schema.numPartCols'='1', 'spark.sql.sources.schema.numParts'='1', 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"area_code","type":"string","nullable":true,"metadata":{"comment":"..."}},{"name":"area_name","type":"string","nullable":true,"metadata":{"comment":"..."}},{"name":"city_name","type":"string","nullable":true,"metadata":{"comment":"..."}},{"name":"ingestion_time","type":"string","nullable":true,"metadata":{}}]}', 'spark.sql.sources.schema.partCol.0'='ingestion_time', 'transient_lastDdlTime'='1678761651'). Query: select * from hive.od_eport.od_eport_aed_contrast_city_df. Current config (JDK 1.8); query explain plan screenshots omitted. Coordinator jvm flags: -server -Xmx8G -XX:+UseG1GC -XX:G1HeapRegionSize=32M -XX:-UseGCOverheadLimit -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError; coordinator=true, node-scheduler.include-coordinator=false, http-server.http.port=8070, query.max-memory=22GB, query.max-memory-per-node=5GB, query.max-total-memory-per-node=5GB, discovery.uri=..., discovery-server.enabled=true, experimental.reserved-pool-enabled=false. Workers (2): -server -Xmx9G (same GC flags); coordinator=false, node-scheduler.include-coordinator=true, http-server.http.port=8070, query.max-memory=22GB, query.max-memory-per-node=6GB, query.max-total-memory-per-node=6GB, discovery.uri=..., experimental.reserved-pool-enabled=false.
prestodbpresto | support implicit mapping of presto datatype char to iceberg datatype string | Bug | Creating an Iceberg table using CTAS fails when the source table has character columns; it is unable to create the table, as the query throws the error "Type not supported for Iceberg: char(3)". Presto version used: v0.282. Data source and connector used: Iceberg. Expected behavior: char fields in the source table should be implicitly mapped to string when used in a CTAS for an Iceberg table. Without the implicit mapping of char to string, the process of using CTAS when the source table has character columns becomes very cumbersome, with the need to specify and cast each individual column. Current behavior: the "Type not supported for Iceberg: char(3)" error gets thrown when creating an Iceberg table using the create table as select (CTAS) operation when the source table contains character columns. Possible solution: support the mapping of the Presto datatype char to the equivalent Iceberg datatype string (PR). Steps to reproduce: 1. create a table in db2 having a column char_col where the data type is char; 2. try creating an Iceberg table using CTAS (select * from the created db2 table); 3. unable to create the table, as the query throws the error "Type not supported for Iceberg: char(3)". Error log from presto server: create table if not exists iceberg_data.black_01.table_01 as select * from db2warehouse.mariam_01.employee_1 limit 10 → Query 20230821_112545_01233_4jq97 failed: Type not supported for Iceberg: char(3). com.facebook.presto.spi.PrestoException: Type not supported for Iceberg: char(3) at com.facebook.presto.iceberg.TypeConverter.toIcebergType(TypeConverter.java:189) at com.facebook.presto.iceberg.IcebergAbstractMetadata.toIcebergSchema(IcebergAbstractMetadata.java:298) at com.facebook.presto.iceberg.IcebergHiveMetadata.beginCreateTable(IcebergHiveMetadata.java:247) at com.facebook.presto.spi.connector.classloader.ClassLoaderSafeConnectorMetadata.beginCreateTable(ClassLoaderSafeConnectorMetadata.java:412) at com.facebook.presto.metadata.MetadataManager.beginCreateTable(MetadataManager.java:830) at com.facebook.presto.execution.scheduler.TableWriteInfo.createWriterTarget(TableWriteInfo.java:96) at com.facebook.presto.execution.scheduler.TableWriteInfo.createWriterTarget(TableWriteInfo.java:118) at com.facebook.presto.execution.scheduler.TableWriteInfo.createTableWriteInfo(TableWriteInfo.java:76) at com.facebook.presto.execution.scheduler.SectionExecutionFactory.createSectionExecutions(SectionExecutionFactory.java:166) at com.facebook.presto.execution.scheduler.LegacySqlQueryScheduler.createStageExecutions(LegacySqlQueryScheduler.java:355) at com.facebook.presto.execution.scheduler.LegacySqlQueryScheduler.<init>(LegacySqlQueryScheduler.java:244) at com.facebook.presto.execution.scheduler.LegacySqlQueryScheduler.createSqlQueryScheduler(LegacySqlQueryScheduler.java:173) at com.facebook.presto.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:607) at com.facebook.presto.execution.SqlQueryExecution.start(SqlQueryExecution.java:455) at com.facebook.presto.$gen.Presto_0_282_20230821_101320_1.run(Unknown Source) at com.facebook.presto.execution.SqlQueryManager.createQuery(SqlQueryManager.java:306) at com.facebook.presto.dispatcher.LocalDispatchQuery.lambda$startExecution$8(LocalDispatchQuery.java:211) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:839)
prestodbpresto | handle case of table with no supported columns | Bug | Handle the case of a table with no supported columns. Your environment: Presto version used: 0.285. Storage: hdfs/s3/gcs/postgresql. Data source and connector used: postgresql. Deployment (cloud or on-prem): mac. Expected behavior: for table public.no_column_table, an error like "Table has no supported columns (all %s columns are not supported)". Current behavior: presto:public> show tables → example_table, example_table1, mm, no_column_table, sample_table (5 rows; Query 20231120_100558_00026_suqwa, FINISHED, 1 node; splits: 19 total, 19 done (100.00%)). presto:public> select * from no_column_table → Query 20231120_100747_00027_suqwa failed: Table public.no_column_table not found. presto:public> show tables → example_table, example_table1, mm, no_column_table, sample_table, unsupported_type_table (6 rows; Query 20231120_102515_00028_suqwa, FINISHED). presto:public> select * from unsupported_type_table → Query 20231120_102529_00029_suqwa failed: Table public.unsupported_type_table not found. Possible solution: throw new TableNotFoundException(tableHandle.getSchemaTableName(), format("Table %s has no supported columns (all %s columns are not supported)", ...)). Steps to reproduce: 1. postgresql connector; 2. in mydatabase: create table no_column_table(); create table unsupported_type_table(data point); 3. from presto-cli: select * from each table. Screenshots: presto> use public; presto:public> select * from unsupported_type_table → Query 20231120_102910_00004_35dqb failed: Table public.unsupported_type_table has no supported columns (all 1 columns are not supported); presto:public> select * from no_column_table → Query 20231120_102918_00005_35dqb failed: Table public.no_column_table has no supported columns (all 0 columns are not supported).
prestodbpresto | native: querying system.runtime.tasks is broken in prestissimo | Bug | Querying the system.runtime.tasks table in Prestissimo fails with the error "Connector with name system is not registered". select * from system.runtime.tasks limit 1 → Query 20231117_221734_00000_h9up6 failed: Connector with name system is not registered. VeloxRuntimeError: Connector with name system is not registered at Unknown: 0 (Unknown Source) at Unknown: 1 (Unknown Source) at Unknown: 2 (Unknown Source) at Unknown: 3 (Unknown Source). This query is expected to return correct results like the other system.runtime tables; system.runtime.nodes, system.runtime.queries and system.runtime.transactions can be queried successfully.
prestodbpresto | native: scalar function presto.default.gte not registered with arguments (interval year to month, interval year to month) against velox | Bug | Expected behavior: the following query is supposed to fail with the expected error message, but it currently fails with an unexpected error: assertQueryFails("select array_agg(a) over(order by a range x preceding) from (values (date '2001-01-31', interval '1' year)) t(a, x)", "Window frame offset value must not be negative or null"). Output against the C++ worker: VeloxUserError: Scalar function presto.default.gte not registered with arguments: (INTERVAL YEAR TO MONTH, INTERVAL YEAR TO MONTH). Found functions registered with the following signatures: (tinyint,tinyint)->boolean, (smallint,smallint)->boolean, (integer,integer)->boolean, (bigint,bigint)->boolean, (real,real)->boolean, (double,double)->boolean, (decimal(a_precision,a_scale),decimal(a_precision,a_scale))->boolean, (timestamp with time zone,timestamp with time zone)->boolean, (date,date)->boolean, (timestamp,timestamp)->boolean, (boolean,boolean)->boolean, (varbinary,varbinary)->boolean, (varchar,varchar)->boolean. The query also fails if we provide a positive interval for the frame: select array_agg(a) over(order by a range x preceding) from (values (interval '2' year, interval '1' year)) t(a, x).
prestodbpresto | native: bound caching of parsed types | Bug | We recently added a cache to store parsed types to improve performance. However, this cache is unbounded, which can lead to an observable memory footprint in production systems. A better alternative is to use the SimpleLRUCache from Velox.
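The bounded-cache policy the record proposes (Velox's SimpleLRUCache is C++) can be illustrated with a minimal LRU sketch in Python; the class and the cached type strings below are illustrative only:

```python
# Minimal bounded LRU cache: inserting beyond capacity evicts the least
# recently used entry, keeping the memory footprint bounded, unlike the
# unbounded parsed-type cache described in the issue.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("varchar(25)", "VARCHAR")
cache.put("bigint", "BIGINT")
cache.get("varchar(25)")              # touch: "bigint" is now least recent
cache.put("array(integer)", "ARRAY")  # evicts "bigint"
print(cache.get("bigint"))            # → None
print(cache.get("varchar(25)"))       # → VARCHAR
```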
prestodbpresto | fix insert values for single column of rowtype | Bug | Description: currently, when we try to execute insert values with a single column of RowType, like row(a int, b varchar) or row(r row(a int, b varchar)), it fails because of incorrect unfolding of the row. This PR fixes the problem. Fixes issue. Test plan: make sure the fix does not affect other test cases; newly added test cases in AbstractTestDistributedQueries#testInsert. Contributor checklist: [x] Please make sure your submission complies with our development, format, and commit message guidelines, and the pull request and attribution guidelines. [x] PR description addresses the issue accurately and concisely; if the change is non-trivial, a GitHub issue is referenced. [x] Documented new properties (with their default value), SQL syntax, functions, or other functionality. [x] If release notes are required, they follow the release notes guidelines. [x] Adequate tests were added if applicable. [x] CI passed. Release notes: no release note.
prestodbpresto | presto returns 0 results when the query contains more than 100 tables | Bug | Presto returns 0 results when the query contains more than 100 tables. Here is the sample query: select table_name, column_name from xxxuser.information_schema.columns where xxxuser.information_schema.columns.table_schema = 'xxx_user' order by xxxuser.information_schema.columns.table_name. The above query returns non-zero results if the number of tables is 100 or less. Your environment: Presto version used: 0.263. Storage: hdfs/s3/gcs. Data source and connector used: Oracle (oracle connector). Deployment (cloud or on-prem): on-prem. Expected behavior: the query should return the tables and columns from the schema specified in the query. Current behavior: the query returns correct results if the number of tables is less than or equal to 100; if there are more than 100 tables, it returns 0 rows and does not throw any error in the logs. Possible solution: is there any configuration change in Presto that limits the table scan to 100? Steps to reproduce: 1. create 100 tables in oracle; 2. run the above query in the presto cli (command line) and it returns the rows; 3. create 101 tables in oracle; 4. run the above query in the presto cli and it returns 0 rows.
prestodbpresto | testprestosparkqueryrunner fails due to error message mismatch | Bug | Full log at: TestPrestoSparkQueryRunner#testStorageBasedBroadcastJoinDeserializedMaxThreshold expects an error message with the detailed info "Broadcast size: 2MB", but the actual message is "Allocated: 2.00MB, Delta: 9.97kB HashBuilderOperator, Top Consumers: {HashBuilderOperator=2.00MB}, Details: taskIds: [0.0.0], reservation: 2.00MB, topConsumers: {type: HashBuilderOperator, planNodeId: 4, reservations: [666.16kB, 582.90kB, 544.91kB, 253.55kB], total: 2.00MB} [Inner, REPLICATED]". It seems the test needs to be updated so the expected message matches the latest code. 2023-11-08T12:35:53.4069053Z [ERROR] Tests run: 1282, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 1,378.35 s <<< FAILURE! in TestSuite. 2023-11-08T12:35:53.4071636Z [ERROR] com.facebook.presto.spark.adaptive.execution.TestPrestoSparkAdaptiveQueryRunner.testStorageBasedBroadcastJoinDeserializedMaxThreshold, Time elapsed: 1.495 s <<< FAILURE! java.lang.AssertionError: Expected exception message 'Query exceeded per-node broadcast memory limit of 2MB [Allocated: 2.00MB, Delta: 9.97kB HashBuilderOperator, Top Consumers: {HashBuilderOperator=2.00MB}, Details: taskIds: [0.0.0], reservation: 2.00MB, topConsumers: {type: HashBuilderOperator, planNodeId: 4, reservations: [666.16kB, 582.90kB, 544.91kB, 253.55kB], total: 2.00MB}] [Inner, REPLICATED]' to match 'Query exceeded per-node broadcast memory limit of 2MB [Broadcast size: 2MB]' for query: select * from lineitem l join orders o on l.orderkey = o.orderkey; at org.testng.Assert.fail(Assert.java:98) at com.facebook.presto.tests.QueryAssertions.assertExceptionMessage(QueryAssertions.java:351) at com.facebook.presto.tests.QueryAssertions.assertQueryFails(QueryAssertions.java:332) at com.facebook.presto.tests.AbstractTestQueryFramework.assertQueryFails(AbstractTestQueryFramework.java:282) at com.facebook.presto.spark.TestPrestoSparkQueryRunner.testStorageBasedBroadcastJoinDeserializedMaxThreshold(TestPrestoSparkQueryRunner.java:933) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:135) at org.testng.internal.invokers.TestInvoker.invokeMethod(TestInvoker.java:673) at org.testng.internal.invokers.TestInvoker.invokeTestMethod(TestInvoker.java:220) at org.testng.internal.invokers.MethodRunner.runInSequence(MethodRunner.java:50) at org.testng.internal.invokers.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:945) at org.testng.internal.invokers.TestInvoker.invokeTestMethods(TestInvoker.java:193) at org.testng.internal.invokers.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146) at org.testng.internal.invokers.TestMethodWorker.run(TestMethodWorker.java:128) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750); Caused by: com.facebook.presto.ExceededMemoryLimitException: Query exceeded per-node broadcast memory limit of 2MB [Allocated: 2.00MB, Delta: 9.97kB HashBuilderOperator, Top Consumers: {HashBuilderOperator=2.00MB}, Details: taskIds: [0.0.0], reservation: 2.00MB, topConsumers: {type: HashBuilderOperator, planNodeId: 4, reservations: [666.16kB, 582.90kB, 544.91kB, 253.55kB], total: 2.00MB}] [Inner, REPLICATED] at com.facebook.presto.spark.util.PrestoSparkFailureUtils.toPrestoSparkFailure(PrestoSparkFailureUtils.java:60) at com.facebook.presto.spark.execution.AbstractPrestoSparkQueryExecution.execute(AbstractPrestoSparkQueryExecution.java:396) at com.facebook.presto.spark.PrestoSparkQueryRunner.executeWithRetryStrategies(PrestoSparkQueryRunner.java:525) at com.facebook.presto.spark.PrestoSparkQueryRunner.execute(PrestoSparkQueryRunner.java:508) at com.facebook.presto.tests.QueryAssertions.assertQueryFails(QueryAssertions.java:328) ... 17 more. 2023-11-08T12:35:53.4130904Z [ERROR] com.facebook.presto.spark.TestPrestoSparkQueryRunner.testStorageBasedBroadcastJoinDeserializedMaxThreshold, Time elapsed: 0.478 s <<< FAILURE! java.lang.AssertionError: Expected exception message 'Query exceeded per-node broadcast memory limit of 2MB [Allocated: 2.00MB, Delta: 7.51kB HashBuilderOperator, Top Consumers: {PrestoSparkRemoteSourceOperator=3.88MB, HashBuilderOperator=2.00MB}, Details: taskIds: [0.0.0], reservation: 5.88MB, topConsumers: {type: PrestoSparkRemoteSourceOperator, planNodeId: 238, reservations: [3.88MB, 0B], total: 3.88MB}, {type: HashBuilderOperator, planNodeId: 4, reservations: [657.92kB, 655.67kB, 372.46kB, 360.00kB], total: 2.00MB}] [Inner, REPLICATED]' to match 'Query exceeded per-node broadcast memory limit of 2MB [Broadcast size: 2MB]' for query: select * from lineitem l join orders o on l.orderkey = o.orderkey; same stack as above down to QueryAssertions.assertQueryFails(QueryAssertions.java:328) ... 17 more, Caused by: com.facebook.presto.ExceededMemoryLimitException with the 7.51kB/PrestoSparkRemoteSourceOperator details just quoted. 2023-11-08T12:35:53.6275559Z [INFO] Results: [ERROR] Failures: [ERROR] TestPrestoSparkQueryRunner.testStorageBasedBroadcastJoinDeserializedMaxThreshold:933->AbstractTestQueryFramework.assertQueryFails:282 Expected exception message (the 7.51kB variant above) to match 'Query exceeded per-node broadcast memory limit of 2MB [Broadcast size: 2MB]'; [ERROR] TestPrestoSparkAdaptiveQueryRunner>TestPrestoSparkQueryRunner.testStorageBasedBroadcastJoinDeserializedMaxThreshold:933->AbstractTestQueryFramework.assertQueryFails:282 Expected exception message (the 9.97kB variant above) to match 'Query exceeded per-node broadcast memory limit of 2MB [Broadcast size: 2MB]'. Your environment: Presto version used: latest master. Expected behavior: the error messages should match. Current behavior: expected an error message with the additional info "Broadcast size: 2MB", but the actual is "Allocated: 2.00MB, Delta: 9.97kB HashBuilderOperator, Top Consumers: {HashBuilderOperator=2.00MB}, Details: taskIds: [0.0.0], reservation: 2.00MB, topConsumers: {type: HashBuilderOperator, planNodeId: 4, reservations: [666.16kB, 582.90kB, 544.91kB, 253.55kB], total: 2.00MB} [Inner, REPLICATED]". Possible solution: fix the test with the correct error message.
prestodbpresto | native: make memory-pool-init-capacity and memory-pool-transfer-capacity registered systemconfig properties | Bug | memory-pool-init-capacity and memory-pool-transfer-capacity are not registered SystemConfig properties (L131). This leads to an "unregistered property" warning during server initialization, which is misleading. These properties should be registered SystemConfig properties. Steps to reproduce: add entries for memory-pool-init-capacity and memory-pool-transfer-capacity in config.properties and start Prestissimo: memory-arbitrator-kind=SHARED, memory-pool-init-capacity=536870912, memory-pool-transfer-capacity=536870912.
prestodbpresto | native: support for char(n) is missing in velox | Bug | It seems we currently don't support the char(n) type in Velox. One can create a table with this type, but can't do any operation on the table. Current behavior: presto:tpch> describe tab2 → column | type | extra | comment: col1 integer, col2 char(20) (2 rows; Query 20231106_204830_00050_qqghb, FINISHED, 2 nodes; splits: 3 total, 3 done (100.00%); latency: client side 94ms, server side 77ms; 2 rows, 117B, 25 rows/s, 1.48kB/s). presto:tpch> select * from tab2 → Query 20231106_204836_00051_qqghb failed: Failed to parse type [char(20)]. presto:tpch> insert into tab2 values (1, 'aaa') → Query 20231106_205006_00052_qqghb failed: Failed to parse type [char(20)].
prestodbpresto | native: cast from varchar to timestamp with time zone not supported in velox | Bug | It seems the cast from varchar to timestamp with time zone is not yet supported in Velox. Expected behavior: the following query is supposed to succeed, but it currently fails against Velox workers: select cast(orderdate as timestamp with time zone) from orders limit 1. This query is supposed to return output like: _col0: 1993-10-31 00:00:00.000 Asia/Kolkata (1 row). Current behavior: presto:tpch> select cast(orderdate as timestamp with time zone) from orders limit 1 → Query 20231106_142314_00003_css8j, FAILED, 1 node; splits: 9 total, 0 done (0.00%); latency: client side 370ms, server side 351ms; 15K rows, 5.94kB, 42.7K rows/s, 16.9kB/s. Query 20231106_142314_00003_css8j failed: Cannot cast VARCHAR to TIMESTAMP WITH TIME ZONE. cast((orderdate) as timestamp with time zone)
prestodbpresto | ignore nulls in window aggregates doesn't seem to work | Bug | Window aggregates in SQL support the IGNORE NULLS clause. It seems like the results with and without this clause are the same with respect to nulls in the input data. Example: the function array_agg(c) over (partition by a order by b rows between 1 preceding and current row), with and without IGNORE NULLS, returns the same result. presto:tiny> select a, array_agg(c) over (partition by a order by b rows between 1 preceding and current row) from (values (1, 1, 3), (1, 2, null), (1, 4, 2)) as t(a, b, c) → a | _col1: 1 [3], 1 [3, null], 1 [null, 2] (3 rows). presto:tiny> select a, array_agg(c) ignore nulls over (partition by a order by b rows between 1 preceding and current row) from (values (1, 1, 3), (1, 2, null), (1, 4, 2)) as t(a, b, c) → a | _col1: 1 [3], 1 [3, null], 1 [null, 2] (3 rows). Expected behavior: array_agg as a regular aggregate doesn't have any explicit IGNORE NULLS option, so there isn't a basis for comparison, but the result I would've expected is: a | _col1: 1 [3], 1 [3], 1 [2] (3 rows). Most other systems I checked, like DuckDB, error on IGNORE NULLS in aggregates; if we don't support any special behavior, then Presto should error as well.
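The semantics the record expects can be sketched as a tiny reference implementation. This is a hypothetical model of ROWS BETWEEN 1 PRECEDING AND CURRENT ROW with and without IGNORE NULLS, not Presto's actual evaluator:

```python
# Sketch of windowed array_agg over a single ordered partition: each row's
# frame is [1 preceding, current row]; IGNORE NULLS drops nulls from the
# aggregated values before they are collected.
def windowed_array_agg(values, ignore_nulls):
    out = []
    for i in range(len(values)):
        frame = values[max(0, i - 1): i + 1]  # 1 preceding .. current row
        if ignore_nulls:
            frame = [v for v in frame if v is not None]
        out.append(frame)
    return out

c = [3, None, 2]  # column c ordered by b within partition a = 1
print(windowed_array_agg(c, ignore_nulls=False))  # → [[3], [3, None], [None, 2]]
print(windowed_array_agg(c, ignore_nulls=True))   # → [[3], [3], [2]]
```

The `ignore_nulls=True` output matches the expected results quoted in the record ([3], [3], [2]), while Presto currently returns the `ignore_nulls=False` output in both cases.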
prestodbpresto | Reading Delta Lake tables with Apache Presto: "readDirect unsupported in RemoteBlockReader" | Bug | Hi everyone, we are having problems reading Delta Lake tables stored in Hadoop HDFS through Apache Presto. We run the query using the Apache Presto SQL engine through presto-cli 0.283, the DBeaver IDE, and Qlik Sense:
SELECT * FROM delta.gold.my_table LIMIT 10;
Error: readDirect unsupported in RemoteBlockReader. If we execute the same query through an Apache Spark cluster session, the data is read successfully.
Here's the architecture/environment (on-premise):
Cluster A: Apache Hadoop 3.3.4 with the HDFS service and 3 data nodes (replication factor 3). There is a data lake layer, and a Delta Lake table created with Apache Spark via PySpark.
Cluster B: Apache Spark 3.3.0 with 6 worker nodes, Python 3.10 with the libraries pyspark 3.3.0 and delta-spark 2.3.0.
Metadata: Apache Hive 3.1.3. For every table created with Apache Spark, Apache Hive stores the metadata (HDFS path, schema name, table name, attributes, comments, etc.).
SQL engine: Apache Presto 0.283 (1fa586a). All servers run with JDK 8u341.
Delta Lake table creation from PySpark:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DecimalType, LongType, DateType, TimestampType
DeltaTable.createIfNotExists(sparkSession=s_session) \
    .tableName("delta.gold.my_table") \
    .addColumn("column_a", dataType=LongType(), nullable=False) \
    .addColumn("column_b", dataType=LongType(), nullable=False) \
    .addColumn("column_c", dataType=StringType(), nullable=False) \
    .addColumn("column_n", dataType=TimestampType(), nullable=False) \
    .property("description", "mmmmmmmmmmmmmmmmmmmm") \
    .location("hdfs://<hdfs_servername>:<hdfs_server_port>/data/deltalake/gold/my_table") \
    .execute()
This table has 59 fields.
Apache Presto configuration files (the contents of the following files are all located on the Apache Presto coordinator node):
delta.properties: connector.name=delta, hive.metastore.uri=thrift://metastore-hive:9083
node.properties: node.environment=production, node.data-dir=/var/apache-presto-data
jvm.config: -server -Xmx16G -XX:+UseG1GC -XX:G1HeapRegionSize=32M -XX:+UseGCOverheadLimit -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError
log.properties: com.facebook.presto=DEBUG
config.properties: http-server.http.port=10500, query.max-memory=10GB, query.max-memory-per-node=2GB, discovery.uri=...
core-site.xml: fs.defaultFS=hdfs://<hdfs_servername>:<hdfs_server_port> (URI of the namenode)
hdfs-site.xml: dfs.namenode.name.dir=/data/hdfs/namenode (<hdfs_servername>), dfs.namenode.fs-limits.min-block-size=32, dfs.replication=3, dfs.namenode.handler.count=1, dfs.client.use.datanode.hostname=true (whether clients should use datanode hostnames when connecting to datanodes), dfs.client.use.legacy.blockreader=false
The error trace when we trigger the query through Apache Presto:
Error type: INTERNAL_ERROR, error code: GENERIC_INTERNAL_ERROR (65536)
Stack trace:
java.lang.UnsupportedOperationException: readDirect unsupported in RemoteBlockReader
    at org.apache.hadoop.hdfs.RemoteBlockReader.read(RemoteBlockReader.java:492)
    at org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:789)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:823)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:883)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:938)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
    at shadedelta.org.apache.parquet.hadoop.util.H2SeekableInputStream$H2Reader.read(H2SeekableInputStream.java:81)
    at shadedelta.org.apache.parquet.hadoop.util.H2SeekableInputStream.readFully(H2SeekableInputStream.java:90)
    at shadedelta.org.apache.parquet.hadoop.util.H2SeekableInputStream.readFully(H2SeekableInputStream.java:75)
    at shadedelta.org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:575)
    at shadedelta.org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:776)
    at shadedelta.org.apache.parquet.hadoop.ParquetFileReader.open(ParquetFileReader.java:657)
    at shadedelta.org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:152)
    at shadedelta.org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
    at shadedelta.com.github.mjakubowski84.parquet4s.ParquetIterableImpl$$anon$3.hasNext(ParquetReader.scala:144)
    at io.delta.standalone.internal.actions.CustomParquetIterator.hasNext(MemoryOptimizedLogReplay.scala:132)
    at io.delta.standalone.internal.actions.MemoryOptimizedLogReplay$$anon$1.$anonfun$ensureNextIterIsReady$3(MemoryOptimizedLogReplay.scala:81)
    at io.delta.standalone.internal.actions.MemoryOptimizedLogReplay$$anon$1.$anonfun$ensureNextIterIsReady$3$adapted(MemoryOptimizedLogReplay.scala:81)
    at scala.Option.exists(Option.scala:376)
    at io.delta.standalone.internal.actions.MemoryOptimizedLogReplay$$anon$1.ensureNextIterIsReady(MemoryOptimizedLogReplay.scala:81)
    at io.delta.standalone.internal.actions.MemoryOptimizedLogReplay$$anon$1.hasNext(MemoryOptimizedLogReplay.scala:90)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:43)
    at scala.collection.Iterator.foreach(Iterator.scala:943)
    at scala.collection.Iterator.foreach$(Iterator.scala:943)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
    at io.delta.standalone.internal.SnapshotImpl.loadTableProtocolAndMetadata(SnapshotImpl.scala:141)
    at io.delta.standalone.internal.SnapshotImpl.x$1$lzycompute(SnapshotImpl.scala:131)
    at io.delta.standalone.internal.SnapshotImpl.x$1(SnapshotImpl.scala:131)
    at io.delta.standalone.internal.SnapshotImpl.protocolScala$lzycompute(SnapshotImpl.scala:131)
    at io.delta.standalone.internal.SnapshotImpl.protocolScala(SnapshotImpl.scala:131)
    at io.delta.standalone.internal.SnapshotImpl.<init>(SnapshotImpl.scala:272)
    at io.delta.standalone.internal.SnapshotManagement.createSnapshot(SnapshotManagement.scala:257)
    at io.delta.standalone.internal.SnapshotManagement.getSnapshotAtInit(SnapshotManagement.scala:224)
    at io.delta.standalone.internal.SnapshotManagement.$init$(SnapshotManagement.scala:37)
    at io.delta.standalone.internal.DeltaLogImpl.<init>(DeltaLogImpl.scala:47)
    at io.delta.standalone.internal.DeltaLogImpl$.apply(DeltaLogImpl.scala:263)
    at io.delta.standalone.internal.DeltaLogImpl$.forTable(DeltaLogImpl.scala:245)
    at io.delta.standalone.internal.DeltaLogImpl.forTable(DeltaLogImpl.scala)
    at io.delta.standalone.DeltaLog.forTable(DeltaLog.java:176)
    at com.facebook.presto.delta.DeltaClient.loadDeltaTableLog(DeltaClient.java:151)
    at com.facebook.presto.delta.DeltaClient.getTable(DeltaClient.java:79)
    at com.facebook.presto.delta.DeltaMetadata.getTableHandle(DeltaMetadata.java:220)
    at com.facebook.presto.delta.DeltaMetadata.getTableHandle(DeltaMetadata.java:73)
    at com.facebook.presto.spi.connector.classloader.ClassLoaderSafeConnectorMetadata.getTableHandle(ClassLoaderSafeConnectorMetadata.java:220)
    at com.facebook.presto.metadata.MetadataUtil.getOptionalTableHandle(MetadataUtil.java:180)
    at com.facebook.presto.metadata.MetadataManager$1.getTableHandle(MetadataManager.java:1331)
    at com.facebook.presto.util.MetadataUtil.lambda$getTableColumnMetadata$2(MetadataUtil.java:83)
    at com.facebook.presto.common.RuntimeStat.profileNanos(RuntimeStat.java:136)
    at com.facebook.presto.util.MetadataUtil.getTableColumnMetadata(MetadataUtil.java:81)
    at com.facebook.presto.util.MetadataUtil.getTableColumnsMetadata(MetadataUtil.java:54)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.visitTable(StatementAnalyzer.java:1282)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.visitTable(StatementAnalyzer.java:338)
    at com.facebook.presto.sql.tree.Table.accept(Table.java:53)
    at com.facebook.presto.sql.tree.AstVisitor.process(AstVisitor.java:27)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:352)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.analyzeFrom(StatementAnalyzer.java:2596)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:1615)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:338)
    at com.facebook.presto.sql.tree.QuerySpecification.accept(QuerySpecification.java:138)
    at com.facebook.presto.sql.tree.AstVisitor.process(AstVisitor.java:27)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:352)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:360)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:1116)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:338)
    at com.facebook.presto.sql.tree.Query.accept(Query.java:105)
    at com.facebook.presto.sql.tree.AstVisitor.process(AstVisitor.java:27)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:352)
    at com.facebook.presto.sql.analyzer.StatementAnalyzer.analyze(StatementAnalyzer.java:330)
    at com.facebook.presto.sql.analyzer.Analyzer.analyzeSemantic(Analyzer.java:117)
    at com.facebook.presto.sql.analyzer.BuiltInQueryAnalyzer.analyze(BuiltInQueryAnalyzer.java:93)
    at com.facebook.presto.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:203)
    at com.facebook.presto.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:107)
    at com.facebook.presto.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:955)
    at com.facebook.presto.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:167)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750) |
prestodbpresto | Presto JDBC driver returns zero error code and null SQL state in exceptions | Bug | When the Presto JDBC driver throws a SQLException on a method call, the value returned for the error code is zero and the state is null, which impedes an application trying to look at that level of detail. While DatabaseMetaData.getSQLStateType() returns 2, the state seems to always be null.
System.out.println(e.getErrorCode() + " " + e.getSQLState() + " " + e.getMessage());
Presto JDBC 0.283 (or earlier) and Presto server versions of similar vintage (or earlier). |
prestodbpresto | Issue with pushdown of subfields from a map with a map key of row type in an UNNEST query | Bug | In queries with CROSS JOIN UNNEST, when unnesting a map column whose key is of row type, the optimizer incorrectly pushes down the subfields of the key as if those subfields related to the value of the map, not the key. Consider this example:
CREATE TABLE test_pushdown_subfields(f map(row(a bigint, b varchar, c double), bigint));
SELECT k.a, k.b, k.c FROM test_pushdown_subfields CROSS JOIN UNNEST(f) AS t(k, v);
The optimizer will push down subfields as follows: f.c, f.a, f.b. This is problematic, as the subfields are pushed as if they related to the value of the map, which is not the case.
Your environment: Presto version used: 0.284-edge24 (1); data source and connector used: Hive.
Expected behavior: the optimizer should not push down any subfields, as currently there is no means to pass subfields of the map key to the reader.
Current behavior: subfields f.a, f.b, f.c will be pushed down to the reader. That is incorrect, as the type of the map's value is bigint, and the query will fail with the error "Primitive stream reader doesn't support subfields".
Possible solution: in PushdownSubfields, visitUnnest (L429C25-L429C36), add logic for maps that would not add any subfields for the map key.
Steps to reproduce: run the HiveQueryRunner and execute the commands below:
SET SESSION pushdown_dereference_enabled=true;
SET SESSION pushdown_subfields_enabled=true;
SET SESSION hive.pushdown_filter_enabled=true;
USE hive.tpch;
CREATE TABLE test_pushdown_subfields(f map(row(a bigint, b varchar, c double), bigint));
INSERT INTO test_pushdown_subfields VALUES (map_from_entries(array[row(row(1, 'b', cast(10 as double)), 1)]));
SELECT k.a, k.b, k.c FROM test_pushdown_subfields CROSS JOIN UNNEST(f) AS t(k, v);
The last query fails with the error: Error opening Hive split /var/folders/q3/hy2th48936n898n tl6w8hhm0000gn/t/prestotest6794201901000344010/hive_data/tpch/test_pushdown_subfields/20231026_103608_00041_nzgje_5fd6a307-be14-4f35-bde7-5d393071ccae (offset=0, length=551): Primitive type stream reader doesn't support subfields |
prestodbpresto | [native] Semantics of the field_names_in_json_cast_enabled session property not followed by json_format in all cases | Bug | Expected behavior: all the queries below were run with the following property set in the session from the onset. It looks as if the semantics of this session property are broken in C++ workers, but that isn't the case, as I have listed a query at the end that does succeed with this property set.
presto:tpch> SET SESSION field_names_in_json_cast_enabled=true;
Failing query: SELECT json_format(CAST(ROW(a + b) AS JSON)) FROM (VALUES (1, 2)) AS t(a, b);
 Output against C++ workers: _col0: [3] (1 row)
 Output against Java workers: _col0: [3] (1 row)
Failing query: SELECT json_format(CAST(ROW(1, ROW(9, a, ARRAY[null]), ROW(1, 2)) AS JSON)) FROM (VALUES 'a') t(a);
 Output against C++ workers: _col0: [1,[9,"a",[null]],[1,2]] (1 row)
 Output against Java workers: _col0: [1,[9,"a",[null]],[1,2]] (1 row)
Failing query: SELECT json_format(CAST(ROW(ROW(ROW(ROW(ROW(a, b), c), d), e), f) AS JSON)) FROM (VALUES (0, 1, 2, 3, NULL, ARRAY[5])) t(a, b, c, d, e, f);
 Output against C++ workers: _col0: [[[[[0,1],2],3],null],[5]] (1 row)
 Output against Java workers: _col0: [[[[[0,1],2],3],null],[5]] (1 row)
Passing query: SELECT json_format(CAST(ROW(1 + 2, CONCAT('a', 'b')) AS JSON));
 Output against C++ workers: _col0: [3,"ab"]
 Output against Java workers: _col0: [3,"ab"] |
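The two serialization modes the session property toggles can be sketched in Python. This is an illustration only (the helper name is invented, not Presto internals): with field names enabled a row is meant to serialize as a JSON object keyed by field names; without it, as a plain array of values.

```python
import json

def row_to_json(values, names=None):
    """Hypothetical sketch of ROW -> JSON casting. With `names` the row
    becomes a JSON object (field_names_in_json_cast_enabled=true);
    without, it becomes a JSON array (the legacy behavior)."""
    payload = dict(zip(names, values)) if names else list(values)
    return json.dumps(payload, separators=(",", ":"))

print(row_to_json([1, 2]))                    # [1,2]
print(row_to_json([1, 2], names=["a", "b"]))  # {"a":1,"b":2}
```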
prestodbpresto | Suboptimal plan for INSERT query with UNION ALL | Bug | Query: SELECT orderkey * 2, orderkey FROM (SELECT orderkey FROM orders UNION ALL SELECT orderkey FROM orders UNION ALL SELECT orderkey FROM orders). Plan:
Fragment 1 [ROUND_ROBIN]
    Output layout: [rows_39, fragments_40, commitcontext_41]
    Output partitioning: SINGLE; Stage Execution Strategy: UNGROUPED_EXECUTION
    RemoteSource[2,3] => [rows_39:bigint, fragments_40:varbinary, commitcontext_41:varbinary]
Fragment 2 [SOURCE]
    Output layout: [rows_45, fragments_46, commitcontext_47]
    Output partitioning: ROUND_ROBIN; Stage Execution Strategy: UNGROUPED_EXECUTION
    TableWriter[PlanNodeId 557] => [rows_45:bigint, fragments_46:varbinary, commitcontext_47:varbinary]
        orderkey := multiply (1:77); Statistics collected: 0
        LocalExchange[PlanNodeId 622][ROUND_ROBIN] => [multiply:bigint]
            ScanProject[PlanNodeId 0,344][table = TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpch, tableName=orders, analyzePartitionValues=Optional.empty}', layout='Optional[tpch.orders{}]'}, grouped = false, projectLocality = LOCAL] => [multiply:bigint]
                multiply := multiply(orderkey, BIGINT'2') (1:107)
                layout: tpch.orders; orderkey := orderkey:bigint:0:REGULAR (1:122)
Fragment 3 [SOURCE]
    Output layout: [rows_48, fragments_49, commitcontext_50]
    Output partitioning: ROUND_ROBIN; Stage Execution Strategy: UNGROUPED_EXECUTION
    TableWriter[PlanNodeId 558] => [rows_48:bigint, fragments_49:varbinary, commitcontext_50:varbinary]
        orderkey := multiply_38 (1:77); Statistics collected: 0
        LocalExchange[PlanNodeId 623][ROUND_ROBIN] => [multiply_38:bigint]
            ScanProject[PlanNodeId 3,345][table = TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpch, tableName=orders, analyzePartitionValues=Optional.empty}', layout='Optional[tpch.orders{}]'}, grouped = false, projectLocality = LOCAL] => [multiply_38:bigint]
                multiply_38 := multiply(orderkey_2, BIGINT'2') (1:107)
                layout: tpch.orders; orderkey_2 := orderkey:bigint:0:REGULAR (1:166)
Fragment 4 [SOURCE]
    Output layout: [rows_42, fragments_43, commitcontext_44]
    Output partitioning: SINGLE; Stage Execution Strategy: UNGROUPED_EXECUTION
    TableWriter[PlanNodeId 551] => [rows_42:bigint, fragments_43:varbinary, commitcontext_44:varbinary]
        orderkey := orderkey_20 (1:77); Statistics collected: 0
        LocalExchange[PlanNodeId 624][ROUND_ROBIN] => [orderkey_20:bigint]
            TableScan[PlanNodeId 11][table = TableHandle {connectorId='hive', connectorHandle='HiveTableHandle{schemaName=tpch, tableName=orders, analyzePartitionValues=Optional.empty}', layout='Optional[tpch.orders{}]'}, grouped = false] => [orderkey_20:bigint]
                layout: tpch.orders; orderkey_20 := orderkey:bigint:0:REGULAR (1:206)
Your environment: Presto version used: trunk; data source and connector used: Hive.
Expected behavior: all table writes are directly connected to the TableFinish stage.
Current behavior: there's an unnecessary round-robin stage in between.
Possible solution: run PushProjectionThroughUnion before SetFlatteningOptimizer.
Steps to reproduce: 1. EXPLAIN (TYPE DISTRIBUTED) on a query from the description. |
prestodbpresto | [native] Correlated subqueries fail with error "Scalar function name not registered: presto.default.fail" | Bug | A couple of queries with correlated subqueries are incorrectly failing against Velox. Expected behavior: the queries below should succeed. Velox output:
presto:tpch> SELECT (SELECT r.name FROM nation n, region r WHERE r.regionkey = n.regionkey AND n.nationkey = a) FROM (VALUES 1) t(a);
Query 20231020_152110_00003_smh7a FAILED [1 node, splits: 4 total, 0 done (0.00%); latency: client side 337ms, server side 296ms; 0 rows, 0B, 0 rows/s, 0B/s]
Query 20231020_152110_00003_smh7a failed: Scalar function name not registered: presto.default.fail, called with arguments: (INTEGER, VARCHAR).
presto:tpch> SELECT name FROM nation n WHERE 'AFRICA' = (SELECT name FROM region WHERE regionkey = n.regionkey);
Query 20231020_154504_00000_zyjhs FAILED [1 node, splits: 3 total, 2 done (66.67%); latency: client side 0.02s, server side 0.01s; 30 rows, 2.17KB, 22 rows/s, 1.61KB/s]
Query 20231020_154504_00000_zyjhs failed: Scalar function name not registered: presto.default.fail, called with arguments: (INTEGER, VARCHAR).
Java output:
presto:tiny> SELECT (SELECT r.name FROM nation n, region r WHERE r.regionkey = n.regionkey AND n.nationkey = a) FROM (VALUES 1) t(a);
 _col0: AMERICA (1 row)
presto:tiny> SELECT name FROM nation n WHERE 'AFRICA' = (SELECT name FROM region WHERE regionkey = n.regionkey);
 name: ETHIOPIA, ALGERIA, KENYA, MOROCCO, MOZAMBIQUE (5 rows) |
prestodbpresto | Add documentation for Hive views | Bug | Documentation: Hive views are defined in HiveQL and stored in the Hive metastore service. They are analyzed to allow read access to the data. The Hive connector includes support for reading Hive views, with three different modes: disabled, legacy, and experimental. Disabled: in this mode the connector doesn't support reading Hive views; any attempt to read a Hive view will result in an error. Legacy: in this mode the connector supports reading Hive views using some legacy methods. This might mean the connector translates the HiveQL in the view into its own query language, which could lead to inaccuracies if the connector's query language doesn't fully support all features of HiveQL. Experimental: in this mode the connector supports reading Hive views using some new, experimental methods. This might provide more accurate results than the legacy methods, but it could also be less stable or have other unforeseen issues. |
prestodbpresto | Potential data corruption with recoverable grouped execution enabled | Bug | Your environment: Presto version used: 0.285-SNAPSHOT; storage: HDFS/S3/GCS/local; data source and connector used: Hive; deployment: cloud or on-prem (local).
Expected behavior: when recovery happens, correct output table contents are expected.
Current behavior: incorrect table contents.
Possible solution: disable recoverable grouped execution to mitigate.
Steps to reproduce:
1. Failures are injected right before commit with some probability (diff 90f667b9e48d15ad2dce1189ca464e382be5b4bb0ee94a4e3ab5ce2ea0ebac31r296).
2. The test was modified to rely on injected failures instead of making workers unresponsive (diff b0e66057cedfe54a588d3d28a8c274dbeeeaf805218c73a96aac251e0442b1cbr370).
3. When failures occur, the output table sometimes contains incorrect data:
2023-10-20T11:50:33.272-0500 ERROR SplitRunner-9-200 com.facebook.presto.execution.executor.TaskExecutor Error processing Split 20231020_165028_00012_mkv4v.1.0.0.0-32 (start = 2.6082186029375E8, wall = 178 ms, cpu = 2 ms, wait = 1 ms, calls = 2): REMOTE_TASK_ERROR: This is an injected recoverable writer error
2023-10-20T11:50:34.339-0500 ERROR main com.facebook.presto.hive.TestHiveRecoverableExecution Query with recovery took 5716ms
java.lang.AssertionError: expected [15000] but found [13847]
    at org.testng.Assert.fail(Assert.java:110)
    at org.testng.Assert.failNotEquals(Assert.java:1413)
    at org.testng.Assert.assertEqualsImpl(Assert.java:149)
    at org.testng.Assert.assertEquals(Assert.java:131)
    at org.testng.Assert.assertEquals(Assert.java:655)
    at org.testng.Assert.assertEquals(Assert.java:665)
    at com.facebook.presto.hive.TestHiveRecoverableExecution.testRecoverableGroupedExecution(TestHiveRecoverableExecution.java:400)
    at com.facebook.presto.hive.TestHiveRecoverableExecution.testInsertBucketedTable(TestHiveRecoverableExecution.java:197)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:135)
    at org.testng.internal.invokers.TestInvoker.invokeMethod(TestInvoker.java:673)
    at org.testng.internal.invokers.TestInvoker.invokeTestMethod(TestInvoker.java:220)
    at org.testng.internal.invokers.MethodRunner.runInSequence(MethodRunner.java:50)
    at org.testng.internal.invokers.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:945)
    at org.testng.internal.invokers.TestInvoker.invokeTestMethods(TestInvoker.java:193)
    at org.testng.internal.invokers.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146)
    at org.testng.internal.invokers.TestMethodWorker.run(TestMethodWorker.java:128)
    at java.util.ArrayList.forEach(ArrayList.java:1257)
    at org.testng.TestRunner.privateRun(TestRunner.java:808)
    at org.testng.TestRunner.run(TestRunner.java:603)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:429)
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:423)
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:383)
    at org.testng.SuiteRunner.run(SuiteRunner.java:326)
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:95)
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1249)
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1169)
    at org.testng.TestNG.runSuites(TestNG.java:1092)
    at org.testng.TestNG.run(TestNG.java:1060)
    at com.intellij.rt.testng.IDEARemoteTestNG.run(IDEARemoteTestNG.java:66)
    at com.intellij.rt.testng.RemoteTestNGStarter.main(RemoteTestNGStarter.java:105)
Context: discovered when implementing recoverable grouped execution support in Prestissimo. |
prestodbpresto | [native] TRY not handled correctly in case of overflow / divide-by-zero errors | Bug | Against Velox, the semantics of TRY are not handled correctly for overflow and divide-by-zero types of errors. Velox output:
presto:tpch> SELECT TRY(2/0);
Query 20231018_051948_00002_cvwm6 failed: Scalar function name not registered: presto.default.fail, called with arguments: (INTEGER, JSON).
presto:tpch> SELECT TRY(CAST(32768 AS SMALLINT));
Query 20231020_024724_00001_acjwd failed: Scalar function name not registered: presto.default.fail, called with arguments: (INTEGER, JSON).
Java output:
presto:tpch> SELECT TRY(2/0);
 _col0: NULL (1 row)
presto:tpch> SELECT TRY(CAST(32768 AS SMALLINT));
 _col0: NULL (1 row) |
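The expected TRY semantics (the Java worker's behavior above) can be sketched in Python. This is an illustration, not Presto's implementation; the error classes swallowed and the cast helper are assumptions made for the sketch.

```python
def presto_try(fn):
    """Rough sketch of TRY: evaluation errors such as division by zero or
    a numeric overflow yield NULL (None) instead of failing the query.
    Assumption: only the error classes below are swallowed."""
    try:
        return fn()
    except (ZeroDivisionError, OverflowError, ValueError):
        return None

def cast_smallint(v):
    """Hypothetical helper mimicking CAST(v AS SMALLINT) range checking."""
    if not -32768 <= v <= 32767:
        raise ValueError(f"value {v} out of range for SMALLINT")
    return v

print(presto_try(lambda: 2 // 0))                # None, like SELECT TRY(2/0)
print(presto_try(lambda: cast_smallint(32768)))  # None, like TRY(CAST(32768 AS SMALLINT))
```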
prestodbpresto | [native] Scalar subquery does not fail with error "Scalar sub-query has returned multiple rows" against Velox | Bug | Expected behavior: the scalar subquery should fail with the error "Scalar sub-query has returned multiple rows", but it does not against Velox.
String subqueryReturnedTooManyRows = "Scalar sub-query has returned multiple rows";
assertQueryFails("SELECT (SELECT 2 FROM (VALUES 3, 4) WHERE a = 1) FROM (VALUES 1) t(a)", subqueryReturnedTooManyRows);
The query returns the following error: VeloxUserError: Scalar function name not registered: presto.default.fail, called with arguments: (INTEGER, VARCHAR).
The query is supposed to fail because the subquery SELECT 2 FROM (VALUES 3, 4) WHERE a = 1 will return more than 1 row, so it should fail with the error "Scalar sub-query has returned multiple rows", but it currently fails with "Scalar function name not registered: presto.default.fail, called with arguments: (INTEGER, VARCHAR)". |
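The enforcement the engine is expected to apply to a scalar subquery can be sketched in Python. This is an illustration of the SQL rule only (the function name is invented, not a Presto or Velox API): zero rows yield NULL, one row yields its value, and more than one row must raise the documented error.

```python
def scalar_subquery_value(rows):
    """Hypothetical sketch of scalar-subquery evaluation over the rows the
    subquery produced: NULL for zero rows, the value for one row, and the
    documented error for more than one row."""
    if len(rows) > 1:
        raise ValueError("Scalar sub-query has returned multiple rows")
    return rows[0] if rows else None

# For a = 1, SELECT 2 FROM (VALUES 3, 4) WHERE a = 1 yields two rows,
# so the engine should raise rather than pick one:
try:
    scalar_subquery_value([2, 2])
except ValueError as e:
    print(e)  # Scalar sub-query has returned multiple rows
```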
prestodbpresto | [native] Unify async data cache property prefixes | Bug | Currently we have the following properties: async-data-cache-enabled, async-cache-ssd-gb, async-cache-ssd-checkpoint-gb, async-cache-ssd-path, async-cache-ssd-disable-file-cow. Some of them start with async-data-cache and some with async-cache. It would be nice to unify them to one prefix. |
prestodbpresto | [native] Support for create_hll is missing in Velox | Bug | Expected behavior / current behavior:
MaterializedResult actual = computeActual("SELECT cardinality(merge(c)) FROM (SELECT create_hll(custkey) c FROM orders UNION ALL SELECT empty_approx_set())");
MaterializedResult expected = resultBuilder(getSession(), BIGINT).row(1002L).build();
assertEquals(actual.getMaterializedRows(), expected.getMaterializedRows());
The query currently gets this error if run against a C++ worker: VeloxUserError: Scalar function name not registered: presto.default.create_hll, called with arguments: (BIGINT). |
prestodbpresto | Update com.fasterxml.jackson.core to the latest version | Bug | presto-hudi and presto-iceberg. |
prestodbpresto | Incorrect result in regexp_like | Bug | regexp_like behaves strangely: I'd expect true in all the following cases, but one of them returns false. This is using joni for the regex library. cc: tdcmeehan, aditi-pandit, amitkdutta, zacw7, kaikalur
presto:whatsapp_closed> SELECT regexp_like('a b c d e', 'a za z0 9 a za z0 9');
 _col0: true
presto> SELECT regexp_like('a b c d e', 'a za z0 9 a za z0 9');
 _col0: false
presto> SELECT regexp_like('a b c d e', 'a za z0 9 a za z0 9');
 _col0: true |
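The exact patterns in the report lost their punctuation, but they are clearly built from `[a-zA-Z0-9]` character classes. The sketch below uses a representative dotted-identifier pattern (an assumption, not the original pattern) with Python's `re` to show the result one would expect from regexp_like, which performs unanchored matching like `re.search`.

```python
import re

# Representative pattern (assumed, not the original from the report):
# one or more alphanumeric segments separated by dots.
pattern = r"([a-zA-Z0-9]+\.)*[a-zA-Z0-9]+"

# regexp_like is true if the pattern matches anywhere in the string,
# which corresponds to re.search, not re.fullmatch.
print(bool(re.search(pattern, "a.b.c.d.e")))  # True
```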
prestodbpresto | TestDistributedSpilledQueriesWithTempStorage > AbstractTestQueries.testRowNumberLimit CI failure | Bug | CI job error: TestDistributedSpilledQueriesWithTempStorage>AbstractTestQueries.testRowNumberLimit:1722->AbstractTestQueryFramework.computeActual:127->AbstractTestQueryFramework.computeActual:132 |
prestodbpresto | Update to Hadoop 3.2.x | Bug | com.facebook.presto.hadoop:hadoop-apache2:3.2.0-1. The hadoop-apache2 dependency is used when you want your Java application to interact with Hadoop services. This is particularly useful when you're building applications that need to process large amounts of data using Hadoop's distributed processing. |
prestodbpresto | Update AWS SDK v1 to 1.12.493 | Bug | The AWS SDK (Software Development Kit) is a set of tools provided by Amazon Web Services (AWS) to interact with its various services. It includes libraries, sample code, documentation, and other resources to help developers build software applications that use AWS services like Amazon S3, Amazon EC2, DynamoDB, and more. The AWS SDK for Java 1.12.261 is an old version of the SDK, and it's possible that it may have known vulnerabilities that have been addressed in later versions. However, as a good practice, it's recommended to keep your dependencies, including the AWS SDK, up to date. AWS regularly releases updates to their SDKs that include not just vulnerability patches but also new features, performance improvements, and bug fixes. So even if version 1.12.261 doesn't have a known vulnerability, it's still a good idea to update to a more recent version if possible. |
prestodbpresto | CircleCI job "linux build and unit test" fails with "ld terminated with signal 9 [Killed]" | Bug | 758 763 link cxx executable presto cpp main test presto server test fail presto cpp main test presto server test opt rh gcc toolset 9 root usr bin c mavx2 mfma mavx mf16c mlzcnt std c 17 mbmi2 werror wno nullability completeness wno deprecate declaration wreorder g velox velox connector hive cmakefiles velox hive connector dir filehandle cpp o velox velox connector hive cmakefiles velox hive connector dir hiveconfig cpp o velox velox connector hive cmakefiles velox hive connector dir hiveconnector cpp o velox velox connector hive cmakefiles velox hive connector dir hivedatasink cpp o velox velox connector hive cmakefiles velox hive connector dir hivedatasource cpp o velox velox connector hive cmakefiles velox hive connector dir hivepartitionutil cpp o velox velox connector hive cmakefiles velox hive connector dir partitionidgenerator cpp o velox velox connector hive cmakefiles velox hive connector dir tablehandle cpp o velox velox connector tpch cmakefiles velox tpch connector dir tpchconnector cpp o presto cpp main test cmakefile presto server test dir announcertest cpp o presto cpp main test cmakefile presto server test dir coordinatordiscoverertest cpp o presto cpp main test cmakefile presto server test dir httpserverwrapper cpp o presto cpp main test cmakefile presto server test dir mutableconfig cpp o presto cpp main test cmakefile presto server test dir prestoexchangesourcetest cpp o presto cpp main test cmakefile presto server test dir prestotasktest cpp o presto cpp main test cmakefile presto server test dir querycontextcachet cpp o presto cpp main test cmakefile presto server test dir serveroperationt cpp o presto cpp main test cmakefile presto server test dir taskmanagertest cpp o presto cpp main test cmakefile presto server test dir querycontextmanagert cpp o o presto cpp main test presto server test wl rpath usr local lib64 usr local lib velox velox
[linker command elided: it links the presto_server executable against dozens of Velox and Presto static libraries (libvelox_exec, the dwio dwrf/parquet readers and writers, libvelox_functions_prestosql, the hive/s3fs/hdfs/gcs connectors, libvelox_presto_serializer, libpresto_server, libpresto_protocol, libpresto_http, libpresto_operators, libpresto_thrift, and others) plus system shared libraries (folly, fbthrift, proxygen, wangle, fizz, boost, re2, protobuf, arrow/parquet, duckdb, antlr4, snappy, lz4, zstd, glog, gflags, OpenSSL, sodium, and others)] collect2: fatal error:
ld terminated with signal 9 [Killed]. compilation terminated.
[759/763] Linking CXX executable presto_cpp/main/types/tests/presto_expressions_test
[760/763] Linking CXX executable presto_cpp/main/types/tests/presto_to_velox_split_test
[761/763] Linking CXX executable presto_cpp/main/tests/presto_query_runner_test
[762/763] Linking CXX executable presto_cpp/main/operators/tests/presto_operators_test
[763/763] Linking CXX executable presto_cpp/main/presto_server
ninja: build stopped: subcommand failed. Exited with code exit status 1. This is quite likely to be caused by a memory shortage in your environment. Expected Behavior: / Current Behavior: / Possible Solution: increase physical or virtual memory. Steps to Reproduce: just submit a PR.
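The row above reports an `ld` process killed with signal 9 during linking, which the log attributes to memory shortage. As a hypothetical helper, not part of Presto's build (the `/proc/meminfo` parsing, the per-link memory estimate, and both function names are assumptions), one could check Linux's `MemAvailable` before choosing how many link jobs to run concurrently:

```python
def mem_available_kb(meminfo="/proc/meminfo"):
    """Parse MemAvailable (in kB) from a Linux /proc/meminfo-style file.

    Returns None when the field or the file is missing (e.g. non-Linux hosts).
    """
    try:
        with open(meminfo) as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    # Line looks like: "MemAvailable:   16264560 kB"
                    return int(line.split()[1])
    except OSError:
        pass
    return None


def suggested_link_jobs(avail_kb, kb_per_link=8 * 1024 * 1024):
    """Pick a conservative parallelism, assuming ~8 GiB per heavy C++ link."""
    if avail_kb is None:
        return 1  # unknown memory: be conservative
    return max(1, avail_kb // kb_per_link)
```

The result could then be passed to the build as `ninja -j N`, so that fewer `ld` processes run at once during the final link steps.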
prestodbpresto | Document LATERAL join | Bug | Your Environment: opening this issue in response to an email from yzhang1991 noting that Presto supports LATERAL join based on issue 5879; however, this ability is not documented. Expected Behavior: the Presto documentation should include documentation for LATERAL join. Current Behavior: the Presto documentation does not include documentation for LATERAL join. Possible Solution: write the documentation, or identify why not to. Steps to Reproduce: yzhang1991, in email, in addition to the finding also provided the following: "I created a simple test to verify it works like I imagined."

create table lateraltest (id bigint, created_on date, title varchar);
insert into lateraltest values (1, date '2013-09-30', 'vlad mihalcea blog'), (2, date '2017-01-22', 'hypersistence');

-- no lateral join
select b.id as blog_id,
       date_diff('year', b.created_on, current_date) as age_in_years,
       date_add('year', date_diff('year', b.created_on, current_date) + 1, b.created_on) as next_anniversary,
       date_diff('day', current_date, date_add('year', date_diff('year', b.created_on, current_date) + 1, b.created_on)) as days_to_next_anniversary
from lateraltest b
order by blog_id;

-- with lateral join
select b.id as blog_id,
       age_in_years,
       date_add('year', age_in_years + 1, b.created_on) as next_anniversary,
       date_diff('day', current_date, date_add('year', age_in_years + 1, b.created_on)) as days_to_next_anniversary
from lateraltest b
cross join lateral (select date_diff('year', b.created_on, current_date) as age_in_years) as t
order by blog_id;

Screenshots (if appropriate): none. Context: it was brought to my attention that this Presto feature appears to be undocumented. I am opening this issue to ask the Presto community whether LATERAL join is supported in Presto sufficiently that it should be documented, and to ask if anyone is willing to work on it.
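The point of LATERAL in the query above is that the right side of the join may reference columns of the current left row, which an ordinary cross join cannot do. A minimal sketch of those semantics in plain Python (not Presto code; the function and field names are illustrative assumptions):

```python
def cross_join_lateral(left_rows, lateral_subquery):
    """Evaluate a CROSS JOIN LATERAL: for each left row, run the subquery
    with that row in scope and pair the row with every subquery result."""
    return [
        {**row, **extra}
        for row in left_rows
        for extra in lateral_subquery(row)
    ]


blogs = [{"id": 1, "created_year": 2013}, {"id": 2, "created_year": 2017}]

# The subquery computes age_in_years from the *current* left row, which is
# exactly what the lateral keyword permits in SQL:
rows = cross_join_lateral(
    blogs, lambda b: [{"age_in_years": 2023 - b["created_year"]}]
)
```

Each left row is extended by the subquery's derived columns, so later expressions (next_anniversary, days_to_next_anniversary in the issue's query) can reuse `age_in_years` instead of repeating the computation.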
prestodbpresto | Type mismatch between Presto TaskOutputOperator and Velox's Exchange operator | Bug | Your Environment: Prestissimo worker. Steps to Reproduce: 1. set session optimize repartitione = true; 2. select max(cast(2 as int) + 1) num_shard from customer_bucket_partition. Stacktrace:

VeloxRuntimeError: Encoding to Type mismatch: INTEGER, expected IntArray, got LongArray
# 0  facebook::velox::VeloxException::VeloxException(...)
# 1  facebook::velox::detail::veloxCheckFail(...)
# 2  facebook::velox::serializer::presto::(anonymous namespace)::readColumns(...)
# 3  facebook::velox::serializer::presto::PrestoVectorSerde::deserialize(...)
# 4  facebook::velox::exec::Exchange::getOutput()
# 5  facebook::velox::exec::Driver::runInternal(...)
# 6  facebook::velox::exec::Driver::run(...)
# 7  folly::detail::function::FunctionTraits<...>::callSmall(...)
# 8  folly::ThreadPoolExecutor::runTask(...)
# 9  folly::CPUThreadPoolExecutor::threadRun(...)
# 10 folly::detail::function::FunctionTraits<...>::callSmall(...)
# 11 execute_native_thread_routine
# 12 start_thread
# 13 clone3

The leaf stage has one task, and it is executed on the coordinator, so it uses Presto's TaskOutputOperator. Its downstream task is on Prestissimo, and that task throws the type mismatch error. Screenshots (if appropriate):
prestodbpresto | array_min/array_max behavior with NaN and NULL | Bug | In Presto, array_max and array_min both return NaN if at least one element in the array is NaN:

select array_max(array[4.0, nan(), null]); -- nan
select array_min(array[4.0, nan(), null]); -- nan

Wondering whether this behavior is correct or not. cc @kaikalur @prithvip @mbasmanova
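The subtlety behind this question is that NaN compares false against everything, so a naive running max or min is order-dependent, and an engine that wants deterministic NaN propagation has to special-case it. A small Python illustration (not Presto's implementation; `array_max_nan_propagating` is a hypothetical name):

```python
import math

nan = float("nan")

# Every ordered comparison with NaN is False, so a reduce-style max()
# keeps whichever operand happens to be the current accumulator:
assert max([4.0, nan]) == 4.0        # nan > 4.0 is False, so 4.0 survives
assert math.isnan(max([nan, 4.0]))   # 4.0 > nan is False, so nan survives


def array_max_nan_propagating(values):
    """Return NaN if any non-null element is NaN, else the plain max,
    ignoring nulls (mimicking the behavior the issue reports)."""
    present = [v for v in values if v is not None]
    if any(isinstance(v, float) and math.isnan(v) for v in present):
        return nan
    return max(present)
```

Checking for NaN up front makes the result independent of element order, which is one defensible reading of what array_max/array_min should do; returning null when any element is null would be another.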
prestodbpresto | Type mismatch between Presto OptimizedPartitionedOutputOperator and Velox's Exchange operator | Bug | Your Environment: Prestissimo worker. Steps to Reproduce: 1. set session optimize repartitione = true; 2. select ds, max(cast(2 as int) + 1) num_shard from customer_bucket_partition group by 1. Stacktrace:

VeloxRuntimeError: Encoding to Type mismatch: INTEGER, expected IntArray, got LongArray
# 0  facebook::velox::VeloxException::VeloxException(...)
# 1  facebook::velox::detail::veloxCheckFail(...)
# 2  facebook::velox::serializer::presto::(anonymous namespace)::readColumns(...)
# 3  facebook::velox::serializer::presto::PrestoVectorSerde::deserialize(...)
# 4  facebook::velox::exec::Exchange::getOutput()
# 5  facebook::velox::exec::Driver::runInternal(...)
# 6  facebook::velox::exec::Driver::run(...)
# 7  folly::detail::function::FunctionTraits<...>::callSmall(...)
# 8  folly::ThreadPoolExecutor::runTask(...)
# 9  folly::CPUThreadPoolExecutor::threadRun(...)
# 10 folly::detail::function::FunctionTraits<...>::callSmall(...)
# 11 execute_native_thread_routine
# 12 start_thread
# 13 clone3

The leaf stage has one task, and it is executed on the coordinator, so it uses Presto's OptimizedPartitionedOutputOperator. Its downstream task is on Prestissimo, and that task throws the type mismatch error. Screenshots (if appropriate): similar issue with TaskOutputOperator.
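Both of these reports boil down to the same failure mode: the writer serializes an INTEGER column at one element width while the reader deserializes it at another, so the byte count no longer matches the expected array type. A hypothetical flat-buffer sketch of that mismatch in Python's `struct` module (not Velox's PrestoVectorSerde; function names are illustrative):

```python
import struct


def serialize_column(values, fmt):
    """Pack a column of integers; fmt is '<i' (32-bit) or '<q' (64-bit)."""
    return b"".join(struct.pack(fmt, v) for v in values)


def deserialize_column(buf, fmt):
    """Unpack a column, checking that the buffer matches the expected width."""
    width = struct.calcsize(fmt)
    if len(buf) % width != 0:
        raise TypeError(
            f"type mismatch: {len(buf)} bytes is not a multiple of {width}"
        )
    return [
        struct.unpack_from(fmt, buf, off)[0]
        for off in range(0, len(buf), width)
    ]


# Writer emits the column as 64-bit longs; a reader expecting 32-bit ints
# happens to see a compatible byte count and silently misreads the values:
wire = serialize_column([1, 2], "<q")
as_longs = deserialize_column(wire, "<q")  # round-trips correctly: [1, 2]
as_ints = deserialize_column(wire, "<i")   # misread as [1, 0, 2, 0]
```

In the sketch the width check only catches mismatches whose sizes do not divide evenly; Velox's serializer goes further and validates the declared encoding against the column type, which is the check that produced the "expected IntArray, got LongArray" error above.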