HP Vertica Analytics Platform 7.0.x SQL Reference Manual
SQL Reference Manual
HP Vertica Analytic Database
Software Version: 7.0.x
Document Release Date: 2/24/2014

Legal Notices

Warranty
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice.

Restricted Rights Legend
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice
© Copyright 2006 - 2014 Hewlett-Packard Development Company, L.P.

Trademark Notices
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.

Contents
SQL Overview
System Limits
SQL Language Elements
SQL Data Types
SQL Functions
HP Vertica Meta-Functions
SQL Statements
HP Vertica System Tables
Appendix: Compatibility with Other RDBMS
SQL Overview

An abbreviation for Structured Query Language, SQL is a widely used, industry-standard data definition and data manipulation language for relational databases.

Note: In HP Vertica, use a semicolon to end a statement or to combine multiple statements on one line.

HP Vertica Support for ANSI SQL Standards
HP Vertica SQL supports a subset of ANSI SQL-99. See BNF Grammar for SQL-99.

Support for Historical Queries
Unlike most databases, the DELETE command in HP Vertica does not delete data; it marks records as deleted. The UPDATE command performs an INSERT and a DELETE. This behavior is necessary for historical queries. See Historical (Snapshot) Queries in the Programmer's Guide.

Joins
HP Vertica supports typical data warehousing query joins. For details, see Joins in the Programmer's Guide.
HP Vertica also provides the INTERPOLATE predicate, which allows for a special type of join. The event series join is an HP Vertica SQL extension that lets you analyze two event series when their measurement intervals do not align precisely, such as when timestamps do not match. These joins provide a natural and efficient way to query misaligned event data directly, rather than having to normalize the series to the same measurement interval. See Event Series Joins in the Programmer's Guide for details.

Transactions
Session-scoped isolation levels determine transaction characteristics for transactions within a specific user session. You set them through the SET SESSION CHARACTERISTICS command. Specifically, they determine what data a transaction can access when other transactions are running concurrently. See Transactions in the Concepts Guide.
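For example (an illustrative sketch only; see the SET SESSION CHARACTERISTICS statement later in this manual for the exact syntax and the supported transaction modes), the following command sets the isolation level for the current session:

=> SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;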
System Limits

This section describes the system limits on the size and number of objects in an HP Vertica database. In most cases, computer memory and disk drive are the limiting factors.

Item: Limit
Number of nodes: Maximum 128 (without HP Vertica assistance).
Database size: Approximates the number of files times the file size on a platform, depending on the maximum disk configuration.
Table size: 2^64 rows per node, or 2^63 bytes per column, whichever is smaller.
Row size: 32 MB. The row size is approximately the sum of its maximum column sizes, where, for example, a VARCHAR(80) has a maximum size of 80 bytes.
Key size: Limited only by row size.
Number of tables/projections per database: Limited by physical RAM, as the catalog must fit in memory.
Number of concurrent connections per node: Default of 50, limited by physical RAM (or threads per process), typically 1024.
Number of concurrent connections per cluster: Limited by physical RAM of a single node (or threads per process), typically 1024.
Number of columns per table: 1600.
Number of rows per load: 2^63.
Number of partitions: 1024. While HP Vertica supports a maximum of 1024 partitions, few, if any, organizations will need to approach that maximum. Fewer partitions are likely to meet your business needs, while also ensuring maximum performance. Many customers, for example, partition their data by month, bringing their partition count to 12. HP Vertica recommends you keep the number of partitions between 10 and 20 to achieve excellent performance.
Length of a fixed-length column: 65000 bytes.
Length of a variable-length column: 65000 bytes.
Length of basic names: 128 bytes. Basic names include table names, column names, and so on.
Query length: No limit.
Depth of nesting subqueries: Unlimited in FROM, WHERE, or HAVING clauses.

SQL Language Elements

This chapter presents detailed descriptions of the language elements and conventions of HP Vertica SQL.

Keywords and Reserved Words

Keywords are words that have a specific meaning in the SQL language. Although SQL is not case-sensitive with respect to keywords, they are generally shown in uppercase letters throughout this documentation for readability purposes.
Some SQL keywords are also reserved words that cannot be used in an identifier unless enclosed in double quote (") characters.
Some unreserved keywords can be used in statements by preceding them with AS. For example, SOURCE is a keyword, but is not reserved, and you can use it as follows:

VMART=> select my_node AS SOURCE from nodes;

Keywords
Keywords are words that are specially handled by the grammar. Every SQL statement contains one or more keywords.
Begins with   Keyword
A   ABORT, ABSOLUTE, ACCESS, ACCESRANK, ACCOUNT, ACTION, ADD, ADMIN, AFTER, AGGREGATE, ALL, ALSO, ALTER, ANALYSE, ANALYZE, AND, ANY, ARRAY, AS, ASC, ASSERTION, ASSIGNMENT, AT, AUTHORIZATION, AUTO, AUTO_INCREMENT, AVAILABLE
B   BACKWARD, BEFORE, BEGIN, BETWEEN, BIGINT, BINARY, BIT, BLOCK_DICT, BLOCKDICT_COMP, BOOLEAN, BOTH, BY, BYTEA, BZIP
C   CACHE, CALLED, CASCADE, CASE, CAST, CATALOGPATH, CHAIN, CHAR, CHAR_LENGTH, CHARACTER, CHARACTER_LENGTH, CHARACTERISTICS, CHARACTERS, CHECK, CHECKPOINT, CLASS, CLOSE, CLUSTER, COLLATE, COLUMN, COLUMNS_COUNT, COMMENT, COMMIT, COMMITTED, COMMONDELTA_COMP, CONNECT, CONSTRAINT, CONSTRAINTS, COPY, CORRELATION, CREATE, CREATEDB, CREATEUSER, CROSS, CSV, CURRENT, CURRENT_DATABASE, CURRENT_DATE, CURRENT_SCHEMA, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER, CURSOR, CYCLE
D   DATA, DATABASE, DATAPATH, DATE, DATEDIFF, DATETIME, DAY, DEALLOCATE, DEC, DECIMAL, DECLARE, DECODE, DEFAULT, DEFAULTS, DEFERRABLE, DEFERRED, DEFINE, DEFINER, DELETE, DELIMITER, DELIMITERS, DELTARANGE_COMP, DELTARANGE_COMP_SP, DELTAVAL, DESC, DETERMINES, DIRECT, DIRECTCOLS, DIRECTGROUPED, DIRECTPROJ, DISABLE, DISCONNECT, DISTINCT, DISTVALINDEX, DO, DOMAIN, DOUBLE, DROP, DURABLE
E   EACH, ELSE, ENABLE, ENABLED, ENCLOSED, ENCODED, ENCODING, ENCRYPTED, END, ENFORCELENGTH, EPHEMERAL, EPOCH, ERROR, ESCAPE, EVENT, EVENTS, EXCEPT, EXCEPTIONS, EXCLUDE, EXCLUDING, EXCLUSIVE, EXECUTE, EXISTS, EXPIRE, EXPLAIN, EXPORT, EXTERNAL, EXTRACT
F   FAILED_LOGIN_ATTEMPTS, FALSE, FETCH, FILLER, FIRST, FLOAT, FOLLOWING, FOR, FORCE, FOREIGN, FORMAT, FORWARD, FREEZE, FROM, FULL, FUNCTION
G   GCDDELTA, GLOBAL, GRANT, GROUP, GROUPED, GZIP
H   HANDLER, HASH, HAVING, HOLD, HOSTNAME, HOUR, HOURS
I   IDENTIFIED, IDENTITY, IF, IGNORE, ILIKE, ILIKEB, IMMEDIATE, IMMUTABLE, IMPLICIT, IN, INCLUDING, INCREMENT, INDEX, INHERITS, INITIALLY, INNER, INOUT, INPUT, INSENSITIVE, INSERT, INSTEAD, INT, INTEGER, INTERPOLATE, INTERSECT, INTERVAL, INTERVALYM, INTO, INVOKER, IS, ISNULL, ISOLATION
J   JOIN
K   KEY, KSAFE
L   LANCOMPILER, LANGUAGE, LARGE, LAST, LATEST, LEADING, LEFT, LESS, LEVEL, LIBRARY, LIKE, LIKEB, LIMIT, LISTEN, LOAD, LOCAL, LOCALTIME, LOCALTIMESTAMP, LOCATION, LOCK
M   MANAGED, MATCH, MAXCONCURRENCY, MAXMEMORYSIZE, MAXVALUE, MEMORYCAP, MEMORYSIZE, MERGE, MERGEOUT, MICROSECONDS, MILLISECONDS, MINUTE, MINUTES, MINVALUE, MODE, MONEY, MONTH, MOVE, MOVEOUT
N   NAME, NATIONAL, NATIVE, NATURAL, NCHAR, NEW, NEXT, NO, NOCREATEDB, NOCREATEUSER, NODE, NODES, NONE, NOT, NOTHING, NOTIFY, NOTNULL, NOWAIT, NULL, NULLCOLS, NULLS, NULLSEQUAL, NULLIF, NUMBER, NUMERIC
O   OBJECT, OCTETS, OF, OFF, OFFSET, OIDS, OLD, ON, ONLY, OPERATOR, OPTION, OR, ORDER, OTHERS, OUT, OUTER, OVER, OVERLAPS, OVERLAY, OWNER
P   PARTIAL, PARTITION, PASSWORD, PASSWORD_GRACE_TIME, PASSWORD_LIFE_TIME, PASSWORD_LOCK_TIME, PASSWORD_MAX_LENGTH, PASSWORD_MIN_DIGITS, PASSWORD_MIN_LENGTH, PASSWORD_MIN_LETTERS, PASSWORD_MIN_LOWERCASE_LETTERS, PASSWORD_MIN_SYMBOLS, PASSWORD_MIN_UPPERCASE_LETTERS, PASSWORD_REUSE_MAX, PASSWORD_REUSE_TIME, PATTERN, PERCENT, PERMANENT, PINNED, PLACING, PLANNEDCONCURRENCY, POOL, POSITION, PRECEDING, PRECISION, PREPARE, PRESERVE, PREVIOUS, PRIMARY, PRIOR, PRIORITY, PRIVILEGES, PROCEDURAL, PROCEDURE, PROFILE, PROJECTION
Q   QUEUETIMEOUT, QUOTE
R   RANGE, RAW, READ, REAL, RECHECK, RECORD, RECOVER, REFERENCES, REFRESH, REINDEX, REJECTED, REJECTMAX, RELATIVE, RELEASE, RENAME, REPEATABLE, REPLACE, RESET, RESOURCE, RESTART, RESTRICT, RESULTS, RETURN, RETURNREJECTED, REVOKE, RIGHT, RLE, ROLE, ROLES, ROLLBACK, ROW, ROWS, RULE, RUNTIMECAP
S   SAMPLE, SAVEPOINT, SCHEMA, SCROLL, SECOND, SECONDS, SECURITY, SEGMENTED, SELECT, SEQUENCE, SERIALIZABLE, SESSION, SESSION_USER, SET, SETOF, SHARE, SHOW, SIMILAR, SIMPLE, SINGLEINITIATOR, SITE, SITES, SKIP, SMALLDATETIME, SMALLINT, SOME, SOURCE, SPLIT, STABLE, START, STATEMENT, STATISTICS, STDERR, STDIN, STDOUT, STORAGE, STREAM, STRENGTH, STRICT, SUBSTRING, SYSDATE, SYSID, SYSTEM
T   TABLE, TABLESPACE, TEMP, TEMPLATE, TEMPORARY, TEMPSPACECAP, TERMINATOR, THAN, THEN, TIES, TIME, TIMESERIES, TIMESTAMP, TIMESTAMPADD, TIMESTAMPDIFF, TIMESTAMPTZ, TIMETZ, TIMEZONE, TINYINT, TO, TOAST, TRAILING, TRANSACTION, TRANSFORM, TREAT, TRICKLE, TRIGGER, TRIM, TRUE, TRUNCATE, TRUSTED, TUNING, TYPE
U   UNBOUNDED, UNCOMMITTED, UNCOMPRESSED, UNENCRYPTED, UNION, UNIQUE, UNKNOWN, UNLIMITED, UNLISTEN, UNLOCK, UNSEGMENTED, UNTIL, UPDATE, USAGE, USER, USING
V   VACUUM, VALIDATOR, VALINDEX, VALUE, VALUES, VARBINARY, VARCHAR, VARCHAR2, VARYING, VERBOSE, VERTICA, VIEW, VOLATILE
W   WAIT, WHEN, WHERE, WINDOW, WITH, WITHIN, WITHOUT, WORK, WRITE
Y   YEAR
Z   ZONE

Reserved Words
Many SQL keywords are also reserved words, but a reserved word is not necessarily a keyword. For example, a reserved word might be reserved for other or future use. In HP Vertica, reserved words can be used anywhere identifiers can be used, as long as you double-quote them.

Begins with   Reserved Word
A   ALL, ANALYSE, ANALYZE, AND, ANY, ARRAY, AS, ASC
B   BINARY, BOTH
C   CASE, CAST, CHECK, COLUMN, CONSTRAINT, CORRELATION, CREATE, CURRENT_DATABASE, CURRENT_DATE, CURRENT_SCHEMA, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER
D   DEFAULT, DEFERRABLE, DESC, DISTINCT, DO
E   ELSE, ENCODED, END, EXCEPT
F   FALSE, FOR, FOREIGN, FROM
G   GRANT, GROUP, GROUPED
H   HAVING
I   IN, INITIALLY, INTERSECT, INTERVAL, INTERVALYM, INTO
J   JOIN
K   KSAFE
L   LEADING, LIMIT, LOCALTIME, LOCALTIMESTAMP
M   MATCH
N   NEW, NOT, NULL, NULLSEQUAL
O   OFF, OFFSET, OLD, ON, ONLY, OR, ORDER
P   PINNED, PLACING, PRIMARY, PROJECTION
R   REFERENCES
S   SCHEMA, SEGMENTED, SELECT, SESSION_USER, SOME, SYSDATE
T   TABLE, THEN, TIMESERIES, TO, TRAILING, TRUE
U   UNBOUNDED, UNION, UNIQUE, UNSEGMENTED, USER, USING
W   WHEN, WHERE, WINDOW, WITH, WITHIN

Identifiers

Identifiers (names) of objects such as schema, table, projection, and column names can be up to 128 bytes in length.

Unquoted Identifiers
Unquoted SQL identifiers must begin with one of the following:
- Letters (A–Z or a–z, including letters with diacritical marks and non-Latin letters)
- Underscore (_)
Subsequent characters in an identifier can be:
- Letters
- Digits (0–9)
- Dollar sign ($). Dollar sign is not allowed in identifiers according to the SQL standard and could cause application portability problems.
- Underscore (_)

Quoted Identifiers
Identifiers enclosed in double quote (") characters can contain any character. If you want to include a double quote, you need a pair of them; for example, "" inside a quoted identifier represents one double-quote character.
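For example (an illustrative sketch using hypothetical names), the following statements create and query a table whose identifier contains spaces, a reserved word, and an embedded double-quote character:

=> CREATE TABLE "my ""select"" table" (c1 INT);
=> SELECT c1 FROM "my ""select"" table";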
You can use names that would otherwise be invalid, such as names that include only numeric characters ("123") or contain space characters, punctuation marks, keywords, and so on; for example:
CREATE SEQUENCE "my sequence!";
Double quotes are required for non-alphanumerics and SQL keywords such as "1time", "Next week", and "Select".

Note: Identifiers are not case-sensitive. Thus, identifiers "ABC", "ABc", and "aBc" are synonymous, as are ABC, ABc, and aBc.

Non-ASCII Characters
HP Vertica accepts non-ASCII UTF-8 Unicode characters for table names, column names, and other identifiers, extending the cases in which upper/lower case distinctions are ignored (case-folded) to all alphabets, including Latin, Cyrillic, and Greek.

Identifiers Are Stored As Created
SQL identifiers, such as table and column names, are no longer converted to lowercase. They are stored as created, and references to them are resolved using case-insensitive compares. It is not necessary to double quote mixed-case identifiers.
For example, the following statements create table ALLCAPS and insert a row:
=> CREATE TABLE ALLCAPS(c1 varchar(30));
=> INSERT INTO ALLCAPS values('upper case');
The following statements are variations of the same query and all return identical results:
=> SELECT * FROM ALLCAPS;
=> SELECT * FROM allcaps;
=> SELECT * FROM "allcaps";
All three commands return the same result:
     c1
------------
 upper case
(1 row)
Note that the system returns an error if you try to create table AllCaps:
=> CREATE TABLE allcaps(c1 varchar(30));
ROLLBACK:  table "AllCaps" already exists
See QUOTE_IDENT for additional information.

Case-Sensitive System Tables
The V_CATALOG.TABLES.TABLE_SCHEMA and TABLE_NAME columns are case sensitive when used with an equality (=) predicate in queries. For example, given the following sample schema, if you execute a query using the = predicate, HP Vertica returns 0 rows:
=> CREATE SCHEMA SS;
=> CREATE TABLE SS.TT (c1 int);
=> INSERT INTO ss.tt VALUES (1);
=> SELECT table_schema, table_name FROM v_catalog.tables WHERE table_schema ='ss';
 table_schema | table_name
--------------+------------
(0 rows)
Tip: Use the case-insensitive ILIKE predicate to return the expected results.
=> SELECT table_schema, table_name FROM v_catalog.tables WHERE table_schema ILIKE 'ss';
 table_schema | table_name
--------------+------------
 SS           | TT
(1 row)

Literals

Literals are numbers or strings used in SQL as constants. Literals are included in the select-list, along with expressions and built-in functions, and can also be constants. HP Vertica provides support for number-type literals (integers and numerics), string literals, VARBINARY string literals, and date/time literals. The various string literal formats are discussed in this section.

Number-Type Literals
There are three types of numbers in HP Vertica: integers, numerics, and floats.
- Integers are whole numbers less than 2^63 and must be digits.
- Numerics are whole numbers larger than 2^63, or numbers that include a decimal point with a precision and a scale. Numerics can contain exponents. Numbers that begin with 0x are hexadecimal numerics.
Numeric-type values can also be generated using casts from character strings. This is a more general syntax.
See the Examples section below, as well as Data Type Coercion Operators (CAST).

Syntax
digits
digits.[digits]
[digits].digits
digits e[+-]digits
[digits].digits e[+-]digits
digits.[digits] e[+-]digits

Parameters
digits   Represents one or more numeric characters (0 through 9).
e        Represents an exponent marker.

Notes
- At least one digit must follow the exponent marker (e), if e is present.
- There cannot be any spaces or other characters embedded in the constant.
- Leading plus (+) or minus (-) signs are not considered part of the constant; they are unary operators applied to the constant.
- In most cases a numeric-type constant is automatically coerced to the most appropriate type depending on context. When necessary, you can force a numeric value to be interpreted as a specific data type by casting it as described in Data Type Coercion Operators (CAST).
- Floating point literals are not supported. If you specifically need to specify a float, you can cast as described in Data Type Coercion Operators (CAST).
- HP Vertica follows the IEEE specification for floating point, including NaN (not a number) and Infinity (Inf).
- A NaN is not greater than and at the same time not less than anything, even itself. In other words, comparisons always return false whenever a NaN is involved. See Numeric Expressions for examples.
- Dividing integers (x / y) yields a NUMERIC result. You can use the // operator to truncate the result to a whole number.

Examples
The following are examples of number-type literals:
42   3.5   4.   .001   5e2   1.925e-3

Scientific notation:
=> SELECT NUMERIC '1e10';
  ?column?
-------------
 10000000000
(1 row)

BINARY scaling:
=> SELECT NUMERIC '1p10';
 ?column?
----------
     1024
(1 row)

=> SELECT FLOAT 'Infinity';
 ?column?
----------
 Infinity
(1 row)

The following examples illustrate using the / and // operators to divide integers:
VMart=> SELECT 40/25;
       ?column?
----------------------
 1.600000000000000000
(1 row)

VMart=> SELECT 40//25;
 ?column?
----------
        1
(1 row)
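As an additional hedged illustration of the notes above (exact output formatting may differ), a cast forces a literal to a specific type, and any comparison that involves NaN returns false:

=> SELECT 5::FLOAT, CAST('3.50' AS NUMERIC(10,2));
=> SELECT FLOAT 'NaN' = FLOAT 'NaN';  -- returns false: comparisons involving NaN are never true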
See Also
- Data Type Coercion

String Literals

String literals are string values surrounded by single or double quotes. Double-quoted strings are subject to the backslash, but single-quoted strings do not require a backslash, except for \' and \\. You can embed single quotes and backslashes into single-quoted strings. To include other backslash (escape) sequences, such as \t (tab), you must use the double-quoted form. Precede single-quoted strings with a space between the string and its preceding word, since single quotes are allowed in identifiers.

See Also
- SET STANDARD_CONFORMING_STRINGS
- SET ESCAPE_STRING_WARNING
- Internationalization Parameters
- Implement Locales for International Data Sets

Character String Literals
Character string literals are a sequence of characters from a predefined character set and are enclosed by single quotes. If the single quote is part of the sequence, it must be doubled as ''.

Syntax
'characters'

Parameters
characters   Arbitrary sequence of characters bounded by single quotes (')

Single Quotes in a String
The SQL standard way of writing a single-quote character within a string literal is to write two adjacent single quotes. For example:
=> SELECT 'Chester''s gorilla';
     ?column?
-------------------
 Chester's gorilla
(1 row)

Standard Conforming Strings and Escape Characters
HP Vertica uses standard conforming strings as specified in the SQL standard, which means that backslashes are treated as string literals, not escape characters.
Note: Earlier versions of HP Vertica did not use standard conforming strings, and backslashes were always considered escape sequences. To revert to this older behavior, set the StandardConformingStrings parameter to '0', as described in Configuration Parameters in the Administrator's Guide.

Examples
=> SELECT 'This is a string';
     ?column?
------------------
 This is a string
(1 row)

=> SELECT 'This \is a string';
WARNING:  nonstandard use of escape in a string literal at character 8
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
     ?column?
------------------
 This is a string
(1 row)

vmartdb=> SELECT E'This \is a string';
     ?column?
------------------
 This is a string

=> SELECT E'This is a \n new line';
       ?column?
----------------------
 This is a
  new line
(1 row)

=> SELECT 'String''s characters';
      ?column?
---------------------
 String's characters
(1 row)

See Also
- SET STANDARD_CONFORMING_STRINGS
- SET ESCAPE_STRING_WARNING
- Internationalization Parameters
- Implement Locales for International Data Sets

Dollar-Quoted String Literals
Dollar-quoted string literals are rarely used, but are provided here for your convenience.
The standard syntax for specifying string literals can be difficult to understand. To allow more readable queries in such situations, HP Vertica SQL provides dollar quoting. Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write complicated string literals than the standard-compliant single quote syntax.

Syntax
$$characters$$

Parameters
characters   Arbitrary sequence of characters bounded by paired dollar signs ($$)

Dollar-quoted string content is treated as a literal. Single quote, backslash, and dollar sign characters have no special meaning within a dollar-quoted string.

Notes
A dollar-quoted string that follows a keyword or identifier must be separated from the preceding word by whitespace; otherwise, the dollar-quoting delimiter is taken as part of the preceding identifier.

Examples
=> SELECT $$Fred's\n car$$;
   ?column?
---------------
 Fred's\n car
(1 row)

=> SELECT 'SELECT 'fact';';
ERROR:  syntax error at or near "';'" at character 21
LINE 1: SELECT 'SELECT 'fact';';

=> SELECT $$SELECT 'fact';$$;
    ?column?
----------------
 SELECT 'fact';
(1 row)

=> SELECT 'SELECT ''fact'';';
    ?column?
----------------
 SELECT 'fact';
(1 row)
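As one more illustrative sketch, any single quotes, double quotes, and lone dollar signs inside the paired $$ delimiters are kept exactly as typed:

=> SELECT $$He said: "it's $10"$$;
      ?column?
---------------------
 He said: "it's $10"
(1 row)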
Unicode String Literals

Syntax
U&'characters' [ UESCAPE 'escape-character' ]

Parameters
characters                 Arbitrary sequence of UTF-8 characters bounded by single quotes (')
Unicode escape character   A single character from the source language character set other than a hexit, plus sign (+), quote ('), double quote ("), or white space

Using Standard Conforming Strings
With StandardConformingStrings enabled, HP Vertica supports SQL standard Unicode character string literals (the character set is UTF-8 only).
Before you enter a Unicode character string literal, enable standard conforming strings in one of the following ways:
- To enable for all sessions, update the StandardConformingStrings configuration parameter. See Configuration Parameters in the Administrator's Guide.
- To change the behavior for the current session only, use the SET STANDARD_CONFORMING_STRINGS statement.
See also Extended String Literals.

Examples
To enter a Unicode character in hexadecimal, such as the Russian phrase for "thank you," use the following syntax:
=> SET STANDARD_CONFORMING_STRINGS TO ON;
=> SELECT U&'\0441\043F\0430\0441\0438\0431\043E' AS 'thank you';
 thank you
-----------
 спасибо
(1 row)

To enter the German word müde (u with an umlaut) in hexadecimal:
=> SELECT U&'m\00fcde';
 ?column?
----------
 müde
(1 row)

=> SELECT 'ü';
 ?column?
----------
 ü
(1 row)

To enter the LINEAR B IDEOGRAM B240 WHEELED CHARIOT in hexadecimal:
=> SELECT E'\xF0\x90\x83\x8C';
 ?column?
----------
 (wheeled chariot character)
(1 row)

Note: Not all fonts support the wheeled chariot character.

See Also
- SET STANDARD_CONFORMING_STRINGS
- SET ESCAPE_STRING_WARNING
- Internationalization Parameters
- Implement Locales for International Data Sets

VARBINARY String Literals
VARBINARY string literals allow you to specify hexadecimal or binary digits in a string literal.

Syntax
X'hexadecimal digits'
B'binary digits'

Parameters
X   Specifies hexadecimal digits. The string must be enclosed in single quotes (').
B   Specifies binary digits. The string must be enclosed in single quotes (').

Examples
=> SELECT X'abcd';
 ?column?
----------
 \253\315
(1 row)

=> SELECT B'101100';
 ?column?
----------
 ,
(1 row)

Extended String Literals

Syntax
E'characters'

Parameters
characters   Arbitrary sequence of characters bounded by single quotes (')

You can use C-style backslash sequences in extended string literals, which are an extension to the SQL standard. You specify an extended string literal by writing the letter E as a prefix (before the opening single quote); for example:
E'extended character string\n'
Within an extended string, the backslash character (\) starts a C-style backslash sequence, in which the combination of backslash and following character or numbers represents a special byte value, as shown in the following list. Any other character following a backslash is taken literally; for example, to include a backslash character, write two backslashes (\\).
- \\ is a backslash
- \b is a backspace
- \f is a form feed
- \n is a newline
- \r is a carriage return
- \t is a tab
- \x##, where ## is a 1 or 2-digit hexadecimal number; for example, \x07 is the bell (alert) character
- \###, where ### is a 1, 2, or 3-digit octal number representing a byte with the corresponding code

When an extended string literal is concatenated across lines, write only E before the first opening quote:
=> SELECT E'first part o'
->        'f a long line';
         ?column?
---------------------------
 first part of a long line
(1 row)

Two adjacent single quotes are used as one single quote:
=> SELECT 'Aren''t string literals fun?';
          ?column?
-----------------------------
 Aren't string literals fun?
(1 row)
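For instance, a brief hedged illustration of the hexadecimal and octal escapes listed above (output formatting approximated): \x41 and \101 are two ways of writing the byte for the letter A.

=> SELECT E'\x41', E'\101';
 ?column? | ?column?
----------+----------
 A        | A
(1 row)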
Standard Conforming Strings and Escape Characters
When interpreting commands, such as those entered in vsql or in queries passed via JDBC or ODBC, HP Vertica uses standard conforming strings as specified in the SQL standard. In standard conforming strings, backslashes are treated as string literals (ordinary characters), not escape characters.
Note: Text read in from files or streams (such as the data inserted using the COPY statement) is not treated as literal strings. The COPY command defines its own escape characters for the data it reads. See the COPY statement documentation for details.
In HP Vertica databases prior to 4.0, standard conforming strings were not on by default, and backslashes were considered escape sequences. After 4.0, escape sequences, including those in Windows path names, no longer work as they previously did. For example, the TAB character '\t' is two characters: '\' and 't'. E'...' is the extended character string literal format, so to treat backslashes as escape characters, use E'\t'.
The following options are available, but HP recommends that you migrate your application to use standard conforming strings at your earliest convenience, after warnings have been addressed.
- To revert to pre-4.0 behavior, set the StandardConformingStrings parameter to '0', as described in Configuration Parameters in the Administrator's Guide.
- To enable standard conforming strings permanently, set the StandardConformingStrings parameter to '1', as described in the procedure in the section "Identifying Strings That Are Not Standard Conforming," below.
- To enable standard conforming strings per session, use SET STANDARD_CONFORMING_STRINGS TO ON, which treats backslashes as ordinary (literal) characters for the current session only.
The two sections that follow help you identify issues between HP Vertica 3.5 and 4.0.

Identifying Strings That Are Not Standard Conforming
The following procedure can be used to identify non-standard conforming strings in your application so that you can convert them into standard conforming strings:
1. Be sure the StandardConformingStrings parameter is off, as described in Internationalization Parameters in the Administrator's Guide.
   => SELECT SET_CONFIG_PARAMETER ('StandardConformingStrings' ,'0');
   Note: HP recommends that you migrate your application to use standard conforming strings at your earliest convenience.
2. Turn on the EscapeStringWarning parameter. (ON is the default in HP Vertica Version 4.0 and later.)
   => SELECT SET_CONFIG_PARAMETER ('EscapeStringWarning','1');
   HP Vertica now returns a warning each time it encounters an escape string within a string literal. For example, HP Vertica interprets the \n in the following example as a new line:
   => SELECT 'a\nb';
   WARNING:  nonstandard use of escape in a string literal at character 8
   HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
    ?column?
   ----------
    a
    b
   (1 row)
   When StandardConformingStrings is ON, the string is interpreted as four characters: a \ n b.
   Modify each string that HP Vertica flags by extending it as in the following example:
   E'a\nb'
   Or if the string has quoted single quotes, double them; for example, 'one'' double'.
3. Turn on the StandardConformingStrings parameter for all sessions:
   SELECT SET_CONFIG_PARAMETER ('StandardConformingStrings' ,'1');
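To make the difference concrete, here is a hedged sketch (assuming StandardConformingStrings is now ON): a backslash in a plain single-quoted string is kept literally, whereas the same text written as an E'...' literal would convert the \n and \t sequences into control characters.

=> SELECT 'C:\new_data\test.txt';
       ?column?
----------------------
 C:\new_data\test.txt
(1 row)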
ignored the second consecutive quote and returns the following: => SELECT 'You''re here!'; ?column? -------------You're here! (1 row) This is the SQL standard representation and is preferred over the form, 'You\'re here!', because backslashes are not parsed as before. You need to escape the backslash: => SELECT (E'You\'re here!'); ?column? -------------You're here! (1 row) This behavior change introduces a potential incompatibility in the use of the vsql \set command, which automatically concatenates its arguments. For example, the following works in both HP Vertica 3.5 and 4.0: \set file '\'' `pwd` '/file.txt' '\''\echo :file vsql takes the four arguments and outputs the following: '/home/vertica/file.txt' HP Vertica Analytic Database (7.0.x) Page 62 of 1539 SQL Reference Manual SQL Language Elements In HP Vertica 3.5 the above \set file command could be written all with the arguments run together, but in 4.0 the adjacent single quotes are now parsed differently: \set file '\''`pwd`'/file.txt''\''\echo :file '/home/vertica/file.txt'' Note the extra single quote at the end. This is due to the pair of adjacent single quotes together with the backslash-quoted single quote. The extra quote can be resolved either as in the first example above, or by combining the literals as follows: \set file '\''`pwd`'/file.txt'''\echo :file '/home/vertica/file.txt' In either case the backslash-quoted single quotes should be changed to doubled single quotes as follows: \set file '''' `pwd` '/file.txt''' Additional Examples => SELECT 'This \is a string'; ?column? -----------------This \is a string (1 row) => SELECT E'This \is a string'; ?column? -----------------This is a string => SELECT E'This is a \n new line'; ?column? ---------------------This is a new line (1 row) => SELECT 'String''s characters'; ?column? -------------------String's characters (1 row) HP Vertica Analytic Database (7.0.x) Page 63 of 1539 SQL Reference Manual SQL Language Elements Date/Time Literals Date or time literal input must be enclosed in single quotes. Input is accepted in almost any reasonable format, including ISO 8601, SQL-compatible, traditional POSTGRES, and others. HP Vertica is more flexible in handling date/time input than the SQL standard requires.The exact parsing rules of date/time input and for the recognized text fields including months, days of the week, and time zones are described in Date/Time Expressions. Time Zone Values HP Vertica attempts to be compatible with the SQL standard definitions for time zones. However, the SQL standard has an odd mix of date and time types and capabilities. Obvious problems are: l Although the DATE type does not have an associated time zone, the TIME type can. Time zones in the real world have little meaning unless associated with a date as well as a time, since the offset can vary through the year with daylight-saving time boundaries. l HP Vertica assumes your local time zone for any data type containing only date or time. l The default time zone is specified as a constant numeric offset from UTC. It is therefore not possible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries. To address these difficulties, HP recommends using Date/Time types that contain both date and time when you use time zones. HP recommends that you do not use the type TIME WITH TIME ZONE, even though it is supported it for legacy applications and for compliance with the SQL standard. 
Time zones and time-zone conventions are influenced by political decisions, not just earth geometry. Time zones around the world became somewhat standardized during the 1900's, but continue to be prone to arbitrary changes, particularly with respect to daylight-savings rules. HP Vertica currently supports daylight-savings rules over the time period 1902 through 2038, corresponding to the full range of conventional UNIX system time. Times outside that range are taken to be in "standard time" for the selected time zone, no matter what part of the year in which they occur. Example Description PST Pacific Standard Time -8:00 ISO-8601 offset for PST -800 ISO-8601 offset for PST -8 ISO-8601 offset for PST zulu Military abbreviation for UTC z Short form of zulu HP Vertica Analytic Database (7.0.x) Page 64 of 1539 SQL Reference Manual SQL Language Elements Day of the Week Names The following tokens are recognized as names of days of the week: Day Abbreviations SUNDAY SUN MONDAY MON TUESDAY TUE, TUES WEDNESDAY WED, WEDS THURSDAY THU, THUR, THURS FRIDAY FRI SATURDAY SAT Month Names The following tokens are recognized as names of months: Month Abbreviations JANUARY JAN FEBRUARY FEB MARCH MAR APRIL APR MAY MAY JUNE JUN JULY JUL AUGUST AUG SEPTEMBER SEP, SEPT OCTOBER OCT NOVEMBER NOV DECEMBER DEC HP Vertica Analytic Database (7.0.x) Page 65 of 1539 SQL Reference Manual SQL Language Elements Interval Values An interval value represents the duration between two points in time. Syntax [ @ ] quantity unit [ quantity unit... ] [ AGO ] Parameters @ (at sign) is optional and ignored quantity Is an integer numeric constant unit Is one of the following units or abbreviations or plurals of the following units: MILLISECONDSECOND MINUTE HOUR AGO DAYWEEK MONTH YEAR DECADECENTURY MILLENNIUM [Optional] specifies a negative interval value (an interval going back in time). 'AGO' is a synonym for '-'. The amounts of different units are implicitly added up with appropriate sign accounting. Notes l Quantities of days, hours, minutes, and seconds can be specified without explicit unit markings. For example: '1 12:59:10' is read the same as '1 day 12 hours 59 min 10 sec' l l l The boundaries of an interval constant are: n '9223372036854775807 usec' to '9223372036854775807 usec ago' n 296533 years 3 mons 21 days 04:00:54.775807 to -296533 years -3 mons -21 days 04:00:54.775807 The range of an interval constant is +/– 263 – 1 (plus or minus two to the sixty-third minus one) microseconds. In HP Vertica, the interval fields are additive and accept large floating-point numbers. HP Vertica Analytic Database (7.0.x) Page 66 of 1539 SQL Reference Manual SQL Language Elements Examples => SELECT INTERVAL '1 12:59:10'; ?column? -----------1 12:59:10 (1 row) => SELECT INTERVAL '9223372036854775807 usec'; ?column? --------------------------106751991 04:00:54.775807 (1 row) => SELECT INTERVAL '-9223372036854775807 usec'; ?column? ----------------------------106751991 04:00:54.775807 (1 row) => SELECT INTERVAL '-1 day 48.5 hours'; ?column? ----------3 00:30 (1 row) => SELECT TIMESTAMP 'Apr 1, 07' - TIMESTAMP 'Mar 1, 07'; ?column? ---------31 (1 row) => SELECT TIMESTAMP 'Mar 1, 07' - TIMESTAMP 'Feb 1, 07'; ?column? ---------28 (1 row) => SELECT TIMESTAMP 'Feb 1, 07' + INTERVAL '29 days'; ?column? 
--------------------03/02/2007 00:00:00 (1 row) => SELECT TIMESTAMP WITHOUT TIME ZONE '1999-10-01 00:00:01' + INTERVAL '1 month - 1 second' AS "Oct 31"; Oct 31 --------------------1999-10-31 00:00:00 (1 row) Interval-Literal The following table lists the units allowed for the required interval-literal parameter. HP Vertica Analytic Database (7.0.x) Page 67 of 1539 SQL Reference Manual SQL Language Elements Unit Description a Julian year, 365.25 days exactly ago Indicates negative time offset c, cent, century Century centuries Centuries d, day Day days Days dec, decade Decade decades, decs Decades h, hour, hr Hour hours, hrs Hours ka Julian kilo-year, 365250 days exactly m Minute or month for year/month, depending on context. See Notes below this table. microsecond Microsecond microseconds Microseconds mil, millennium Millennium millennia, mils Millennia millisecond Millisecond milliseconds Milliseconds min, minute, mm Minute mins, minutes Minutes mon, month Month mons, months Months ms, msec, millisecond Millisecond mseconds, msecs Milliseconds q, qtr, quarter Quarter qtrs, quarters Quarters HP Vertica Analytic Database (7.0.x) Page 68 of 1539 SQL Reference Manual SQL Language Elements Unit Description s, sec, second Second seconds, secs Seconds us, usec Microsecond microseconds, useconds, usecs Microseconds w, week Week weeks Weeks y, year, yr Year years, yrs Years Processing the Input Unit 'm' The input unit 'm' can represent either 'months' or 'minutes,' depending on the context. For instance, the following command creates a one-column table with an interval value: => CREATE TABLE int_test(i INTERVAL YEAR TO MONTH); In the first INSERT statement, the values are inserted as 1 year, six months: => INSERT INTO int_test VALUES('1 year 6 months'); The second INSERT statement results in an error from specifying minutes for a YEAR TO MONTH interval. At runtime, the result will be a NULL: => INSERT INTO int_test VALUES('1 year 6 minutes'); ERROR: invalid input syntax for type interval year to month: "1 year 6 minutes" In the third INSERT statement, the 'm' is processed as months (not minutes), because DAY TO SECOND is truncated: => INSERT INTO int_test VALUES('1 year 6 m'); -- the m counts as months The table now contains two identical values, with no minutes: => SELECT * FROM int_test; i ----1 year 6 months 1 year 6 months (2 rows) HP Vertica Analytic Database (7.0.x) Page 69 of 1539 SQL Reference Manual SQL Language Elements In the following command, the 'm' counts as minutes, because the DAY TO SECOND interval-qualifier extracts day/time values from the input: => SELECT INTERVAL '1y6m' DAY TO SECOND; ?column? ----------365 days 6 mins (1 row) Interval-Qualifier The following table lists the optional interval qualifiers. Values in INTERVAL fields, other than SECOND, are integers with a default precision of 2 when they are not the first field. You cannot combine day/time and year/month qualifiers. For example, the following intervals are not allowed: l DAY TO YEAR l HOUR TO MONTH Interval Type Day/time intervals Units Valid interval-literal entries DAY Unconstrained. DAY TO HOUR An interval that represents a span of days and hours. DAY TO MINUTE An interval that represents a span of days and minutes. DAY TO SECOND (Default) interval that represents a span of days, hours, minutes, seconds, and fractions of a second if subtype unspecified. HOUR Hours within days. HOUR TO MINUTE An interval that represents a span of hours and minutes. 
HOUR TO SECOND An interval that represents a span of hours and seconds. MINUTE Minutes within hours. MINUTE TO SECOND An interval that represents a span of minutes and seconds. HP Vertica Analytic Database (7.0.x) Page 70 of 1539 SQL Reference Manual SQL Language Elements Interval Type Units Valid interval-literal entries SECOND Seconds within minutes. Note: The SECOND field can have an interval fractional seconds precision, which indicates the number of decimal digits maintained following the decimal point in the SECONDS value. When SECOND is not the first field, it has a precision of 2 places before the decimal point. Year/month MONTH intervals Months within year. YEAR Unconstrained. YEAR TO MONTH An interval that represents a span of years and months. HP Vertica Analytic Database (7.0.x) Page 71 of 1539 SQL Reference Manual SQL Language Elements Operators Operators are logical, mathematical, and equality symbols used in SQL to evaluate, compare, or calculate values. Binary Operators Each of the functions in the following table works with BINARY and VARBINARY data types. Operator Function Description '=' binary_eq Equal to '<>' binary_ne Not equal to '<' binary_lt Less than '<=' binary_le Less than or equal to '>' binary_gt Greater than binary_ge Greater than or equal to '&' binary_and And '~' binary_not Not '|' binary_or Or '#' binary_xor Either or '||' binary_cat Concatenate '>=' Notes l If the arguments vary in length binary operators treat the values as though they are all equal in length by right-extending the smaller values with the zero byte to the full width of the column (except when using the binary_cat function). For example, given the values 'ff' and 'f', the value 'f' is treated as 'f0'. l Operators are strict with respect to nulls. The result is null if any argument is null. For example, null <> 'a'::binary returns null. l To apply the OR ('|') operator to a VARBINARY type, explicitly cast the arguments; for example: => SELECT '1'::VARBINARY | '2'::VARBINARY; ?column? HP Vertica Analytic Database (7.0.x) Page 72 of 1539 SQL Reference Manual SQL Language Elements ---------3 (1 row) Similarly, to apply the LENGTH, REPEAT, TO_HEX, and SUBSTRING functions to a BINARY type, explicitly cast the argument; for example: => SELECT LENGTH('\\001\\002\\003\\004'::varbinary(4)); LENGTH -------4 (1 row) When applying an operator or function to a column, the operator's or function's argument type is derived from the column type. 
Examples In the following example, the zero byte is not removed from column cat1 when values are concatenated: => SELECT 'ab'::BINARY(3) || 'cd'::BINARY(2) AS cat1, 'ab'::VARBINARY(3) || 'cd'::VARBINARY(2) AS cat2; cat1 | cat2 ----------+-----ab\000cd | abcd (1 row) When the binary value 'ab'::binary(3) is translated to varbinary, the result is equivalent to 'ab\\000'::varbinary(3); for example: => SELECT 'ab'::binary(3); binary -------ab\000 (1 row) The following example performs a bitwise AND operation on the two input values (see also BIT_ AND): => SELECT '10001' & '011' as AND; AND ----1 (1 row) The following example performs a bitwise OR operation on the two input values (see also BIT_OR): HP Vertica Analytic Database (7.0.x) Page 73 of 1539 SQL Reference Manual SQL Language Elements => SELECT '10001' | '011' as OR; OR ------10011 (1 row) The following example concatenates the two input values: => SELECT '10001' || '011' as CAT; CAT ---------10001011 (1 row) Boolean Operators Syntax [ AND | OR | NOT ] Parameters SQL uses a three-valued Boolean logic where the null value represents "unknown." a b a AND b a OR b TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE NULL TRUE NULL TRUE FALSE FALSE FALSE FALSE FALSE NULL FALSE NULL NULL NULL NULL NULL a NOT a TRUE FALSE FALSE TRUE NULL NULL HP Vertica Analytic Database (7.0.x) Page 74 of 1539 SQL Reference Manual SQL Language Elements Notes l The operators AND and OR are commutative, that is, you can switch the left and right operand without affecting the result. However, the order of evaluation of subexpressions is not defined. When it is essential to force evaluation order, use a CASE construct. l Do not confuse Boolean operators with the Boolean-Predicate or the Boolean data type, which can have only two values: true and false. Comparison Operators Comparison operators are available for all data types where comparison makes sense. All comparison operators are binary operators that return values of True, False, or NULL. Syntax and Parameters < less than > greater than <= less than or equal to >= greater than or equal to = or <=> equal <> or != not equal Notes l The != operator is converted to <> in the parser stage. It is not possible to implement != and <> operators that do different things. l The comparison operators return NULL (signifying "unknown") when either operand is null. l The <=> operator performs an equality comparison like the = operator, but it returns true, instead of NULL, if both operands are NULL, and false, instead of NULL, if one operand is NULL. Data Type Coercion Operators (CAST) Data type coercion (casting) passes an expression value to an input conversion routine for a specified data type, resulting in a constant of the indicated type. Syntax SELECT CAST ( expression AS data_type ) SELECT expression::data_type HP Vertica Analytic Database (7.0.x) Page 75 of 1539 SQL Reference Manual SQL Language Elements SELECT data_type 'string' Parameters expression An expression of any type data_type Converts the value of expression to one of the following data types: BINARY BOOLEAN CHARACTER DATE/TIME NUMERIC Notes l In HP Vertica, data type coercion (casting) can be invoked by an explicit cast request. It must use one of the following constructs: => SELECT CAST ( expression AS data_type ) => SELECT expression::data_type => SELECT data_type 'string' l The explicit type cast can be omitted if there is no ambiguity as to the type the constant must be. 
For example, when a constant is assigned directly to a column, it is automatically coerced to the column's data type. l If a binary value is cast (implicitly or explicitly) to a binary type with a smaller length, the value is silently truncated. For example: => SELECT 'abcd'::BINARY(2); ?column? ---------ab (1 row) l Similarly, if a character value is cast (implicitly or explicitly) to a character value with a smaller length, the value is silently truncated. For example: => SELECT 'abcd'::CHAR(3); HP Vertica Analytic Database (7.0.x) Page 76 of 1539 SQL Reference Manual SQL Language Elements ?column? ---------abc l l HP Vertica supports only casts and resize operations as follows: n BINARY to and from VARBINARY n VARBINARY to and from LONG VARBINARY n BINARY to and from LONG VARBINARY On binary data that contains a value with fewer bytes than the target column, values are rightextended with the zero byte '\0' to the full width of the column. Trailing zeros on variable-length binary values are not right-extended: => SELECT 'ab'::BINARY(4), 'ab'::VARBINARY(4), 'ab'::LONG VARBINARY(4); ?column? | ?column? | ?column? ------------+----------+---------ab\000\000 | ab | ab (1 row) Examples => SELECT CAST((2 + 2) AS VARCHAR); ?column? ---------4 (1 row) => SELECT (2 + 2)::VARCHAR; ?column? ---------4 (1 row) => SELECT INTEGER '123'; ?column? ---------123 (1 row) => SELECT (2 + 2)::LONG VARCHAR ?column? ---------4 (1 row) => SELECT '2.2' + 2; ERROR: invalid input syntax for integer: "2.2" HP Vertica Analytic Database (7.0.x) Page 77 of 1539 SQL Reference Manual SQL Language Elements => SELECT FLOAT '2.2' + 2; ?column? ---------4.2 (1 row) See Also l Data Type Conversions l Data Type Coercion Chart Date/Time Operators Syntax [ + | – | * | / ] Parameters + – * / Addition Subtraction Multiplication Division Notes l The operators described below that take TIME or TIMESTAMP inputs actually come in two variants: one that takes TIME WITH TIME ZONE or TIMESTAMP WITH TIME ZONE, and one that takes TIME WITHOUT TIME ZONE or TIMESTAMP WITHOUT TIME ZONE. For brevity, these variants are not shown separately. l The + and * operators come in commutative pairs (for example both DATE + INTEGER and INTEGER + DATE); only one of each such pair is shown. 
Example Result Type Result DATE '2001-09-28' + INTEGER '7' DATE '2001-10-05' DATE '2001-09-28' + INTERVAL '1 HOUR' TIMESTAMP '2001-09-28 01:00:00' DATE '2001-09-28' + TIME '03:00' TIMESTAMP '2001-09-28 03:00:00' INTERVAL '1 DAY' + INTERVAL '1 HOUR' INTERVAL '1 DAY 01:00:00' HP Vertica Analytic Database (7.0.x) Page 78 of 1539 SQL Reference Manual SQL Language Elements Example Result Type Result TIMESTAMP '2001-09-28 01:00' + INTERVAL '23 HOURS' TIMESTAMP '2001-09-29 00:00:00' TIME '01:00' + INTERVAL '3 HOURS' TIME '04:00:00' - INTERVAL '23 HOURS' INTERVAL '-23:00:00' DATE '2001-10-01' – DATE '2001-09-28' INTEGER '3' DATE '2001-10-01' – INTEGER '7' DATE '2001-09-24' DATE '2001-09-28' – INTERVAL '1 HOUR' TIMESTAMP '2001-09-27 23:00:00' TIME '05:00' – TIME '03:00' INTERVAL '02:00:00' TIME '05:00' '2 HOURS' TIME '03:00:00' TIMESTAMP '2001-09-28 23:00' – INTERVAL '23 HOURS' TIMESTAMP '2001-09-28 00:00:00' INTERVAL '1 DAY' – INTERVAL '1 HOUR' INTERVAL '1 DAY -01:00:00' TIMESTAMP '2001-09-29 03:00' – TIMESTAMP '2001-09-27 12:00' INTERVAL '1 DAY 15:00:00' 900 * INTERVAL '1 SECOND' INTERVAL '00:15:00' 21 * INTERVAL '1 DAY' INTERVAL '21 DAYS' DOUBLE PRECISION '3.5' * INTERVAL '1 HOUR' INTERVAL '03:30:00' INTERVAL '1 HOUR' / DOUBLE PRECISION '1.5' INTERVAL INTERVAL '00:40:00' Mathematical Operators Mathematical operators are provided for many data types. Operator Description Example Result ! Factorial 5 ! 120 + Addition 2 + 3 5 – Subtraction 2 – 3 –1 * Multiplication 2 * 3 6 / Division (integer division produces NUMERIC results). 4 / 2 2.00... HP Vertica Analytic Database (7.0.x) Page 79 of 1539 SQL Reference Manual SQL Language Elements Operator Description Example Result // With integer division, returns an INTEGER rather than a NUMERIC. 117.32 // 2.5 46 % Modulo (remainder) 5 % 4 1 ^ Exponentiation 2.0 ^ 3.0 8 |/ Square root |/ 25.0 5 ||/ Cube root ||/ 27.0 3 !! Factorial (prefix operator) !! 5 120 @ Absolute value @ -5.0 5 & Bitwise AND 91 & 15 11 | Bitwise OR 32 | 3 35 # Bitwise XOR 17 # 5 20 ~ Bitwise NOT ~1 -2 << Bitwise shift left 1 << 4 16 >> Bitwise shift right 8 >> 2 2 Notes l The bitwise operators work only on integer data types, whereas the others are available for all numeric data types. l HP Vertica supports the use of the factorial operators on positive and negative floating point (DOUBLE PRECISION) numbers as well as integers. For example: => SELECT 4.98!; ?column? -----------------115.978600750905 (1 row) l Factorial is defined in term of the gamma function, where (-1) = Infinity and the other negative integers are undefined. For example: (–4)! = NaN –! = –(4!) = –24 HP Vertica Analytic Database (7.0.x) Page 80 of 1539 SQL Reference Manual SQL Language Elements l Factorial is defined as z! = gamma(z+1) for all complex numbers z. See the Handbook of Mathematical Functions (1964) Section 6.1.5. l See MOD() for details about the behavior of %. NULL Operators To check whether a value is or is not NULL, use the constructs: expression IS NULL expression IS NOT NULL Alternatively, use equivalent, but nonstandard, constructs: expression ISNULL expression NOTNULL Do not write expression = NULL because NULL represents an unknown value, and two unknown values are not necessarily equal. This behavior conforms to the SQL standard. Note: Some applications might expect that expression = NULL returns true if expression evaluates to null. HP Vertica strongly recommends that these applications be modified to comply with the SQL standard. 
String Concatenation Operators To concatenate two strings on a single line, use the concatenation operator (two consecutive vertical bars). Syntax string || string Parameters string Is an expression of type CHAR or VARCHAR Notes l || is used to concatenate expressions and constants. The expressions are cast to VARCHAR if possible, otherwise to VARBINARY, and must both be one or the other. l Two consecutive strings within a single SQL statement on separate lines are automatically concatenated HP Vertica Analytic Database (7.0.x) Page 81 of 1539 SQL Reference Manual SQL Language Elements Examples The following example is a single string written on two lines: => SELECT E'xx'-> '\\'; ?column? ---------xx\ (1 row) The following examples show two strings concatenated: => SELECT E'xx' ||-> '\\'; ?column? ---------xx\\ (1 row) => SELECT 'auto' || 'mobile'; ?column? ---------automobile (1 row) => SELECT 'auto'-> 'mobile'; ?column? ---------automobile (1 row) => SELECT 1 || 2; ?column? ---------12 (1 row) => SELECT '1' || '2'; ?column? ---------12 (1 row)=> SELECT '1'-> '2'; ?column? ---------12 (1 row) HP Vertica Analytic Database (7.0.x) Page 82 of 1539 SQL Reference Manual SQL Language Elements Expressions SQL expressions are the components of a query that compare a value or values against other values. They can also perform calculations. Expressions found inside any SQL command are usually in the form of a conditional statement. Operator Precedence The following table shows operator precedence in decreasing (high to low) order. Note: When an expression includes more than one operator, HP recommends that you specify the order of operation using parentheses, rather than relying on operator precedence. Operator/Element Associativity Description . left table/column name separator :: left typecast [ ] left array element selection - right unary minus ^ left exponentiation * / % left multiplication, division, modulo + - left addition, subtraction IS IS TRUE, IS FALSE, IS UNKNOWN, IS NULL IN set membership BETWEEN range containment OVERLAPS time interval overlap LIKE string pattern matching < > less than, greater than = right equality, assignment NOT right logical negation AND left logical conjunction OR left logical disjunction HP Vertica Analytic Database (7.0.x) Page 83 of 1539 SQL Reference Manual SQL Language Elements Expression Evaluation Rules The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. To force evaluation in a specific order, use a CASE construct. For example, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause: => SELECT x, y WHERE x <> 0 AND y/x > 1.5; But this is safe: => SELECT x, y WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; A CASE construct used in this fashion defeats optimization attempts, so use it only when necessary. (In this particular example, it would be best to avoid the issue by writing y > 1.5*x instead.) Aggregate Expressions An aggregate expression represents the application of an aggregate function across the rows or groups of rows selected by a query. 
Using AVG() as an example, the syntax of an aggregate expression is one of the following: l Invokes the aggregate across all input rows for which the given expression yields a non-null value: AVG (expression) l Is the same as AVG(expression), because ALL is the default: AVG (ALL expression) l Invokes the AVG() function across all input rows for all distinct, non-null values of the expression, where expression is any value expression that does not itself contain an aggregate expression. AVG (DISTINCT expression) HP Vertica Analytic Database (7.0.x) Page 84 of 1539 SQL Reference Manual SQL Language Elements An aggregate expression only can appear in the select list or HAVING clause of a SELECT statement. It is forbidden in other clauses, such as WHERE, because those clauses are evaluated before the results of aggregates are formed. CASE Expressions The CASE expression is a generic conditional expression that can be used wherever an expression is valid. It is similar to case and if/then/else statements in other languages. Syntax (form 1) CASE WHEN condition THEN result [ WHEN condition THEN result ] ... [ ELSE result ] END Parameters condition Is an expression that returns a boolean (true/false) result. If the result is false, subsequent WHEN clauses are evaluated in the same manner. result Specifies the value to return when the associated condition is true. ELSE result If no condition is true then the value of the CASE expression is the result in the ELSE clause. If the ELSE clause is omitted and no condition matches, the result is null. Syntax (form 2) CASE expression WHEN value THEN result [ WHEN value THEN result ] ... [ ELSE result ] END Parameters expression An expression that is evaluated and compared to all the value specifications in the WHEN clauses until one is found that is equal. value Specifies a value to compare to the expression. result Specifies the value to return when the expression is equal to the specified value. ELSE result Specifies the value to return when the expression is not equal to any value; if no ELSE clause is specified, the value returned is null. HP Vertica Analytic Database (7.0.x) Page 85 of 1539 SQL Reference Manual SQL Language Elements Notes The data types of all the result expressions must be convertible to a single output type. Examples The following examples show two uses of the CASE statement. => SELECT * FROM test; a --1 2 3 => SELECT a, CASE WHEN a=1 THEN 'one' WHEN a=2 THEN 'two' ELSE 'other' END FROM test; a | case ---+------1 | one 2 | two 3 | other => SELECT a, CASE a WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE 'other' END FROM test; a | case ---+------1 | one 2 | two 3 | other Special Example A CASE expression does not evaluate subexpressions that are not needed to determine the result. You can use this behavior to avoid division-by-zero errors: => SELECT x FROM T1 WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; HP Vertica Analytic Database (7.0.x) Page 86 of 1539 SQL Reference Manual SQL Language Elements Column References Syntax [ [ [db-name.]schema. ] tablename. ] columnname Parameters [ [db-name.]schema. ] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. 
For example, you can use a database and a schema name (mydb.myschema). tablename. columnname Is one of: l The name of a table l An alias for a table defined by means of a FROM clause in a query Is the name of a column that must be unique across all the tables being used in a query Notes There are no space characters in a column reference. If you do not specify a schema, HP Vertica searches the existing schemas according to the order defined in the SET SEARCH_PATH command. Example This example uses the schema from the VMart database. See Introducing the VMart Example Database. In the following command, transaction_type and transaction_time are the unique column references, store is the name of the schema, and store_sales_fact is the table name: => SELECT transaction_type, transaction_time FROM store.store_sales_fact ORDER BY transaction_time; transaction_type | transaction_time ------------------+------------------ HP Vertica Analytic Database (7.0.x) Page 87 of 1539 SQL Reference Manual SQL Language Elements purchase purchase purchase purchase purchase purchase purchase return return purchase purchase purchase purchase purchase purchase purchase purchase purchase return purchase (20 rows) | | | | | | | | | | | | | | | | | | | | 00:00:23 00:00:32 00:00:54 00:00:54 00:01:15 00:01:30 00:01:50 00:03:34 00:03:35 00:03:39 00:05:13 00:05:20 00:05:23 00:05:27 00:05:30 00:05:35 00:05:35 00:05:42 00:06:36 00:06:39 Comments A comment is an arbitrary sequence of characters beginning with two consecutive hyphen characters and extending to the end of the line. For example: -- This is a standard SQL comment A comment is removed from the input stream before further syntax analysis and is effectively replaced by white space. Alternatively, C-style block comments can be used where the comment begins with /* and extends to the matching occurrence of */. /* multiline comment * with nesting: /* nested block comment */ */ These block comments nest, as specified in the SQL standard. Unlike C, you can comment out larger blocks of code that might contain existing block comments. Date/Time Expressions HP Vertica uses an internal heuristic parser for all date/time input support. Dates and times are input as strings, and are broken up into distinct fields with a preliminary determination of what kind of information might be in the field. Each field is interpreted and either assigned a numeric value, ignored, or rejected. The parser contains internal lookup tables for all textual fields, including months, days of the week, and time zones. The date/time type inputs are decoded using the following procedure. HP Vertica Analytic Database (7.0.x) Page 88 of 1539 SQL Reference Manual SQL Language Elements l Break the input string into tokens and categorize each token as a string, time, time zone, or number. l If the numeric token contains a colon (:), this is a time string. Include all subsequent digits and colons. l If the numeric token contains a dash (-), slash (/), or two or more dots (.), this is a date string which might have a text month. l If the token is numeric only, then it is either a single field or an ISO 8601 concatenated date (for example, 19990113 for January 13, 1999) or time (for example, 141516 for 14:15:16). l If the token starts with a plus (+) or minus (–), then it is either a time zone or a special field. l If the token is a text string, match up with possible strings. 
l Do a binary-search table lookup for the token as either a special string (for example, today), day (for example, Thursday), month (for example, January), or noise word (for example, at, on). l Set field values and bit mask for fields. For example, set year, month, day for today, and additionally hour, minute, second for now. l If not found, do a similar binary-search table lookup to match the token with a time zone. l If still not found, throw an error. l When the token is a number or number field: l If there are eight or six digits, and if no other date fields have been previously read, then interpret as a "concatenated date" (for example, 19990118 or 990118). The interpretation is YYYYMMDD or YYMMDD. l If the token is three digits and a year has already been read, then interpret as day of year. l If four or six digits and a year has already been read, then interpret as a time (HHMM or HHMMSS). l If three or more digits and no date fields have yet been found, interpret as a year (this forces yymm-dd ordering of the remaining date fields). l Otherwise the date field ordering is assumed to follow the DateStyle setting: mm-dd-yy, ddmm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range. l If BC has been specified, negate the year and add one for internal storage. (There is no year zero in the our implementation, so numerically 1 BC becomes year zero.) l If BC was not specified, and if the year field was two digits in length, then adjust the year to four digits. If the field is less than 70, then add 2000, otherwise add 1900. Tip: Gregorian years AD 1–99 can be entered by using 4 digits with leading zeros (for example, HP Vertica Analytic Database (7.0.x) Page 89 of 1539 SQL Reference Manual SQL Language Elements 0099 is AD 99). Month Day Year Ordering For some formats, ordering of month, day, and year in date input is ambiguous and there is support for specifying the expected ordering of these fields. Special Date/Time Values HP Vertica supports several special date/time values for convenience, as shown below. All of these values need to be written in single quotes when used as constants in SQL statements. The values INFINITY and -INFINITY are specially represented inside the system and are displayed the same way. The others are simply notational shorthands that are converted to ordinary date/time values when read. (In particular, NOW and related strings are converted to a specific time value as soon as they are read.) String Valid Data Types Description epoch DATE, TIMESTAMP 1970-01-01 00:00:00+00 (UNIX SYSTEM TIME ZERO) INFINITY TIMESTAMP Later than all other time stamps -INFINITY TIMESTAMP Earlier than all other time stamps NOW DATE, TIME, TIMESTAMP Current transaction's start time Note: NOW is not the same as the NOW function. TODAY DATE, TIMESTAMP Midnight today TOMORROW DATE, TIMESTAMP Midnight tomorrow YESTERDAY DATE, TIMESTAMP Midnight yesterday ALLBALLS TIME 00:00:00.00 UTC The following SQL-compatible functions can also be used to obtain the current time value for the corresponding data type: l CURRENT_DATE l CURRENT_TIME l CURRENT_TIMESTAMP l LOCALTIME l LOCALTIMESTAMP HP Vertica Analytic Database (7.0.x) Page 90 of 1539 SQL Reference Manual SQL Language Elements The latter four accept an optional precision specification. (See Date/Time Functions.) However, these functions are SQL functions and are not recognized as data input strings. NULL Value NULL is a reserved keyword used to indicate that a data value is unknown. 
Be very careful when using NULL in expressions. NULL is not greater than, less than, equal to, or not equal to any other expression. Use the Boolean-Predicate for determining whether an expression value is NULL. Notes l HP Vertica stores data in projections, which are sorted in a specific way. All columns are stored in ASC (ascending) order. For columns of data type NUMERIC, INTEGER, DATE, TIME, TIMESTAMP, and INTERVAL, NULL values are placed at the beginning of sorted projections (NULLS FIRST), while for columns of data type FLOAT, STRING, and BOOLEAN, NULL values are placed at the end (NULLS LAST). For details, see Analytics Null Placement and Minimizing Sort Operations in the Programmer's Guide. l HP Vertica also accepts NUL characters ('\0') in constant strings and no longer removes null characters from VARCHAR fields on input or output. NUL is the ASCII abbreviation for the NULL character. l You can write queries with expressions that contain the <=> operator for NULL=NULL joins. See Equi-joins and Non Equi-Joins in the Programmer's Guide. See Also l NULL-handling Functions Numeric Expressions HP Vertica follows the IEEE specification for floating point, including NaN. A NaN is not greater than and at the same time not less than anything, even itself. In other words, comparisons always return false whenever a NaN is involved. Examples => SELECT CBRT('Nan'); -- cube root CBRT -----NaN (1 row) => SELECT 'Nan' > 1.0; ?column? HP Vertica Analytic Database (7.0.x) Page 91 of 1539 SQL Reference Manual SQL Language Elements ---------f (1 row) HP Vertica Analytic Database (7.0.x) Page 92 of 1539 SQL Reference Manual SQL Language Elements Predicates Predicates are truth-tests. If the predicate test is true, it returns a value. Each predicate is evaluated per row, so that when the predicate is part of an entire table SELECT statement, the statement can return multiple results. Predicates consist of a set of parameters and arguments. For example, in the following example WHERE clause: WHERE name = 'Smith'; l name = 'Smith' is the predicate l 'Smith' is an expression BETWEEN-predicate The special BETWEEN predicate is available as a convenience. Syntax a BETWEEN x AND y Notes a BETWEEN x AND y is equivalent to: a >= x AND a <= y Similarly: a NOT BETWEEN x AND y is equivalent to: a < x OR a > y Boolean-Predicate Retrieves rows where the value of an expression is true, false, or unknown (null). HP Vertica Analytic Database (7.0.x) Page 93 of 1539 SQL Reference Manual SQL Language Elements Syntax expression IS [NOT] TRUE expression IS [NOT] FALSE expression IS [NOT] UNKNOWN Notes l A null input is treated as the value UNKNOWN. l IS UNKNOWN and IS NOT UNKNOWN are effectively the same as the NULL-predicate, except that the input expression does not have to be a single column value. To check a single column value for NULL, use the NULL-predicate. l Do not confuse the boolean-predicate with Boolean Operators or the Boolean data type, which can have only two values: true and false. Column-Value-Predicate Syntax column-name comparison-op constant-expression Parameters column-name A single column of one the tables specified in the FROM Clause. comparison-op A Comparison Operators. constant-expression A constant value of the same data type as the column-name. Notes To check a column value for NULL, use the NULL-predicate. 
Examples table.column1 = 2 table.column2 = 'Seafood' table.column3 IS NULL HP Vertica Analytic Database (7.0.x) Page 94 of 1539 SQL Reference Manual SQL Language Elements IN-predicate Syntax column-expression [ NOT ] IN ( list-expression ) Parameters column-expression One or more columns from the tables specified in the FROM Clause. list-expression Comma-separated list of constant values matching the data type of the column-expression Examples x, y IN ((1,2), (3, 4)), OR x, y IN (SELECT a, b FROM table)x IN (5, 6, 7) INTERPOLATE Used to join two event series together using some ordered attribute, event series joins let you compare values from two series directly, rather than having to normalize the series to the same measurement interval. Syntax expression1 INTERPOLATE PREVIOUS VALUE expression2 Parameters expression1expression2 olumn-reference from one the tables specified in the FROM Clause. The column-reference can be any data type, but DATE/TIME types are the most useful, especially TIMESTAMP,since you are joining data that represents an event series. PREVIOUS VALUE Pads the non-preserved side with the previous values from relation when there is no match. Input rows are sorted in ascending logical order of the join column. Note: An ORDER BY clause, if used, does not determine the input order but only determines query output order. HP Vertica Analytic Database (7.0.x) Page 95 of 1539 SQL Reference Manual SQL Language Elements Notes l An event series join is an extension of a regular outer join. Instead of padding the non-preserved side with null values when there is no match, the event series join pads the non-preserved side with the previous values from the table. l The difference between expressing a regular outer join and an event series join is the INTERPOLATE predicate, which is used in the ON clause. See the Examples section below Notes and Restrictions. See also Event Series Joins in the Programmer's Guide. l Data is logically partitioned on the table in which it resides, based on other ON clause equality predicates. l Interpolated values come from the table that contains the null, not from the other table. l HP Vertica does not guarantee that there will be no null values in the output. If there is no previous value for a mismatched row, that row will be padded with nulls. l Event series join requires that both tables be sorted on columns in equality predicates, in any order, followed by the INTERPOLATED column. If data is already sorted in this order, then an explicit sort is avoided, which can improve query performance. For example, given the following tables: ask: exchange, stock, ts, pricebid: exchange, stock, ts, price In the query that follows n ask is sorted on exchange, stock (or the reverse), ts n bid is sorted on exchange, stock (or the reverse), ts SELECT ask.price - bid.price, ask.ts, ask.stock, ask.exchange FROM ask FULL OUTER JOIN bid ON ask.stock = bid.stock AND ask.exchange = bid.exchange AND ask.ts INTERPOLATE PREVIOUS VALUE bid.ts; Restrictions l Only one INTERPOLATE expression is allowed per join. l INTERPOLATE expressions are used only with ANSI SQL-99 syntax (the ON clause), which is already true for full outer joins. l INTERPOLATE can be used with equality predicates only. HP Vertica Analytic Database (7.0.x) Page 96 of 1539 SQL Reference Manual SQL Language Elements l The AND operator is supported but not the OR and NOT operators. l Expressions and implicit or explicit casts are not supported, but subqueries are allowed. 
Example The examples that follow use this simple schema. CREATE TABLE t(x TIME); CREATE TABLE t1(y TIME); INSERT INTO t VALUES('12:40:23'); INSERT INTO t VALUES('14:40:25'); INSERT INTO t VALUES('14:45:00'); INSERT INTO t VALUES('14:49:55'); INSERT INTO t1 VALUES('12:40:23'); INSERT INTO t1 VALUES('14:00:00'); COMMIT; Normal Full Outer Join => SELECT * FROM t FULL OUTER JOIN t1 ON t.x = t1.y; Notice the null rows from the non-preserved table: x | y ----------+---------12:40:23 | 12:40:23 14:40:25 | 14:45:00 | 14:49:55 | | 14:00:00 (5 rows) Full Outer Join with Interpolation => SELECT * FROM t FULL OUTER JOIN t1 ON t.x INTERPOLATE PREVIOUS VALUE t1.y; In this case, the rows with no entry point are padded with values from the previous row. x | y ----------+---------12:40:23 | 12:40:23 12:40:23 | 14:00:00 14:40:25 | 14:00:00 14:45:00 | 14:00:00 14:49:55 | 14:00:00 HP Vertica Analytic Database (7.0.x) Page 97 of 1539 SQL Reference Manual SQL Language Elements (5 rows) Normal Left Outer Join => SELECT * FROM t LEFT OUTER JOIN t1 ON t.x = t1.y; Again, there are nulls in the non-preserved table x | y ----------+---------12:40:23 | 12:40:23 14:40:25 | 14:45:00 | 14:49:55 | (4 rows) Left Outer Join with Interpolation => SELECT * FROM t LEFT OUTER JOIN t1 ON t.x INTERPOLATE PREVIOUS VALUE t1.y; Nulls padded with interpolated values. x | y ----------+---------12:40:23 | 12:40:23 14:40:25 | 14:00:00 14:45:00 | 14:00:00 14:49:55 | 14:00:00 (4 rows) Inner Joins For inner joins, there is no difference between a regular inner join and an event series inner join. Since null values are eliminated from the result set, there is nothing to interpolate. A regular inner join returns only the single matching row at 12:40:23: => SELECT * FROM t INNER JOIN t1 ON t.x = t1.y; x | y ----------+---------12:40:23 | 12:40:23 (1 row) An event series inner join finds the same single-matching row at 12:40:23: HP Vertica Analytic Database (7.0.x) Page 98 of 1539 SQL Reference Manual SQL Language Elements => SELECT * FROM t INNER JOIN t1 ON t.x INTERPOLATE PREVIOUS VALUE t1.y; x | y ----------+---------12:40:23 | 12:40:23 (1 row) Semantics When you write an event series join in place of normal join, values are evaluated as follows (using the schema in the above examples): l t is the outer, preserved table l t1 is the inner, non-preserved table l For each row in outer table t, the ON clause predicates are evaluated for each combination of each row in the inner table t1. l If the ON clause predicates evaluate to true for any combination of rows, those combination rows are produced at the output. l If the ON clause is false for all combinations, a single output row is produced with the values of the row from t along with the columns of t1 chosen from the row in t1 with the greatest t1.y value such that t1.y < t.x; If no such row is found, pad with nulls. Note: t LEFT OUTER JOIN t1 is equivalent to t1 RIGHT OUTER JOIN t. In the case of a full outer join, all values from both tables are preserved. See Also l Join-Predicate Combines records from two or more tables in a database. Syntax column-reference = column-reference Parameters column-reference Refers to a column of one the tables specified in the FROM Clause. HP Vertica Analytic Database (7.0.x) Page 99 of 1539 SQL Reference Manual SQL Language Elements LIKE-predicate Retrieves rows where the string value of a column matches a specified pattern. The pattern can contain one or more wildcard characters. 
ILIKE is equivalent to LIKE except that the match is caseinsensitive (non-standard extension). Syntax string [ NOT ]{ LIKE | ILIKE | LIKEB | ILIKEB } ... pattern [ESCAPE 'escape-character' ] Parameters string (CHAR, VARCHAR, BINARY, VARBINARY) is the column value to be compared to the pattern. NOT Returns true if LIKE returns false, and the reverse; equivalent to NOT string LIKE pattern. pattern Specifies a string containing wildcard characters. ESCAPE l Underscore (_) matches any single character. l Percent sign (%) matches any string of zero or more characters. Specifies an escape-character. An ESCAPE character can be used to escape itself, underscore (_), and % only. This is enforced only for non-default collations. To match the ESCAPE character itself, use two consecutive escape characters. The default ESCAPE character is the backslash (\) character, although standard SQL specifies no default ESCAPE character. ESCAPE works for CHAR and VARCHAR strings only. escape-character Causes character to be treated as a literal, rather than a wildcard, when preceding an underscore or percent sign character in the pattern. Notes l The LIKE predicate is compliant with the SQL standard. l In the default locale, LIKE and ILIKE handle UTF-8 character-at-a-time, locale-insensitive comparisons. ILIKE handles language-independent case-folding. Note: In non-default locales, LIKE and ILIKE do locale-sensitive string comparisons, HP Vertica Analytic Database (7.0.x) Page 100 of 1539 SQL Reference Manual SQL Language Elements including some automatic normalization, using the same algorithm as the "=" operator on VARCHAR types. l The LIKEB and ILIKEB predicates do byte-at-a-time ASCII comparisons, providing access to HP Vertica 4.0 functionality. l LIKE and ILIKE are stable for character strings, but immutable for binary strings, while LIKEB and ILIKEB are both immutable. l For collation=binary settings, the behavior is similar to HP Vertica 4.0. For other collations, LIKE operates on UTF-8 character strings, with the exact behavior dependent on collation parameters, such as strength. In particular, ILIKE works by setting S=2 (ignore case) in the current session locale. See Locale Specification in the Administrator's Guide. l Although the SQL standard specifies no default ESCAPE character, in HP Vertica the default is the backslash (\) and works for CHAR and VARCHAR strings only. Tip: HP recommends that you specify an explicit escape character in all cases, to avoid problems should this behavior change. To use a backslash character as a literal, either specify a different escape character or use two backslashes. l ESCAPE expressions evaluate to exactly one octet—or one UTF-8 character for non-default locales. l An ESCAPE character can be used only to escape itself, _, and %. This is enforced only for nondefault collations. l LIKE requires that the entire string expression match the pattern. To match a sequence of characters anywhere within a string, the pattern must start and end with a percent sign. l The LIKE predicate does not ignore trailing "white space" characters. If the data values that you want to match have unknown numbers of trailing spaces, tabs, etc., terminate each LIKE predicate pattern with the percent sign wildcard character. l To use binary data types, you must use a valid binary character as the escape character, since backslash is not a valid BINARY character. 
l The following symbols are substitutes for the actual keywords: ~~ ~# ~~* ~#* !~~ !~# LIKE LIKEB ILIKE ILIKEB NOT LIKE NOT LIKEB HP Vertica Analytic Database (7.0.x) Page 101 of 1539 SQL Reference Manual SQL Language Elements !~~* !~#* NOT ILIKE NOT IILIKEB The ESCAPE keyword is not valid for the above symbols. l HP Vertica extends support for single-row subqueries as the pattern argument for LIKEB and ILIKEB; for example: SELECT * FROM t1 WHERE t1.x LIKEB (SELECT MAX (t2.a) FROM t2); Querying Case-Sensitive Data in System Tables The V_CATALOG.TABLES.TABLE_SCHEMA and TABLE_NAME columns are case sensitive when used with an equality (=) predicate in queries. For example, given the following sample schema, if you execute a query using the = predicate, HP Vertica returns 0 rows: => CREATE SCHEMA SS; => CREATE TABLE SS.TT (c1 int); => INSERT INTO ss.tt VALUES (1); => SELECT table_schema, table_name FROM v_catalog.tables WHERE table_schema ='ss'; table_schema | table_name --------------+-----------(0 rows) Tip: Use the case-insensitive ILIKE predicate to return the expected results. => SELECT table_schema, table_name FROM v_catalog.tables WHERE table_schema ILIKE 'ss'; table_schema | table_name -------------+-----------SS | TT (1 row) Examples 'abc' LIKE 'abc' true'abc' LIKE 'a%' 'abc' LIKE '_b_' true 'abc' LIKE 'c' false 'abc' LIKE 'ABC' false 'abc' ILIKE 'ABC' true 'abc' not like 'abc' false not 'abc' like 'abc' false true The following example illustrates pattern matching in locales. HP Vertica Analytic Database (7.0.x) Page 102 of 1539 SQL Reference Manual SQL Language Elements \locale default=> CREATE TABLE src(c1 VARCHAR(100)); => INSERT INTO src VALUES (U&'\00DF'); --The sharp s (ß) => INSERT INTO src VALUES ('ss'); => COMMIT; Querying the src table in the default locale returns both ss and sharp s. => SELECT * FROM src; c1 ---ß ss (2 rows) The following query combines pattern-matching predicates to return the results from column c1: => SELECT c1, c1 = 'ss' AS equality, c1 LIKE 'ss' AS LIKE, c1 ILIKE 'ss' AS ILIKE FROM src; c1 | equality | LIKE | ILIKE ----+----------+------+------ß | f | f | f ss | t | t | t (2 rows) The next query specifies unicode format for c1: => SELECT c1, c1 = U&'\00DF' AS equality, c1 LIKE U&'\00DF' AS LIKE, c1 ILIKE U&'\00DF' AS ILIKE from src; c1 | equality | LIKE | ILIKE ----+----------+------+------ß | t | t | t ss | f | f | f (2 rows) Now change the locale to German with a strength of 1 (ignore case and accents): \locale LDE_S1 => SELECT c1, c1 = 'ss' AS equality, c1 LIKE 'ss' as LIKE, c1 ILIKE 'ss' AS ILIKE from src; c1 | equality | LIKE | ILIKE ----+----------+------+------ß | t | t | t ss | t | t | t (2 rows) This example illustrates binary data types with pattern-matching predicates: => CREATE TABLE t (c BINARY(1)); => INSERT INTO t values(HEX_TO_BINARY('0x00')); => INSERT INTO t values(HEX_TO_BINARY('0xFF')); HP Vertica Analytic Database (7.0.x) Page 103 of 1539 SQL Reference Manual SQL Language Elements => SELECT TO_HEX(c) from t; TO_HEX -------00 ff (2 rows) select * from t; c -----\000 \377 (2 rows) => SELECT c, c = '\000', c LIKE '\000', c ILIKE '\000' from t; c | ?column? | ?column? | ?column? ------+----------+----------+---------\000 | t | t | t \377 | f | f | f (2 rows) => SELECT c, c = '\377', c LIKE '\377', c ILIKE '\377' from t; c | ?column? | ?column? | ?column? ------+----------+----------+---------\000 | f | f | f \377 | t | t | t (2 rows) NULL-predicate Tests for null values. 
Syntax value_expression IS [ NOT ] NULL Parameters value_expression A column name, literal, or function. Examples Column name: => SELECT date_key FROM date_dimension WHERE date_key IS NOT NULL; date_key ---------1 366 1462 1097 2 3 HP Vertica Analytic Database (7.0.x) Page 104 of 1539 SQL Reference Manual SQL Language Elements 6 7 8 ... Function: => SELECT MAX(household_id) IS NULL FROM customer_dimension; ?column? ---------f (1 row) Literal: => SELECT 'a' IS NOT NULL; ?column? ---------t (1 row) See Also l NULL Value HP Vertica Analytic Database (7.0.x) Page 105 of 1539 SQL Reference Manual SQL Language Elements HP Vertica Analytic Database (7.0.x) Page 106 of 1539 SQL Reference Manual SQL Data Types SQL Data Types The following tables summarize the data types that HP Vertica supports. It also shows the default placement of null values in projections. The Size column is listed as uncompressed bytes. Size (bytes) Description NULL Sorting BINARY 1 to 65000 Fixed-length binary string NULLS LAST VARBINARY 1 to 65000 Variable-length binary string NULLS LAST LONG VARBINARY 1 to Long variable-length binary string 32,000,000 NULLS LAST BYTEA 1 to 65000 Variable-length binary string (synonym for VARBINARY) NULLS LAST RAW 1 to 65000 Variable-length binary string (synonym for VARBINARY) NULLS LAST 1 True or False or NULL NULLS LAST CHAR 1 to 65000 Fixed-length character string NULLS LAST VARCHAR 1 to 65000 Variable-length character string NULLS LAST LONG VARCHAR 1 to Long variable-length character string 32,000,000 NULLS LAST DATE 8 Represents a month, day, and year NULLS FIRST DATETIME 8 Represents a date and time with or without timezone (synonym for TIMESTAMP) NULLS FIRST SMALLDATETIME 8 Represents a date and time with or without timezone (synonym for TIMESTAMP) NULLS FIRST TIME 8 Represents a time of day without timezone NULLS FIRST TIME WITHTIMEZONE 8 Represents a time of day with timezone NULLS FIRST TIMESTAMP 8 Represents a date and time without timezone NULLS FIRST Type Binary types Boolean types BOOLEAN Character types Date/time types HP Vertica Analytic Database (7.0.x) Page 107 of 1539 SQL Reference Manual SQL Data Types Type Size (bytes) Description NULL Sorting TIMESTAMP WITHTIMEZONE 8 Represents a date and time with timezone NULLS FIRST INTERVAL 8 Measures the difference between two points in time NULLS FIRST Approximate numeric types DOUBLE PRECISION 8 Signed 64-bit IEEE floating point number, requiring 8 bytes of storage NULLS LAST FLOAT 8 Signed 64-bit IEEE floating point number, requiring 8 bytes of storage NULLS LAST FLOAT(n) 8 Signed 64-bit IEEE floating point number, requiring 8 bytes of storage NULLS LAST FLOAT8 8 Signed 64-bit IEEE floating point number, requiring 8 bytes of storage NULLS LAST REAL 8 Signed 64-bit IEEE floating point number, requiring 8 bytes of storage NULLS LAST INTEGER 8 Signed 64-bit integer, requiring 8 bytes of storage NULLS FIRST INT 8 Signed 64-bit integer, requiring 8 bytes of storage NULLS FIRST BIGINT 8 Signed 64-bit integer, requiring 8 bytes of storage NULLS FIRST INT8 8 Signed 64-bit integer, requiring 8 bytes of storage NULLS FIRST SMALLINT 8 Signed 64-bit integer, requiring 8 bytes of storage NULLS FIRST TINYINT 8 Signed 64-bit integer, requiring 8 bytes of storage NULLS FIRST DECIMAL 8+ 8 bytes for the first 18 digits of precision, plus 8 bytes for each additional 19 digits NULLS FIRST NUMERIC 8+ 8 bytes for the first 18 digits of precision, plus 8 bytes for each additional 19 digits NULLS FIRST NUMBER 8+ 8 bytes for the first 18 digits of 
precision, plus 8 bytes for each additional 19 digits NULLS FIRST Exact numeric types HP Vertica Analytic Database (7.0.x) Page 108 of 1539 SQL Reference Manual SQL Data Types Type Size (bytes) MONEY 8+ Description 8 bytes for the first 18 digits of precision, plus 8 bytes for each additional 19 digits NULL Sorting NULLS FIRST Binary Data Types Store raw-byte data, such as IP addresses, up to 65000 bytes. Syntax BINARY ( length ){ VARBINARY | BINARY VARYING | BYTEA | RAW } ( max-length ) Parameters length | max-length Specifies the length of the string (column width, declared in bytes (octets), in CREATE TABLE statements). Notes l BYTEA and RAW are synonyms for VARBINARY. l The data types BINARY and BINARY VARYING (VARBINARY) are collectively referred to as binary string types and the values of binary string types are referred to as binary strings. l A binary string is a sequence of octets, or bytes. Binary strings store raw-byte data, while character strings store text. l A binary value value of NULL appears last (largest) in ascending order. l The binary data types, BINARY and VARBINARY, are similar to the Character Data Types, CHAR and VARCHAR, respectively, except that binary data types contain byte strings, rather than character strings. l BINARY—A fixed-width string of length bytes, where the number of bytes is declared as an optional specifier to the type. If length is omitted, the default is 1. Where necessary, values are right-extended to the full width of the column with the zero byte. For example: => SELECT TO_HEX('ab'::BINARY(4)); to_hex ---------61620000 l VARBINARY—A variable-width string up to a length of max-length bytes, where the maximum number of bytes is declared as an optional specifier to the type. The default is the default HP Vertica Analytic Database (7.0.x) Page 109 of 1539 SQL Reference Manual SQL Data Types attribute size, which is 80, and the maximum length is 65000 bytes. VARBINARY values are not extended to the full width of the column. For example: => SELECT TO_HEX('ab'::VARBINARY(4)); to_hex -------6162 l You can use several formats when working with binary values, but the hexadecimal format is generally the most straightforward and is emphasized in HP Vertica documentation. l Binary operands &, ~, | and # have special behavior for binary data types, as described in Binary Operators. l On input, strings are translated from: n Hexadecimal representation to a binary value using the HEX_TO_BINARY function n Bitstring representation to a binary value using the BITSTRING_TO_BINARY function. Both functions take a VARCHAR argument and return a VARBINARY value. See the Examples section below. l Binary values can also be represented in octal format by prefixing the value with a backslash '\'. Note: If you use vsql, you must use the escape character (\) when you insert another backslash on input; for example, input '\141' as '\\141'. You can also input values represented by printable characters. For example, the hexadecimal value '0x61' can also be represented by the symbol '. See Bulk Loading Data in the Administrator's Guide. l Like the input format the output format is a hybrid of octal codes and printable ASCII characters. A byte in the range of printable ASCII characters (the range [0x20, 0x7e]) is represented by the corresponding ASCII character, with the exception of the backslash ('\'), which is escaped as '\\'. All other byte values are represented by their corresponding octal values. 
For example, the bytes {97,92,98,99}, which in ASCII are {a,\,b,c}, are translated to text as 'a\\bc'.

- The following aggregate functions are supported for binary data types: BIT_AND, BIT_OR, BIT_XOR, MAX, and MIN. BIT_AND, BIT_OR, and BIT_XOR are bitwise operations that are applied to each non-null value in a group, while MAX and MIN are bytewise comparisons of binary values.

- Like their binary operator counterparts, if the values in a group vary in length, the aggregate functions treat the values as though they are all equal in length by extending shorter values with zero bytes to the full width of the column. For example, given a group containing the values 'ff', null, and 'f', a binary aggregate ignores the null value and treats the value 'f' as 'f0'. Also, like their binary operator counterparts, these aggregate functions operate on VARBINARY types explicitly and operate on BINARY types implicitly through casts. See Data Type Coercion Operators (CAST).

Examples

The following example shows VARBINARY HEX_TO_BINARY(VARCHAR) and VARCHAR TO_HEX(VARBINARY) usage.

Table t and its projection are created with binary columns:

=> CREATE TABLE t (c BINARY(1));
=> CREATE PROJECTION t_p (c) AS SELECT c FROM t;

Insert minimum byte and maximum byte values:

=> INSERT INTO t values(HEX_TO_BINARY('0x00'));
=> INSERT INTO t values(HEX_TO_BINARY('0xFF'));

Binary values can then be formatted in hex on output using the TO_HEX function:

=> SELECT TO_HEX(c) FROM t;
 to_hex
--------
 00
 ff
(2 rows)

The BIT_AND, BIT_OR, and BIT_XOR functions are interesting when operating on a group of values. This example uses the following schema, which creates table t with a single column of VARBINARY data type:

=> CREATE TABLE t ( c VARBINARY(2) );
=> INSERT INTO t values(HEX_TO_BINARY('0xFF00'));
=> INSERT INTO t values(HEX_TO_BINARY('0xFFFF'));
=> INSERT INTO t values(HEX_TO_BINARY('0xF00F'));

Query table t to see column c output:

=> SELECT TO_HEX(c) FROM t;
 TO_HEX
--------
 ff00
 ffff
 f00f
(3 rows)

Now issue the bitwise AND operation. Because these are aggregate functions, an implicit GROUP BY operation is performed on the results using (ff00&(ffff)&f00f):

=> SELECT TO_HEX(BIT_AND(c)) FROM t;
 TO_HEX
--------
 f000
(1 row)

Issue the bitwise OR operation on (ff00|(ffff)|f00f):

=> SELECT TO_HEX(BIT_OR(c)) FROM t;
 TO_HEX
--------
 ffff
(1 row)

Issue the bitwise XOR operation on (ff00#(ffff)#f00f):

=> SELECT TO_HEX(BIT_XOR(c)) FROM t;
 TO_HEX
--------
 f0f0
(1 row)

See Also

BIT_AND, BIT_OR, BIT_XOR, MAX [Aggregate], MIN [Aggregate], Binary Operators, COPY, Data Type Coercion Operators (CAST), INET_ATON, INET_NTOA, V6_ATON, V6_NTOA, V6_SUBNETA, V6_SUBNETN, V6_TYPE, BITCOUNT, BITSTRING_TO_BINARY, HEX_TO_BINARY, LENGTH, REPEAT, SUBSTRING, TO_HEX, TO_BITSTRING

Boolean Data Type

HP Vertica provides the standard SQL type BOOLEAN, which has two states: true and false. The third state in SQL boolean logic is unknown, which is represented by the NULL value.
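A minimal sketch of three-valued logic (the IS NULL test is used only so that the query returns a definite value rather than a blank NULL display):

=> SELECT (TRUE AND NULL) IS NULL;
 ?column?
----------
 t
(1 row)

Because the AND of TRUE and an unknown value cannot be determined, the expression itself evaluates to the unknown (NULL) state.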
Syntax BOOLEAN Parameters Valid literal data values for input are: TRUE 't' 'true' 'y' 'yes' '1' 1 FALSE 'f' 'false' 'n' 'no' '0' 0 HP Vertica Analytic Database (7.0.x) Page 113 of 1539 SQL Reference Manual SQL Data Types Notes l Do not confuse the BOOLEAN data type with Boolean Operators or the Boolean-Predicate. l The keywords TRUE and FALSE are preferred and are SQL-compliant. l A Boolean value of NULL appears last (largest) in ascending order. l All other values must be enclosed in single quotes. l Boolean values are output using the letters t and f. See Also l NULL Value l Data Type Coercion Chart Character Data Types Stores strings of letters, numbers, and symbols. Character data can be stored as fixed-length or variable-length strings. Fixed-length strings are right-extended with spaces on output; variable-length strings are not extended. Syntax [ CHARACTER | CHAR ] ( octet_length )[ VARCHAR | CHARACTER VARYING ] ( octet_length ) Parameters octet_length Specifies the length of the string (column width, declared in bytes (octets), in CREATE TABLE statements). Notes l The data types CHARACTER (CHAR) and CHARACTER VARYING (VARCHAR) are collectively referred to as character string types, and the values of character string types are known as character strings. l CHAR is conceptually a fixed-length, blank-padded string. Any trailing blanks (spaces) are removed on input, and only restored on output. The default length is 1, and the maximum length is 65000 octets (bytes). HP Vertica Analytic Database (7.0.x) Page 114 of 1539 SQL Reference Manual SQL Data Types l VARCHAR is a variable-length character data type. The default length is 80, and the maximum length is 65000 octets. Values can include trailing spaces. l When you define character columns, specify the maximum size of any string to be stored in a column. For example, to store strings up to 24 octets in length, use either of the following definitions: CHAR(24) l /* fixed-length */VARCHAR(24) /* variable-length */ The maximum length parameter for VARCHAR and CHAR data type refers to the number of octets that can be stored in that field, not the number of characters (Unicode code points). When using multibyte UTF-8 characters, the fields must be sized to accommodate from 1 to 4 octets per character, depending on the data. If the data loaded into a VARCHAR/CHAR column exceeds the specified maximum size for that column, data is truncated on UTF-8 character boundaries to fit within the specified size. See COPY. Note: Remember to include the extra octets required for multibyte characters in the columnwidth declaration, keeping in mind the 65000 octet column-width limit. l String literals in SQL statements must be enclosed in single quotes. l Due to compression in HP Vertica, the cost of overestimating the length of these fields is incurred primarily at load time and during sorts. l NULL appears last (largest) in ascending order. See also GROUP BY Clause for additional information about NULL ordering. The Difference Between NULL and NUL NUL represents a character whose ASCII/Unicode code is 0, sometimes qualified "ASCII NUL". NULL means no value, and is true of a field (column) or constant, not of a character. CHAR, LONG VARCHAR, and VARCHAR string data types accept ASCII NULs. 
The following example casts the input string containing NUL values to VARCHAR:

=> SELECT 'vert\0ica'::CHARACTER VARYING AS VARCHAR;
  VARCHAR
-----------
 vert\0ica
(1 row)

The result contains 9 characters:

=> SELECT LENGTH('vert\0ica'::CHARACTER VARYING);
 length
--------
      9
(1 row)

If you use an extended string literal, the length is 8 characters:

=> SELECT E'vert\0ica'::CHARACTER VARYING AS VARCHAR;
 VARCHAR
---------
 vertica
(1 row)

=> SELECT LENGTH(E'vert\0ica'::CHARACTER VARYING);
 LENGTH
--------
      8
(1 row)

See Also

Data Type Coercion

Date/Time Data Types

HP Vertica supports the full set of SQL date and time data types. In most cases, a combination of DATE, DATETIME, SMALLDATETIME, TIME, TIMESTAMP WITHOUT TIME ZONE, TIMESTAMP WITH TIME ZONE, and INTERVAL provides a complete range of date/time functionality required by any application. In compliance with the SQL standard, HP Vertica also supports the TIME WITH TIME ZONE data type.

The following table lists the characteristics of the date/time data types. All of these data types have a size of 8 bytes.

Name | Description | Low Value | High Value | Resolution
DATE | Dates only (no time of day) | ~ 25e+15 BC | ~ 25e+15 AD | 1 day
TIME [(p)] | Time of day only (no date) | 00:00:00.00 | 23:59:60.999999 | 1 µs
TIMETZ [(p)] | Time of day only, with time zone | 00:00:00.00+14 | 23:59:59.999999-14 | 1 µs
TIMESTAMP [(p)] | Both date and time, without time zone | 290279-12-22 19:59:05.224194 BC | 294277-01-09 04:00:54.775806 AD | 1 µs
TIMESTAMPTZ [(p)] | Both date and time, with time zone | 290279-12-22 19:59:05.224194 BC UTC | 294277-01-09 04:00:54.775806 AD UTC | 1 µs
INTERVAL [(p)] DAY TO SECOND | Time intervals | -106751991 days 04:00:54.775807 | +106751991 days 04:00:54.775807 | 1 µs
INTERVAL [(p)] YEAR TO MONTH | Time intervals | ~ -768e15 yrs | ~ 768e15 yrs | 1 month

Time Zone Abbreviations for Input

HP Vertica recognizes the files in /opt/vertica/share/timezonesets as date/time input values and defines the default list of strings accepted in the AT TIME ZONE zone parameter. The names are not necessarily used for date/time output; output is driven by the official time zone abbreviations associated with the currently selected time zone parameter setting.

Notes

- In HP Vertica, TIME ZONE is a synonym for TIMEZONE.
- HP Vertica uses Julian dates for all date/time calculations, which can correctly predict and calculate any date more recent than 4713 BC to far into the future, based on the assumption that the average length of the year is 365.2425 days.
- All date/time types are stored in eight bytes.
- A date/time value of NULL appears first (smallest) in ascending order.
- All the date/time data types accept the special literal value NOW to specify the current date and time. For example:

  => SELECT TIMESTAMP 'NOW';
            ?column?
  ----------------------------
   2012-03-13 11:42:22.766989
  (1 row)

- In HP Vertica, the INTERVAL data type is SQL:2008 compliant and allows modifiers, called interval qualifiers, that divide the INTERVAL type into two primary subtypes, DAY TO SECOND (the default) and YEAR TO MONTH. You use the SET INTERVALSTYLE command to change the intervalstyle run-time parameter for the current session.
Intervals are represented internally as some number of microseconds and printed as up to 60 seconds, 60 minutes, 24 hours, 30 days, 12 months, and as many years as necessary. Fields can be positive or negative. See Also l TZ Environment Variable l Using Time Zones With HP Vertica l Sources for Time Zone and Daylight Saving Time Data DATE Consists of a month, day, and year. Syntax DATE HP Vertica Analytic Database (7.0.x) Page 118 of 1539 SQL Reference Manual SQL Data Types Parameters/Limits Low Value High Value Resolution ~ 25e+15 BC ~ 25e+15 AD 1 DAY See SET DATESTYLE for information about ordering. Example Description January 8, 1999 Unambiguous in any datestyle input mode 1999-01-08 ISO 8601; January 8 in any mode (recommended format) 1/8/1999 January 8 in MDY mode; August 1 in DMY mode 1/18/1999 January 18 in MDY mode; rejected in other modes 01/02/03 January 2, 2003 in MDY mode February 1, 2003 in DMY mode February 3, 2001 in YMD mode 1999-Jan-08 January 8 in any mode Jan-08-1999 January 8 in any mode 08-Jan-1999 January 8 in any mode 99-Jan-08 January 8 in YMD mode, else error 08-Jan-99 January 8, except error in YMD mode Jan-08-99 January 8, except error in YMD mode 19990108 ISO 8601; January 8, 1999 in any mode 990108 ISO 8601; January 8, 1999 in any mode 1999.008 Year and day of year J2451187 Julian day January 8, 99 BC Year 99 before the Common Era DATETIME DATETIME is an alias for TIMESTAMP. HP Vertica Analytic Database (7.0.x) Page 119 of 1539 SQL Reference Manual SQL Data Types INTERVAL Measures the difference between two points in time. The INTERVAL data type is divided into two major subtypes: l DAY TO SECOND (day/time, in microseconds) l YEAR TO MONTH (year/month, in months) A day/time interval represents a span of days, hours, minutes, seconds, and fractional seconds. A year/month interval represents a span of years and months. Intervals can be positive or negative. Syntax INTERVAL [ (p) ] [ - ] 'Interval-Literal' [ Interval-Qualifier ] Parameters (p) [Optional] Specifies the precision for the number of digits retained in the seconds field. Enter the precision value in parentheses (). The interval precision can range from 0 to 6. The default is 6. - [Optional] Indicates a negative interval. 'interval-literal' Indicates a literal character string expressing a specific interval. interval-qualifier [Optional] Specifies a range of interval subtypes with optional precision specifications. If omitted, the default is DAY TO SECOND(6). Sometimes referred to as subtype in this topic. Within the single quotes of an interval-literal, units can be plural, but outside the quotes, the interval-qualifier must be singular. Limits Name Low Value High Value Resolution INTERVAL [(p)] DAY TO SECOND –106751991 days +/–106751991 days 1 microsecond 04:00:54.775807 04:00:54.775807 ~/ –768e15 yrs ~ 768e15 yrs INTERVAL [(p)] YEAR TO MONTH 1 month Displaying or Omitting Interval Units in Output To display or omit interval units from the output of a SELECT INTERVAL query, use the INTERVALSTYLE and DATESTYLE settings. These settings affect only the interval output format, not HP Vertica Analytic Database (7.0.x) Page 120 of 1539 SQL Reference Manual SQL Data Types the interval input format. To omit interval units from the output, set INTERVALSTYLE to PLAIN. This is the default value, and it follows the SQL:2008 standard (ISO): => SET INTERVALSTYLE TO PLAIN; SET => SELECT INTERVAL '3 2'; ?column? 
---------3 02:00 When INTERVALSTYLE is set to PLAIN, units are omitted from the output, even if you specify the units in the query: => SELECT INTERVAL '3 days 2 hours'; ?column? ---------3 02:00 To display interval units in the output, set INTERVALSTYLE to UNITS: => SET INTERVALSTYLE TO UNITS; SET => SELECT INTERVAL '3 2'; ?column? ---------------3 days 2 hours When INTERVALSTYLE is set to UNITS to display units in the result, the DATESTYLE setting controls the format of the units in the output. If you set DATESTYLE to SQL, interval units are omitted from the output, even if you set INTERVALSTYLE to UNITS: => SET INTERVALSTYLE TO UNITS; SET => SET DATESTYLE TO SQL; SET => SELECT INTERVAL '3 2'; ?column? ---------3 02:00 To display interval units on output, set DATESTYLE to ISO: => SET INTERVALSTYLE TO UNITS; SET => SET DATESTYLE TO ISO; SET => SELECT INTERVAL '3 2'; ?column? HP Vertica Analytic Database (7.0.x) Page 121 of 1539 SQL Reference Manual SQL Data Types ---------------3 days 2 hours To check the INTERVALSTYLE or DATESTYLE setting, use the SHOW command: => SHOW INTERVALSTYLE; name | setting ---------------+--------intervalstyle | units => SHOW DATESTYLE; name | setting -----------+---------datestyle | ISO, MDY Specifying Units on Input You can specify interval units in the interval-literal: => SELECT INTERVAL '3 days 2 hours'; ?column? -------------3 days 2 hours The following command uses the same interval-literal as the previous example, but specifies a MINUTE interval-qualifier to so that the results are displayed only in minutes: => SELECT INTERVAL '3 days 2 hours' MINUTE; ?column? ----------4440 mins HP Vertica allows combinations of units in the interval-qualifier, as in the next three examples: => SELECT INTERVAL '1 second 1 millisecond' DAY TO SECOND; ?column? -------------1.001 secs => SELECT INTERVAL '28 days 3 hours 65 min' HOUR TO MINUTE; ?column? ----------676 hours 5 mins Units less than a month are not valid for YEAR TO MONTH interval-qualifiers: => SELECT INTERVAL '1 Y 30 DAYS' YEAR TO MONTH; ERROR: invalid input syntax for type interval year to month: "1 Y 30 DAYS" If you replace DAYS in the interval-literal with M to represent months, HP Vertica returns the correct information of 1 year, 3 months: HP Vertica Analytic Database (7.0.x) Page 122 of 1539 SQL Reference Manual SQL Data Types => SELECT INTERVAL '1 Y 3 M' YEAR TO MONTH; ?column? ---------1 year 3 months in the previous example, M was used as the interval-literal, representing months. If you specify a DAY TO SECOND interval-qualifier, HP Vertica knows that M represents minutes, as in the following example: => SELECT INTERVAL '1 D 3 M' DAY TO SECOND; ?column? ---------1 day 3 mins The next two examples use units in the input to return microseconds: => SELECT INTERVAL '4:5 1 2 34us'; ?column? ------------------1 day 04:05:02.000034 => SELECT INTERVAL '4:5 1d 2 34us' HOUR TO SECOND; ?column? ----------------28 hours 5 mins 2.000034 secs How the Interval-Qualifier Affects Output Units The interval-qualifier specifies a range of interval subtypes to apply to the interval-literal. You can also specify the precision in the interval-qualifier. If an interval-qualifier is not specified, the default subtype is DAY TO SECOND(6), regardless of what is inside the quotes. For example, as an extension to SQL:2008, both of the following commands return 910 days: => SELECT INTERVAL '2-6' ; ?column? ----------------910 days => SELECT INTERVAL '2 years 6 months'; ?column? 
----------------910 days However, if you change the interval-qualifier to YEAR TO MONTH, you get the following results: => SELECT INTERVAL '2 years 6 months' YEAR TO MONTH; ?column? ----------------2 years 6 months HP Vertica Analytic Database (7.0.x) Page 123 of 1539 SQL Reference Manual SQL Data Types An interval-qualifier can extract other values from the input parameters. For example, the following command extracts the HOUR value from the input parameters: => SELECT INTERVAL '3 days 2 hours' HOUR; ?column? ---------74 hours When specifying intervals that use subtype YEAR TO MONTH, the returned value is kept as months: => SELECT INTERVAL '2 years 6 months' YEAR TO MONTH; ?column? ----------2 years 6 months The primary day/time (DAY TO SECOND) and year/month (YEAR TO MONTH) subtype ranges can be restricted to more specific range of types by an interval-qualifier. For example, HOUR TO MINUTE is a limited form of day/time interval, which can be used to express time zone offsets. => SELECT INTERVAL '1 3' HOUR to MINUTE; ?column? --------------01:03 The formats hh:mm:ss and hh:mm are used only when at least two of the fields specified in the interval-qualifier are non-zero and there are no more than 23 hours or 59 minutes: => SELECT INTERVAL '2 days 12 hours 15 mins' DAY TO MINUTE; ?column? -------------2 days 12:15 => SELECT INTERVAL '15 mins 20 sec' MINUTE TO SECOND; ?column? ---------00:15:20 => SELECT INTERVAL '1 hour 15 mins 20 sec' MINUTE TO SECOND; ?column? ----------------75 mins 20 secs Specifying Precision SQL:2008 allows you to specify precision for the interval output by entering the precision value in parentheses after the INTERVAL keyword or the interval-qualifier. HP Vertica rounds the input to the number of decimal places specified. SECOND(2) and SECOND (2) produce the same result: If you specify two different precisions, HP Vertica picks the lesser of the two: => SELECT INTERVAL(1) '1.2467' SECOND(2); ?column? HP Vertica Analytic Database (7.0.x) Page 124 of 1539 SQL Reference Manual SQL Data Types ---------1.2 secs When you specify a precision inside an interval-literal, HP Vertica processes the precision by removing the parentheses. In this example, (3) is processed as 3 minutes, the first omitted field: => SELECT INTERVAL '28 days 3 hours 1.234567 second(3)'; ?column? -------------------28 days 03:03:01.234567 The following command specifies that the day field can hold 4 digits, the hour field 2 digits, the minutes field 2 digits, the seconds field 2 digits, and the fractional seconds field 6 digits: => SELECT INTERVAL '1000 12:00:01.123456' DAY(4) TO SECOND(6); ?column? --------------------------1000 days 12:00:01.123456 AN HP Vertica extension lets you specify the seconds precision on the INTERVAL keyword. The result is the same: => SELECT INTERVAL(6) '1000 12:00:01.123456' DAY(4) TO SECOND; 1000 days 12:00:01.123456 Casting with Intervals You can cast a string to an interval: => SELECT CAST('3700 sec' AS INTERVAL); ?column? ---------01:01:40 You can cast an interval to a string: => SELECT CAST((SELECT INTERVAL '3700 seconds') AS VARCHAR(20)); ?column? ---------01:01:40 You can cast intervals within the day/time or the year/month subtypes but not between them. Use CAST to convert interval types: => SELECT CAST(INTERVAL '4440' MINUTE as INTERVAL); ?column? ---------3 days 2 hours HP Vertica Analytic Database (7.0.x) Page 125 of 1539 SQL Reference Manual SQL Data Types => SELECT CAST(INTERVAL -'01:15' as INTERVAL MINUTE); ?column? 
----------75 mins Processing Signed Intervals In the SQL:2008 standard, a minus sign before an interval-literal or as the first character of the interval-literal negates the entire literal, not just the first component. In HP Vertica, a leading minus sign negates the entire interval, not just the first component. The following commands both return the same value: => SELECT INTERVAL '-1 month - 1 second'; ?column? ----------29 days 23:59:59 => SELECT INTERVAL -'1 month - 1 second'; ?column? ----------29 days 23:59:59 Use one of the following commands instead to return the intended result: => SELECT INTERVAL -'1 month 1 second'; ?column? ----------30 days 1 sec => SELECT INTERVAL -'30 00:00:01'; ?column? ----------30 days 1 sec Two negatives together return a positive: => SELECT INTERVAL -'-1 month - 1 second'; ?column? ---------29 days 23:59:59 => SELECT INTERVAL -'-1 month 1 second'; ?column? ---------30 days 1 sec You can use the year-month syntax with no spaces. HP Vertica allows the input of negative months but requires two negatives when paired with years. => SELECT INTERVAL '3-3' YEAR TO MONTH; ?column? ---------- HP Vertica Analytic Database (7.0.x) Page 126 of 1539 SQL Reference Manual SQL Data Types 3 years 3 months => SELECT INTERVAL '3--3' YEAR TO MONTH; ?column? ---------2 years 9 months When the interval-literal looks like a year/month type, but the type is day/second, or vice versa, HP Vertica reads the interval-literal from left to right, where number-number is years-months, and number is whatever the units specify. HP Vertica processes the following command as (–) 1 year 1 month = (–) 365 + 30 = –395 days: => SELECT INTERVAL '-1-1' DAY TO HOUR; ?column? ----------395 days If you insert a space in the interval-literal, HP Vertica processes it based on the subtype DAY TO HOUR: (–) 1 day – 1 hour = (–) 24 – 1 = –23 hours: => SELECT INTERVAL '-1 -1' DAY TO HOUR; ?column? ----------23 hours Two negatives together returns a positive, so HP Vertica processes the following command as (–) 1 year – 1 month = (–) 365 – 30 = –335 days: => SELECT INTERVAL '-1--1' DAY TO HOUR; ?column? ----------335 days If you omit the value after the hyphen, HP Vertica assumes 0 months and processes the following command as 1 year 0 month –1 day = 365 + 0 – 1 = –364 days: => SELECT INTERVAL '1- -1' DAY TO HOUR; ?column? ---------364 days Processing Interval-Literals Without Units You can specify quantities of days, hours, minutes, and seconds without explicit units. HP Vertica recognizes colons in interval-literals as part of the timestamp: => SELECT INTERVAL '1 4 5 6'; ?column? ------------ HP Vertica Analytic Database (7.0.x) Page 127 of 1539 SQL Reference Manual SQL Data Types 1 day 04:05:06 => SELECT INTERVAL '1 4:5:6'; ?column? -----------1 day 04:05:06 => SELECT INTERVAL '1 day 4 hour 5 min 6 sec'; ?column? -----------1 day 04:05:06 If HP Vertica cannot determine the units, it applies the quantity to any missing units based on the interval-qualifier. In the next two examples, HP Vertica uses the default interval-qualifier (DAY TO SECOND(6)) and assigns the trailing 1 to days, since it has already processed hours, minutes, and seconds in the output: => SELECT INTERVAL '4:5:6 1'; ?column? -----------1 day 04:05:06 => SELECT INTERVAL '1 4:5:6'; ?column? -----------1 day 04:05:06 In the next two examples, HP Vertica recognizes 4:5 as hours:minutes. 
The remaining values in the interval-literal are assigned to the missing units; 1 is assigned to days and 2 is assigned to seconds: SELECT INTERVAL '4:5 1 2'; ?column? -----------1 day 04:05:02 => SELECT INTERVAL '1 4:5 2'; ?column? -----------1 day 04:05:02 Specifying the interval-qualifier can change how HP Vertica interprets 4:5: => SELECT INTERVAL '4:5' MINUTE TO SECOND; ?column? -----------00:04:05 Using INTERVALYM for INTERVAL YEAR TO MONTH INTERVALYM is an alias for the INTERVAL YEAR TO MONTH subtypes and is used only on input: => SELECT INTERVALYM '1 2'; ?column? HP Vertica Analytic Database (7.0.x) Page 128 of 1539 SQL Reference Manual SQL Data Types -----------1 year 2 months Operations with Intervals If you divide an interval by an interval, you get a FLOAT: => SELECT INTERVAL '28 days 3 hours' HOUR(4) / INTERVAL '27 days 3 hours' HOUR(4); ?column? -----------1.036866359447 An INTERVAL divided by FLOAT returns an INTERVAL: => SELECT INTERVAL '3' MINUTE / 1.5; ?column? -----------2 mins INTERVAL MODULO (remainder) INTERVAL returns an INTERVAL: => SELECT INTERVAL '28 days 3 hours' HOUR % INTERVAL '27 days 3 hours' HOUR; ?column? -----------24 hours If you add INTERVAL and TIME, the result is TIME, modulo 24 hours: => SELECT INTERVAL '1' HOUR + TIME '1:30'; ?column? -----------02:30:00 Fractional Seconds in Interval Units HP Vertica supports intervals in milliseconds (hh:mm:ss:ms), where 01:02:03:25 represents 1 hour, 2 minutes, 3 seconds, and 025 milliseconds. Milliseconds are converted to fractional seconds as in the following example, which returns 1 day, 2 hours, 3 minutes, 4 seconds, and 25.5 milliseconds: => SELECT INTERVAL '1 02:03:04:25.5'; ?column? -----------1 day 02:03:04.0255 HP Vertica allows fractional minutes. The fractional minutes are rounded into seconds: HP Vertica Analytic Database (7.0.x) Page 129 of 1539 SQL Reference Manual SQL Data Types => SELECT INTERVAL '10.5 minutes'; ?column? -----------00:10:30 => select interval '10.659 minutes'; ?column? ------------00:10:39.54 => select interval '10.3333333333333 minutes'; ?column? ---------00:10:20 Notes l The HP Vertica INTERVAL data type is SQL:2008 compliant, with extensions. On HP Vertica databases created prior to version 4.0, all INTERVAL columns are interpreted as INTERVAL DAY TO SECOND, as in the previous releases. l An INTERVAL can include only the subset of units that you need; however, year/month intervals represent calendar years and months with no fixed number of days, so year/month interval values cannot include days, hours, minutes. When year/month values are specified for day/time intervals, the intervals extension assumes 30 days per month and 365 days per year. Since the length of a given month or year varies, day/time intervals are never output as months or years, only as days, hours, minutes, and so on. l Day/time and year/month intervals are logically independent and cannot be combined with or compared to each other. In the following example, an interval-literal that contains DAYS cannot be combined with the YEAR TO MONTH type: => SELECT INTERVAL '1 2 3' YEAR TO MONTH; ERROR 3679: Invalid input syntax for interval year to month: "1 2 3" l HP Vertica accepts intervals up to 2^63 – 1 microseconds or months (about 18 digits). l INTERVAL YEAR TO MONTH can be used in an analytic RANGE window when the ORDER BY column type is TIMESTAMP/TIMESTAMP WITH TIMEZONE, or DATE. Using TIME/TIME WITH TIMEZONE are not supported. 
l You can use INTERVAL DAY TO SECOND when the ORDER BY column type is TIMESTAMP/TIMESTAMP WITH TIMEZONE, DATE, and TIME/TIME WITH TIMEZONE. Examples The table in this section contains additional interval examples. The INTERVALSTYLE is set to PLAIN (omitting units on output) for brevity. HP Vertica Analytic Database (7.0.x) Page 130 of 1539 SQL Reference Manual SQL Data Types Note: If you omit the Interval-Qualifier, the interval type defaults to DAY TO SECOND(6). Command Result SELECT INTERVAL '00:2500:00'; 1 17:40 SELECT INTERVAL '2500' MINUTE TO SECOND; 2500 SELECT INTERVAL '2500' MINUTE; 2500 SELECT INTERVAL '28 days 3 hours' HOUR TO SECOND; 675:00 SELECT INTERVAL(3) '28 days 3 hours'; 28 03:00 SELECT INTERVAL(3) '28 days 3 hours 1.234567'; 28 03:01:14.0 74 SELECT INTERVAL(3) '28 days 3 hours 1.234567 sec'; 28 03:00:01.2 35 SELECT INTERVAL(3) '28 days 3.3 hours' HOUR TO SECOND; 675:18 SELECT INTERVAL(3) '28 days 3.35 hours' HOUR TO SECOND; 675:21 SELECT INTERVAL(3) '28 days 3.37 hours' HOUR TO SECOND; 675:22:12 SELECT INTERVAL '1.234567 days' HOUR TO SECOND; 29:37:46.5888 SELECT INTERVAL '1.23456789 days' HOUR TO SECOND; 29:37:46.6656 96 SELECT INTERVAL(3) '1.23456789 days' HOUR TO SECOND; 29:37:46.666 SELECT INTERVAL(3) '1.23456789 days' HOUR TO SECOND(2); 29:37:46.67 SELECT INTERVAL(3) '01:00:01.234567' as "one hour+"; 01:00:01.235 SELECT INTERVAL(3) '01:00:01.234567' = INTERVAL(3) '01:00:01.234567'; t SELECT INTERVAL(3) '01:00:01.234567' = INTERVAL '01:00:01.234567'; f SELECT INTERVAL(3) '01:00:01.234567' = INTERVAL '01:00:01.234567' HOUR TO SE COND(3); t SELECT INTERVAL(3) '01:00:01.234567' = INTERVAL '01:00:01.234567'MINUTE TO S ECOND(3); t SELECT INTERVAL '255 1.1111' MINUTE TO SECOND(3); 255:01.111 SELECT INTERVAL '@ - 5 ago'; 5 SELECT INTERVAL '@ - 5 minutes ago'; 00:05 SELECT INTERVAL '@ 5 minutes ago'; -00:05 SELECT INTERVAL '@ ago -5 minutes'; 00:05 SELECT DATE_PART('month', INTERVAL '2-3' YEAR TO MONTH); 3 SELECT FLOOR((TIMESTAMP '2005-01-17 10:00' - TIMESTAMP '2005-01-01') / INTER VAL '7'); 2 HP Vertica Analytic Database (7.0.x) Page 131 of 1539 SQL Reference Manual SQL Data Types See Also l Interval Values l SET INTERVALSTYLE l SET DATESTYLE l AGE_IN_MONTHS l AGE_IN_YEARS Interval-Literal The following table lists the units allowed for the required interval-literal parameter. Unit Description a Julian year, 365.25 days exactly ago Indicates negative time offset c, cent, century Century centuries Centuries d, day Day days Days dec, decade Decade decades, decs Decades h, hour, hr Hour hours, hrs Hours ka Julian kilo-year, 365250 days exactly m Minute or month for year/month, depending on context. See Notes below this table. microsecond Microsecond microseconds Microseconds mil, millennium Millennium millennia, mils Millennia HP Vertica Analytic Database (7.0.x) Page 132 of 1539 SQL Reference Manual SQL Data Types Unit Description millisecond Millisecond milliseconds Milliseconds min, minute, mm Minute mins, minutes Minutes mon, month Month mons, months Months ms, msec, millisecond Millisecond mseconds, msecs Milliseconds q, qtr, quarter Quarter qtrs, quarters Quarters s, sec, second Second seconds, secs Seconds us, usec Microsecond microseconds, useconds, usecs Microseconds w, week Week weeks Weeks y, year, yr Year years, yrs Years Processing the Input Unit 'm' The input unit 'm' can represent either 'months' or 'minutes,' depending on the context. 
For instance, the following command creates a one-column table with an interval value: => CREATE TABLE int_test(i INTERVAL YEAR TO MONTH); In the first INSERT statement, the values are inserted as 1 year, six months: => INSERT INTO int_test VALUES('1 year 6 months'); The second INSERT statement results in an error from specifying minutes for a YEAR TO MONTH interval. At runtime, the result will be a NULL: HP Vertica Analytic Database (7.0.x) Page 133 of 1539 SQL Reference Manual SQL Data Types => INSERT INTO int_test VALUES('1 year 6 minutes'); ERROR: invalid input syntax for type interval year to month: "1 year 6 minutes" In the third INSERT statement, the 'm' is processed as months (not minutes), because DAY TO SECOND is truncated: => INSERT INTO int_test VALUES('1 year 6 m'); -- the m counts as months The table now contains two identical values, with no minutes: => SELECT * FROM int_test; i ----1 year 6 months 1 year 6 months (2 rows) In the following command, the 'm' counts as minutes, because the DAY TO SECOND interval-qualifier extracts day/time values from the input: => SELECT INTERVAL '1y6m' DAY TO SECOND; ?column? ----------365 days 6 mins (1 row) Interval-Qualifier The following table lists the optional interval qualifiers. Values in INTERVAL fields, other than SECOND, are integers with a default precision of 2 when they are not the first field. You cannot combine day/time and year/month qualifiers. For example, the following intervals are not allowed: l DAY TO YEAR l HOUR TO MONTH Interval Type Day/time intervals Units Valid interval-literal entries DAY Unconstrained. DAY TO HOUR An interval that represents a span of days and hours. DAY TO MINUTE An interval that represents a span of days and minutes. HP Vertica Analytic Database (7.0.x) Page 134 of 1539 SQL Reference Manual SQL Data Types Interval Type Units Valid interval-literal entries DAY TO SECOND (Default) interval that represents a span of days, hours, minutes, seconds, and fractions of a second if subtype unspecified. HOUR Hours within days. HOUR TO MINUTE An interval that represents a span of hours and minutes. HOUR TO SECOND An interval that represents a span of hours and seconds. MINUTE Minutes within hours. MINUTE TO SECOND An interval that represents a span of minutes and seconds. SECOND Seconds within minutes. Note: The SECOND field can have an interval fractional seconds precision, which indicates the number of decimal digits maintained following the decimal point in the SECONDS value. When SECOND is not the first field, it has a precision of 2 places before the decimal point. Year/month MONTH intervals Months within year. YEAR Unconstrained. YEAR TO MONTH An interval that represents a span of years and months. SMALLDATETIME SMALLDATETIME is an alias for TIMESTAMP. HP Vertica Analytic Database (7.0.x) Page 135 of 1539 SQL Reference Manual SQL Data Types TIME Consists of a time of day with or without a time zone. Syntax TIME [ (p) ] [ { WITH | WITHOUT } TIME ZONE ] | TIMETZ [ AT TIME ZONE ] Parameters p (Precision) specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range 0 to 6. WITH TIME ZONE Specifies that valid values must include a time zone WITHOUT TIME ZONE Specifies that valid values do not include a time zone (default). If a time zone is specified in the input it is silently ignored. 
TIMETZ This is the same as TIME WITH TIME ZONE with no precision Limits Name Low Value High Value Resolution TIME [p] 00:00:00.00 23:59:60.999999 1 µs TIME [p] WITH TIME ZONE 00:00:00.00+14 23:59:59.999999-14 1 µs Example Description 04:05:06.789 ISO 8601 04:05:06 ISO 8601 04:05 ISO 8601 040506 ISO 8601 04:05 AM Same as 04:05; AM does not affect value 04:05 PM Same as 16:05; input hour must be <= 12 04:05:06.789-8 ISO 8601 04:05:06-08:00 ISO 8601 HP Vertica Analytic Database (7.0.x) Page 136 of 1539 SQL Reference Manual SQL Data Types Example Description 04:05-08:00 ISO 8601 040506-08 ISO 8601 04:05:06 PST Time zone specified by name Notes l HP Vertica permits coercion from TIME and TIME WITH TIME ZONE types to TIMESTAMP or TIMESTAMP WITH TIME ZONE or INTERVAL (Day to Second). l HP Vertica supports adding milliseconds to a TIME or TIMETZ value. => => => => => CREATE TABLE temp (datecol TIME); INSERT INTO temp VALUES (TIME '12:47:32.62'); INSERT INTO temp VALUES (TIME '12:55:49.123456'); INSERT INTO temp VALUES (TIME '01:08:15.12374578'); SELECT * FROM temp; datecol ----------------12:47:32.62 12:55:49.123456 01:08:15.123746 (3 rows) See Also l Data Type Coercion Chart TIME AT TIME ZONE The TIME AT TIME ZONE construct converts TIMESTAMP and TIMESTAMP WITH ZONE types to different time zones. TIME ZONE is a synonym for TIMEZONE. Both are allowed in HP Vertica syntax. Syntax timestamp AT TIME ZONE zone HP Vertica Analytic Database (7.0.x) Page 137 of 1539 SQL Reference Manual SQL Data Types Parameters timestamp zone TIMESTAMP Converts UTC to local time in given time zone TIMESTAMP WITH TIME ZONE Converts local time in given time zone to UTC TIME WITH TIME ZONE Converts local time across time zones Desired time zone specified either as a text string (for example: 'PST') or as an interval (for example: INTERVAL '-08:00'). In the text case, the available zone names are abbreviations. The files in /opt/vertica/share/timezonesets define the default list of strings accepted in the zone parameter Examples The local time zone is PST8PDT. The first example takes a zone-less timestamp and interprets it as MST time (UTC-7) to produce a UTC timestamp, which is then rotated to PST (UTC-8) for display: => SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST'; timezone -----------------------2001-02-16 22:38:40-05 (1 row) The second example takes a timestamp specified in EST (UTC-5) and converts it to local time in MST (UTC-7): => SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST'; timezone --------------------2001-02-16 18:38:40 (1 row) HP Vertica Analytic Database (7.0.x) Page 138 of 1539 SQL Reference Manual SQL Data Types TIMESTAMP Consists of a date and a time with or without a time zone and with or without a historical epoch (AD or BC). Syntax TIMESTAMP [ (p) ] [ { WITH | WITHOUT } TIME ZONE ] | TIMESTAMPTZ[ AT TIME ZONE ] Parameters p Optional precision value that specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range of p is 0 to 6. WITH TIME ZONE Specifies that valid values must include a time zone. All TIMESTAMP WITH TIME ZONE values are stored internally in UTC. They are converted to local time in the zone specified by the time zone configuration parameter before being displayed to the client. WITHOUT TIME ZONE Specifies that valid values do not include a time zone (default). If a time zone is specified in the input it is silently ignored. 
TIMESTAMPTZ This is the same as TIMESTAMP WITH TIME ZONE.

Limits

In the following table, values are rounded. See Date/Time Data Types for additional detail.

Name | Low Value | High Value | Resolution
TIMESTAMP [ (p) ] [ WITHOUT TIME ZONE ] | 290279 BC | 294277 AD | 1 µs
TIMESTAMP [ (p) ] WITH TIME ZONE | 290279 BC | 294277 AD | 1 µs

Notes

- TIMESTAMP is an alias for DATETIME and SMALLDATETIME.
- Valid input for TIMESTAMP types consists of a concatenation of a date and a time, followed by an optional time zone, followed by an optional AD or BC.
- AD/BC can appear before the time zone, but this is not the preferred ordering.
- The SQL standard differentiates TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP WITH TIME ZONE literals by the existence of a "+" or "-". Hence, according to the standard:
  TIMESTAMP '2004-10-19 10:23:54' is a TIMESTAMP WITHOUT TIME ZONE.
  TIMESTAMP '2004-10-19 10:23:54+02' is a TIMESTAMP WITH TIME ZONE.
  Note: HP Vertica differs from the standard by requiring that TIMESTAMP WITH TIME ZONE literals be explicitly typed: TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'
- If a literal is not explicitly indicated as being of TIMESTAMP WITH TIME ZONE, HP Vertica silently ignores any time zone indication in the literal. That is, the resulting date/time value is derived from the date/time fields in the input value and is not adjusted for time zone. (See the example following these notes.)
- For TIMESTAMP WITH TIME ZONE, the internally stored value is always in UTC. An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, it is assumed to be in the time zone indicated by the system's TIME ZONE parameter, and is converted to UTC using the offset for the TIME ZONE zone.
- When a TIMESTAMP WITH TIME ZONE value is output, it is always converted from UTC to the current TIME ZONE zone and displayed as local time in that zone. To see the time in another time zone, either change TIME ZONE or use the AT TIME ZONE construct.
- Conversions between TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP WITH TIME ZONE normally assume that the TIMESTAMP WITHOUT TIME ZONE value is taken or given as TIME ZONE local time. A different zone reference can be specified for the conversion using AT TIME ZONE.
- TIMESTAMPTZ and TIMETZ are not parallel SQL constructs. TIMESTAMPTZ records a time and date in GMT, converting from the specified TIME ZONE. TIMETZ records the specified time and the specified time zone, in minutes, from GMT.
- The following list represents typical date/time input variations:
  1999-01-08 04:05:06
  1999-01-08 04:05:06 -8:00
  January 8 04:05:06 1999 PST
- HP Vertica supports adding a floating-point value (in days) to a TIMESTAMP or TIMESTAMPTZ value.
- HP Vertica supports adding milliseconds to a TIMESTAMP or TIMESTAMPTZ value.
- In HP Vertica, intervals are represented internally as some number of microseconds and printed as up to 60 seconds, 60 minutes, 24 hours, 30 days, 12 months, and as many years as necessary. Fields are either positive or negative.
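A minimal sketch of the literal-typing rule above. The first result follows directly from the rule that the zone indication is ignored; the second query's output depends on the session's TIME ZONE setting, so it is described rather than shown:

=> SELECT TIMESTAMP '2004-10-19 10:23:54+02';
      ?column?
---------------------
 2004-10-19 10:23:54
(1 row)

=> SELECT TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02';

The second query stores 2004-10-19 08:23:54 UTC internally and displays it converted to the current TIME ZONE zone.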
HP Vertica Analytic Database (7.0.x) Page 140 of 1539 SQL Reference Manual SQL Data Types Examples You can return infinity by specifying 'infinity': => SELECT TIMESTAMP 'infinity'; timestamp ----------infinity (1 row) To use the minimum TIMESTAMP value lower than the minimum rounded value: => SELECT '-infinity'::timestamp; timestamp -----------infinity (1 row) TIMESTAMP/TIMESTAMPTZ has +/-infinity values. AD/BC can be placed almost anywhere within the input string; for example: SELECT TIMESTAMPTZ 'June BC 1, 2000 03:20 PDT'; timestamptz --------------------------2000-06-01 05:20:00-05 BC (1 row) Notice the results are the same if you move the BC after the 1: SELECT TIMESTAMPTZ 'June 1 BC, 2000 03:20 PDT'; timestamptz --------------------------2000-06-01 05:20:00-05 BC (1 row) And the same if you place the BC in front of the year: SELECT TIMESTAMPTZ 'June 1, BC 2000 03:20 PDT'; timestamptz --------------------------2000-06-01 05:20:00-05 BC (1 row); The following example returns the year 45 before the Common Era: => SELECT TIMESTAMP 'April 1, 45 BC'; timestamp -----------------------0045-04-01 00:00:00 BC HP Vertica Analytic Database (7.0.x) Page 141 of 1539 SQL Reference Manual SQL Data Types (1 row) If you omit the BC from the date input string, the system assumes you want the year 45 in the current century: => SELECT TIMESTAMP 'April 1, 45'; timestamp --------------------2045-04-01 00:00:00 (1 row) In the following example, HP Vertica returns results in years, months, and days, whereas other RDBMS might return results in days only: => SELECT TIMESTAMP WITH TIME ZONE '02/02/294276'- TIMESTAMP WITHOUT TIME ZONE '02/20/200 9' AS result; result -----------------------------292266 years 11 mons 12 days (1 row) To specify a specific time zone, add it to the statement, such as the use of 'ACST' in the following example: => SELECT T1 AT TIME ZONE 'ACST', t2 FROM test; timezone | t2 ---------------------+------------2009-01-01 04:00:00 | 02:00:00-07 2009-01-01 01:00:00 | 02:00:00-04 2009-01-01 04:00:00 | 02:00:00-06 You can specify a floating point in days: => SELECT 'NOW'::TIMESTAMPTZ + INTERVAL '1.5 day' AS '1.5 days from now'; 1.5 days from now ------------------------------2009-03-18 21:35:23.633-04 (1 row) The following example illustrates the difference between TIMESTAMPTZ with and without a precision specified: => SELECT TIMESTAMPTZ(3) 'now', TIMESTAMPTZ 'now'; timestamptz timestamptz ----------------------------+------------------------------2009-02-24 11:40:26.177-05 | 2009-02-24 11:40:26.177368-05 (1 row) | The following statement returns an error because the TIMESTAMP is out of range: HP Vertica Analytic Database (7.0.x) Page 142 of 1539 SQL Reference Manual SQL Data Types => SELECT TIMESTAMP '294277-01-09 04:00:54.775808'; ERROR: date/time field value out of range: "294277-01-09 04:00:54.775808" There is no 0 AD, so be careful when you subtract BC years from AD years: => SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); date_part ----------2001 (1 row) The following commands create a table with a TIMESTAMP column that contains milliseconds: CREATE TABLE temp (datecol TIMESTAMP); INSERT INTO temp VALUES (TIMESTAMP '2010-03-25 12:47:32.62'); INSERT INTO temp VALUES (TIMESTAMP '2010-03-25 12:55:49.123456'); INSERT INTO temp VALUES (TIMESTAMP '2010-03-25 01:08:15.12374578'); SELECT * FROM temp; datecol ---------------------------2010-03-25 12:47:32.62 2010-03-25 12:55:49.123456 2010-03-25 01:08:15.123746 (3 rows) Additional Examples Command Result select (timestamp '2005-01-17 
10:00' - timestamp '2005-01-01'); 16 10:10 select (timestamp '2005-01-17 10:00' - timestamp '2005-01-01') / 7; 2 08:17:08.571429 select (timestamp '2005-01-17 10:00' - timestamp '2005-01-01') day; 16 select cast((timestamp '2005-01-17 10:00' - timestamp '2005-01-01') d ay as integer) / 7; 2 select floor((timestamp '2005-01-17 10:00' - timestamp '2005-01-01') / interval '7'); 2 select timestamptz '2009-05-29 15:21:00.456789'; 2009-05-2915:21:00.4 56789-04 select timestamptz '2009-05-28'; 2009-05-2800:00:00-0 4 select timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0528'; 1 15:21:00.456789 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28'); 1 15:21:00.456789 HP Vertica Analytic Database (7.0.x) Page 143 of 1539 SQL Reference Manual SQL Data Types Command Result select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28')(3); 1 15:21:00.457 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28')second; 141660.456789 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28') year; 0 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2007-0 1-01') month; 28 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2007-0 1-01') year; 2 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2007-0 1-01') year to month; 2-4 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28') second(3); 141660.457 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28') minute(3); 2361 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28') minute; 2361 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28') minute to second(3); 2361:00.457 select (timestamptz '2009-05-29 15:21:00.456789'-timestamptz '2009-0 5-28') minute to second; 2361:00.456789 TIMESTAMP AT TIME ZONE The TIMESTAMP AT TIME ZONE (or TIMEZONE) construct converts TIMESTAMP and TIMESTAMP WITH TIMEZONE intervals to different time zones. Note: TIME ZONE is a synonym for TIMEZONE. Both are allowed in HP Vertica syntax. Syntax timestamp AT TIME ZONE zone HP Vertica Analytic Database (7.0.x) Page 144 of 1539 SQL Reference Manual SQL Data Types Parameters timestamp zone TIMESTAMP Converts UTC to local time in the given time zone TIMESTAMP WITH TIME ZONE Converts local time in given time zone to UTC TIME Converts local time. TIME WITH TIME ZONE Converts local time across time zones Specifies the time zone either as a text string, (such as 'America/Chicago') or as an interval (INTERVAL '-08:00'). The preferred way to express a time zone is in the format 'America/Chicago'. For a list of time zone text strings, see TZ Environment Variable in the Installation Guide. To view the default list of acceptable strings for the zone parameter, see the files in: /opt/vertica/share/timezonesets Examples If you indicate a TIME interval timezone (such as America/Chicago in the following example), the interval function converts the interval to the timezone you specify and includes the UTC offset value (-05 here): => select time '10:00' at time zone 'America/Chicago'; ?column? --------------------09:00:00-05 (1 row) Casting a TIMESTAMPTZ interval to a TIMESTAMP without a zone depends on the local time zone. => select (varchar '2013-03-31 5:10 AMERICA/CHICAGO')::timestamp; ?column? --------------------2013-03-31 06:10:00 (1 row) Note: For a complete list of valid time zone definitions, see Wikipedia - tz database time zones. 
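As noted under Parameters, the zone argument can also be supplied as an interval offset rather than a name. A minimal sketch (no output is shown because the displayed value depends on the session time zone):

=> SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE INTERVAL '-08:00';

As in the 'MST' example earlier in this chapter, the zone-less literal is interpreted as a time at the given offset (UTC-8 here), and the result is rotated to the session time zone for display.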
Casting a TIME (or TIMETZ) value to a TIMESTAMP returns the local date and time, without the UTC offset:

=> select (time '3:01am')::timestamp;
      ?column?
---------------------
 2012-08-30 03:01:00
(1 row)

=> select (timetz '3:01am')::timestamp;
      ?column?
---------------------
 2012-08-22 03:01:00
(1 row)

Casting the same value (TIME or TIMETZ) to a TIMESTAMPTZ returns the local date and time appended with the UTC offset (-04 here):

=> select (time '3:01am')::timestamptz;
        ?column?
------------------------
 2012-08-30 03:01:00-04
(1 row)

Long Data Types

Store data up to 32,000,000 bytes:

- LONG VARBINARY: Variable-length raw-byte data, such as IP addresses. LONG VARBINARY values are not extended to the full width of the column.
- LONG VARCHAR: Variable-length strings of letters, numbers, and symbols. LONG VARCHAR values are not extended to the full width of the column.

The maximum size for the LONG data types is 32,000,000 bytes. Use the LONG data types only when you need to store data greater than 65,000 bytes, which is the maximum size for the VARBINARY and VARCHAR data types. Such data might include unstructured data, online comments or posts, or small log files.

Syntax

LONG VARBINARY ( max_length )
LONG VARCHAR ( octet_length )

Parameters

max_length Specifies the length of the byte string (column width, declared in bytes (octets), in CREATE TABLE statements). Maximum value: 32,000,000.

octet_length Specifies the length of the string (column width, declared in bytes (octets), in CREATE TABLE statements). Maximum value: 32,000,000.

Notes

Data type coercion for the LONG data types is limited. HP Vertica only supports coercion from LONG VARBINARY to VARCHAR and vice versa. Converting a data type from a shorter type to a longer type is implicit; converting from a longer type to a shorter type requires an explicit cast.

For optimal performance of LONG data types, HP Vertica recommends that you:

- Use the LONG data types as "storage only" containers; HP Vertica does not support operations on their content.
- Use the VARBINARY and VARCHAR data types, instead of their LONG counterparts, whenever possible. The VARBINARY and VARCHAR data types are more flexible and have a wider range of operations.
- Use efficient encoding formats for LONG data types, even if the decoded value is less than 32,000,000 bytes. For example, HP Vertica returns an error if you attempt to load a 32,000,000-byte LONG VARBINARY value encoded in octal format, because the octal encoding quadruples the size of the value; each byte is converted into a backslash followed by three-digit values.
- Do not sort, segment, or partition projections on LONG data type columns.
- Do not add constraints, such as a primary key, to any LONG VARBINARY or LONG VARCHAR columns.
- Do not join or aggregate any LONG data type columns.

Example

The following example creates a table user_comments with a LONG VARCHAR column and inserts data into it:

=> CREATE TABLE user_comments (
      id INTEGER,
      username VARCHAR(200),
      time_posted TIMESTAMP,
      comment_text LONG VARCHAR(200000)
   );
=> INSERT INTO user_comments VALUES (
      1,
      'User1',
      TIMESTAMP '2013-06-25 12:47:32.62',
      'The weather tomorrow will be cold and rainy and then on the day after, the sun will come and the temperature will rise dramatically.'
   );
=> INSERT INTO user_comments VALUES (
      2,
      'User2',
      TIMESTAMP '2013-06-25 12:55:49.123456',
      'To get to the main library entrance, take Exit 21, turn left at the end of the exit ramp onto Rt. 333, travel four miles, and the parking lot will be on the left-hand side of the street.'
   );
=> INSERT INTO user_comments VALUES (
      3,
      'User3',
      TIMESTAMP '2013-06-25 01:08:15.12374578',
      'To purchase your tickets for this event, contact the box office. Tickets will be sold on a first-come, first-serve basis, and are nonrefundable.'
   );

Numeric Data Types

Numeric data types are numbers stored in database columns. These data types are typically grouped by:

- Exact numeric types, values where the precision and scale need to be preserved. The exact numeric types are BIGINT, DECIMAL, INTEGER, NUMERIC, NUMBER, and MONEY.
- Approximate numeric types, values where the precision needs to be preserved and the scale can be floating. The approximate numeric types are DOUBLE PRECISION, FLOAT, and REAL.

Implicit casts from INTEGER, FLOAT, and NUMERIC to VARCHAR are not supported. If you need that functionality, write an explicit cast using one of the following forms:

CAST(x AS data-type-name) or x::data-type-name

The following example casts a float to an integer:

=> SELECT (FLOAT '123.5')::INT;
 ?column?
----------
      124
(1 row)

String-to-numeric data type conversions accept formats of quoted constants for scientific notation, binary scaling, hexadecimal, and combinations of numeric-type literals:

- Scientific notation:

  => SELECT FLOAT '1e10';
    ?column?
  -------------
   10000000000
  (1 row)

- BINARY scaling:

  => SELECT NUMERIC '1p10';
   ?column?
  ----------
       1024
  (1 row)

- Hexadecimal:

  => SELECT NUMERIC '0x0abc';
   ?column?
  ----------
       2748
  (1 row)

DOUBLE PRECISION (FLOAT)

HP Vertica supports the numeric data type DOUBLE PRECISION, which is the IEEE-754 8-byte floating point type, along with most of the usual floating point operations.

Syntax

[ DOUBLE PRECISION | FLOAT | FLOAT(n) | FLOAT8 | REAL ]

Parameters

Note: On a machine whose floating-point arithmetic does not follow IEEE-754, these values probably do not work as expected.

Double precision is an inexact, variable-precision numeric type. In other words, some values cannot be represented exactly and are stored as approximations. Thus, input and output operations involving double precision might show slight discrepancies.

- All of the DOUBLE PRECISION data types are synonyms for 64-bit IEEE FLOAT.
- The n in FLOAT(n) must be between 1 and 53, inclusive, but a 53-bit fraction is always used. See the IEEE-754 standard for details.
- For exact numeric storage and calculations (money, for example), use NUMERIC.
- Floating point calculations depend on the behavior of the underlying processor, operating system, and compiler.
- Comparing two floating-point values for equality might not work as expected.
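A minimal sketch of that last point (the literal values are arbitrary; any values whose binary fractions are inexact behave the same way):

=> SELECT 0.1::float + 0.2::float = 0.3::float;
 ?column?
----------
 f
(1 row)

The sum is stored as an approximation that differs from 0.3 in the last bits of the fraction, so the equality test fails. For exact comparisons, use NUMERIC, described later in this chapter.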
Values COPY accepts floating-point data in the following format: l Optional leading white space l An optional plus ("+") or minus sign ("-") l A decimal number, a hexadecimal number, an infinity, a NAN, or a null value HP Vertica Analytic Database (7.0.x) Page 150 of 1539 SQL Reference Manual SQL Data Types A decimal number consists of a non-empty sequence of decimal digits possibly containing a radix character (decimal point "."), optionally followed by a decimal exponent. A decimal exponent consists of an "E" or "e", followed by an optional plus or minus sign, followed by a non-empty sequence of decimal digits, and indicates multiplication by a power of 10. A hexadecimal number consists of a "0x" or "0X" followed by a non-empty sequence of hexadecimal digits possibly containing a radix character, optionally followed by a binary exponent. A binary exponent consists of a "P" or "p", followed by an optional plus or minus sign, followed by a non-empty sequence of decimal digits, and indicates multiplication by a power of 2. At least one of radix character and binary exponent must be present. An infinity is either INF or INFINITY, disregarding case. A NaN (Not A Number) is NAN (disregarding case) optionally followed by a sequence of characters enclosed in parentheses. The character string specifies the value of NAN in an implementationdependent manner. (The HP Vertica internal representation of NAN is 0xfff8000000000000LL on x86 machines.) When writing infinity or NAN values as constants in a SQL statement, enclose them in single quotes. For example: => UPDATE table SET x = 'Infinity' Note: HP Vertica follows the IEEE definition of NaNs (IEEE 754). The SQL standards do not specify how floating point works in detail. IEEE defines NaNs as a set of floating point values where each one is not equal to anything, even to itself. A NaN is not greater than and at the same time not less than anything, even itself. In other words, comparisons always return false whenever a NaN is involved. However, for the purpose of sorting data, NaN values must be placed somewhere in the result. The value generated 'NaN' appears in the context of a floating point number matches the NaN value generated by the hardware. For example, Intel hardware generates (0xfff8000000000000LL), which is technically a Negative, Quiet, Non-signaling NaN. HP Vertica uses a different NaN value to represent floating point NULL (0x7ffffffffffffffeLL). This is a Positive, Quiet, Non-signaling NaN and is reserved by HP Vertica The load file format of a null value is user defined, as described in the COPY command. The HP Vertica internal representation of a null value is 0x7fffffffffffffffLL. The interactive format is controlled by the vsql printing option null. For example: \pset null '(null)' The default option is not to print anything. HP Vertica Analytic Database (7.0.x) Page 151 of 1539 SQL Reference Manual SQL Data Types Rules l -0 == +0 l 1/0 = Infinity l 0/0 == Nan l NaN != anything (even NaN) To search for NaN column values, use the following predicate: ... WHERE column != column This is necessary because WHERE column = 'Nan' cannot be true by definition. Sort Order (Ascending) l NaN l -Inf l numbers l +Inf l NULL Notes l NULL appears last (largest) in ascending order. l All overflows in floats generate +/-infinity or NaN, per the IEEE floating point standard. INTEGER A signed 8-byte (64-bit) data type. 
Syntax [ INTEGER | INT | BIGINT | INT8 | SMALLINT | TINYINT ] Parameters INT, INTEGER, INT8, SMALLINT, TINYINT, and BIGINT are all synonyms for the same signed 64-bit integer data type. Automatic compression techniques are used to conserve disk space in cases where the full 64 bits are not required. HP Vertica Analytic Database (7.0.x) Page 152 of 1539 SQL Reference Manual SQL Data Types Notes l The range of values is –2^63+1 to 2^63-1. l 2^63 = 9,223,372,036,854,775,808 (19 digits). l The value –2^63 is reserved to represent NULL. l NULL appears first (smallest) in ascending order. l HP Vertica does not have an explicit 4-byte (32-bit integer) or smaller types. HP Vertica's encoding and compression automatically eliminate the storage overhead of values that fit in less than 64 bits. Restrictions l The JDBC type INTEGER is 4 bytes and is not supported by HP Vertica. Use BIGINT instead. l HP Vertica does not support the SQL/JDBC types NUMERIC, SMALLINT, or TINYINT. l HP Vertica does not check for overflow (positive or negative) except in the aggregate function SUM(). If you encounter overflow when using SUM, use SUM_FLOAT(), which converts to floating point. See Also Data Type Coercion Chart NUMERIC Numeric data types store numeric data. For example, a money value of $123.45 can be stored in a NUMERIC(5,2) field. Syntax NUMERIC | DECIMAL | NUMBER | MONEY [ ( precision [ , scale ] ) ] Parameters precision The total number of significant digits that the data type stores. precision must be positive and <= 1024. If you assign a value that exceeds the precision value, an error occurs. scale The maximum number of digits to the right of the decimal point that the data type stores. scale must be non-negative and less than or equal to precision. If you omit the scale parameter, the scale value is set to 0. If you assign a value with more decimal digits than scale, the value is rounded to scale digits. HP Vertica Analytic Database (7.0.x) Page 153 of 1539 SQL Reference Manual SQL Data Types Notes l l NUMERIC, DECIMAL, NUMBER, and MONEY are all synonyms that return NUMERIC types. However, the default values NUMBER and MONEY are different. Type Precision Scale NUMERIC 37 15 DECIMAL 37 15 NUMBER 38 0 MONEY 18 4 NUMERIC data types support exact representations of numbers that can be expressed with a number of digits before and after a decimal point. This contrasts slightly with existing HP Vertica data types: n DOUBLE PRECISION (FLOAT) types support ~15 digits, variable exponent, and represent numeric values approximately. n INTEGER (and similar) types support ~18 digits, whole numbers only. l NUMERIC data types are generally called exact numeric data types because they store numbers of a specified precision and scale. The approximate numeric data types, such as DOUBLE PRECISION, use floating points and are less precise. l Supported numeric operations include the following: n Basic math: +, –, *, / n Aggregation: SUM, MIN, MAX, COUNT n Comparison operators: <, <=, =, <=>, <>, >, >= l NUMERIC divide operates directly on numeric values, without converting to floating point. The result has at least 18 decimal places and is rounded. l NUMERIC mod (including %) operates directly on numeric values, without converting to floating point. The result has the same scale as the numerator and never needs rounding. l NULL appears first (smallest) in ascending order. l COPY accepts a DECIMAL data type with a decimal point ('.'), prefixed by – or +(optional). l LZO, RLE, and BLOCK_DICT are supported encoding types. 
Anything that can be used on an INTEGER can also be used on a NUMERIC, as long as the precision is <= 18. l The NUMERIC data type is preferred for non-integer constants, because it is always exact. For HP Vertica Analytic Database (7.0.x) Page 154 of 1539 SQL Reference Manual SQL Data Types example: => SELECT 1.1 + 2.2 = 3.3; ?column? ---------t (1 row) => SELECT 1.1::float + 2.2::float = 3.3::float; ?column? ---------f (1 row) l Performance of the NUMERIC data type has been fine tuned for the common case of 18 digits of precision. l Some of the more complex operations used with NUMERIC data types result in an implicit cast to FLOAT. When using SQRT, STDDEV, transcendental functions such as LOG, and TO_CHAR/TO_ NUMBER formatting, the result is always FLOAT. Examples The following series of commands creates a table that contains a NUMERIC data type and then performs some mathematical operations on the data: => CREATE TABLE num1 (id INTEGER, amount NUMERIC(8,2)); Insert some values into the table: => INSERT INTO num1 VALUES (1, 123456.78); Query the table: => SELECT * FROM num1; id | amount ------+----------1 | 123456.78 (1 row) The following example returns the NUMERIC column, amount, from table num1: => SELECT amount FROM num1; amount ----------123456.78 (1 row) The following syntax adds one (1) to the amount: HP Vertica Analytic Database (7.0.x) Page 155 of 1539 SQL Reference Manual SQL Data Types => SELECT amount+1 AS 'amount' FROM num1; amount ----------123457.78 (1 row) The following syntax multiplies the amount column by 2: => SELECT amount*2 AS 'amount' FROM num1; amount ----------246913.56 (1 row) The following syntax returns a negative number for the amount column: => SELECT -amount FROM num1; ?column? ------------123456.78 (1 row) The following syntax returns the absolute value of the amount argument: => SELECT ABS(amount) FROM num1; ABS ----------123456.78 (1 row) The following syntax casts the NUMERIC amount as a FLOAT data type: => SELECT amount::float FROM num1; amount ----------123456.78 (1 row) See Also Mathematical Functions Numeric Data Type Overflow HP Vertica does not check for overflow (positive or negative) except in the aggregate function SUM (). If you encounter overflow when using SUM, use SUM_FLOAT() which converts to floating point. Dividing zero by zero returns zero: HP Vertica Analytic Database (7.0.x) Page 156 of 1539 SQL Reference Manual SQL Data Types => select 0/0; ?column? ---------------------0.000000000000000000 (1 row) => select 0.0/0; ?column? ----------------------0.0000000000000000000 => select 0 // 0; ?column? ---------0 Dividing zero as a FLOAT by zero returns NaN: => select 0.0::float/0; ?column? ---------NaN => select 0.0::float//0; ?column? ---------NaN Dividing a non-zero FLOAT by zero returns Infinity: => select 2.0::float/0; ?column? ---------Infinity => select 200.0::float//0; ?column? ---------Infinity All other division-by-zero operations return an error: => select 1/0; ERROR 3117: Division by => select 200/0; ERROR 3117: Division by => select 200.0/0; ERROR 3117: Division by => select 116.43 // 0; ERROR 3117: Division by zero zero zero zero Add, subtract, and multiply operations ignore overflow. Sum and average operations use 128-bit arithmetic internally. SUM() reports an error if the final result overflows, suggesting the use of SUM_ FLOAT(INT), which converts the 128-bit sum to a FLOAT8. 
For example: => CREATE TEMP TABLE t (i INT); HP Vertica Analytic Database (7.0.x) Page 157 of 1539 SQL Reference Manual SQL Data Types => => => => => => INSERT INTO t VALUES (1<<62); INSERT INTO t VALUES (1<<62); INSERT INTO t VALUES (1<<62); INSERT INTO t VALUES (1<<62); INSERT INTO t VALUES (1<<62); SELECT SUM(i) FROM t; ERROR: sum() overflowed HINT: try sum_float() instead => SELECT SUM_FLOAT(i) FROM t; sum_float --------------------2.30584300921369e+19 Data Type Coercion HP Vertica supports two types of data type casting: l Implicit casting—Occurs when the expression automatically converts the data from one type to another. l Explicit casting—Occurs when you write a SQL statement that specifies the target data type for the conversion. The ANSI SQL-92 standard supports implicit casting between similar data types: l Number types l CHAR, VARCHAR, LONG VARCHAR l BINARY, VARBINARY, LONG VARBINARY HP Vertica supports two types of non-standard implicit casts: l From CHAR to FLOAT, to match the one from VARCHAR to FLOAT. The following example converts the CHAR '3' to a FLOAT so it can add the number 334 to the FLOAT result of the second expression: => SELECT '3' + 4.33::NUMERIC(3,2); ?column? ---------7.33 (1 row) l Between DATE and TIMESTAMP. The following example DATE to a TIMESTAMP and calculates the time 6 hours, 6 minutes, and 6 seconds back from 12:00 AM: => SELECT DATE('now') - INTERVAL '6:6:6'; ?column? HP Vertica Analytic Database (7.0.x) Page 158 of 1539 SQL Reference Manual SQL Data Types --------------------2013-07-30 17:53:54 (1 row) When there is no ambiguity about the data type of an expression value, it is implicitly coerced to match the expected data type. In the following command, the quoted string constant '2' is implicitly coerced into an INTEGER value so that it can be the operand of an arithmetic operator (addition): => SELECT 2 + '2'; ?column? ---------4 (1 row) A concatenate operation explicitly takes arguments of any data type. In the following example, the concatenate operation implicitly coerces the arithmetic expression 2 + 2 and the INTEGER constant 2 to VARCHAR values so that they can be concatenated. => SELECT 2 + 2 || 2; ?column? ---------42 (1 row) Another example is to first get today's date: => SELECT DATE 'now'; ?column? -----------2013-07-31 (1 row) The following command converts DATE to a TIMESTAMP and adds a day and a half to the results by using INTERVAL: => SELECT DATE 'now' + INTERVAL '1 12:00:00'; ?column? --------------------2013-07-31 12:00:00 (1 row) Most implicit casts stay within their relational family and go in one direction, from less detailed to more detailed. For example: l DATE to TIMESTAMP/TZ l INTEGER to NUMERIC to FLOAT l CHAR to FLOAT HP Vertica Analytic Database (7.0.x) Page 159 of 1539 SQL Reference Manual SQL Data Types l CHAR to VARCHAR l CHAR and/or VARCHAR to FLOAT l CHAR to LONG VARCHAR l VARCHAR to LONG VARCHAR l BINARY to VARBINARY l BINARY to LONG VARBINARY l VARBINARY to LONG VARBINARY More specifically, data type coercion works in this manner in HP Vertica: Type Direction Type Notes INT8 > FLOAT8 Implicit, can lose significance FLOAT8 > INT8 Explicit, rounds VARCHAR <-> CHAR Implicit, adjusts trailing spaces VARBINARY <-> BINARY Implicit, adjusts trailing NULs VARCHAR LONG VARCHAR Implicit, adjusts trailing spaces > VARBINARY > LONG VARBINARY Implicit, adjusts trailing NULs No other types cast to or from LONGVARBINARY, VARBINARY, or BINARY. 
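As a quick illustration of two of the rows above, an integer is accepted wherever a FLOAT8 is expected without any cast, while converting a FLOAT8 back to INT8 requires an explicit cast and rounds the value. The queries below are a sketch only; the expected result of the second cast (3) follows the same rounding shown in the 123.5-to-124 example earlier in this section:

=> SELECT SQRT(4);
   -- INT8 > FLOAT8: the integer argument is implicitly cast to FLOAT8
=> SELECT (FLOAT '2.5')::INT;
   -- FLOAT8 > INT8: explicit cast required; the value is rounded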
In the following list, <type> stands for any one of these types: INT8, FLOAT8, DATE, TIME, TIMETZ, TIMESTAMP, TIMESTAMPTZ, INTERVAL.

l <type> -> VARCHAR—implicit
l VARCHAR -> <type>—explicit, except that VARCHAR -> FLOAT is implicit
l <type> <-> CHAR—explicit
l DATE -> TIMESTAMP/TZ—implicit
l TIMESTAMP/TZ -> DATE—explicit, loses time-of-day
l TIME -> TIMETZ—implicit, adds local timezone
l TIMETZ -> TIME—explicit, loses timezone
l TIME -> INTERVAL—implicit, day to second with days=0
l INTERVAL -> TIME—explicit, truncates non-time parts
l TIMESTAMP <-> TIMESTAMPTZ—implicit, adjusts to local timezone
l TIMESTAMP/TZ -> TIME—explicit, truncates non-time parts
l TIMESTAMPTZ -> TIMETZ—explicit
l VARBINARY -> LONG VARBINARY—implicit
l LONG VARBINARY -> VARBINARY—explicit
l VARCHAR -> LONG VARCHAR—implicit
l LONG VARCHAR -> VARCHAR—explicit

Important: Implicit casts from INTEGER, FLOAT, and NUMERIC to VARCHAR are not supported. If you need that functionality, write an explicit cast:

CAST(x AS data-type-name) or x::data-type-name

The following example casts a FLOAT to an INTEGER:

=> SELECT (FLOAT '123.5')::INT;
 ?column?
----------
      124
(1 row)

String-to-numeric data type conversions accept formats of quoted constants for scientific notation, binary scaling, hexadecimal, and combinations of numeric-type literals:

l Scientific notation:

=> SELECT FLOAT '1e10';
  ?column?
-------------
 10000000000
(1 row)

l BINARY scaling:

=> SELECT NUMERIC '1p10';
 ?column?
----------
     1024
(1 row)

l Hexadecimal:

=> SELECT NUMERIC '0x0abc';
 ?column?
----------
     2748
(1 row)

Examples

The following example casts three strings as NUMERICs:

=> SELECT NUMERIC '12.3e3', '12.3p10'::NUMERIC, CAST('0x12.3p-10e3' AS NUMERIC);
 ?column? | ?column? |     ?column?
----------+----------+-------------------
    12300 |  12595.2 | 17.76123046875000
(1 row)

This example casts a VARBINARY string into a LONG VARBINARY data type:

=> SELECT B'101111000'::LONG VARBINARY;
 ?column?
----------
 \001x
(1 row)

The following example concatenates a CHAR with a LONG VARCHAR, resulting in a LONG VARCHAR:

=> \set s ''''`cat longfile.txt`''''
=> SELECT length ('a' || :s ::LONG VARCHAR);
 length
--------
  65002
(1 row)

The following example casts a combination of NUMERIC and INTEGER data into a NUMERIC result:

=> SELECT (18. + 3./16)/1024*1000;
                ?column?
-----------------------------------------
 17.761230468750000000000000000000000000
(1 row)

Note: In SQL expressions, pure numbers between (–2^63–1) and (2^63–1) are INTEGERs. Numbers with decimal points are NUMERIC.

See Also

l Data Type Coercion Chart
l Data Type Coercion Operators (CAST)

Data Type Coercion Chart

Conversion Types

The following table defines all possible type conversions that HP Vertica supports. The values across the top row are the data types you want, and the values down the first column on the left are the data types that you have.
Conversion Types Data Types Implicit Explicit BOOLEAN Assignment Assignment without numeric meaning Conversion without explicit casting INTEGER LONG VARCHAR VARCHAR CHAR INTEGER BOOLEAN NUMERIC FLOAT INTERVAL DAY/SECOND INTERVAL YEAR/MONTH LONG VARCHAR VARCHAR CHAR NUMERIC FLOAT INTEGER LONG VARCHAR VARCHAR CHAR INTEGER NUMERIC LONG VARCHAR VARCHAR CHAR FLOAT HP Vertica Analytic Database (7.0.x) NUMERIC Page 163 of 1539 SQL Reference Manual SQL Data Types Conversion Types Conversion without explicit casting Data Types Implicit Explicit LONG VARCHAR FLOAT CHAR BOOLEAN INTEGER NUMERIC VARCHAR TIMESTAMP TIMESTAMP WITH TIME ZONE DATE TIME TIME WITH TIME ZONE INTERVAL DAY/SECOND INTERVAL YEAR/MONTH LONG VARBINARY LONG VARCHAR VARCHAR FLOAT LONG VARCHAR CHAR BOOLEAN INTEGER NUMERIC TIMESTAMP TIMESTAMP WITH TIME ZONE DATE TIME TIME WITH TIME ZONE INTERVAL DAY/SECOND INTERVAL YEAR/MONTH VARCHAR CHAR FLOAT LONG VARCHAR VARCHAR BOOLEAN INTEGER NUMERIC TIMESTAMP TIMESTAMP WITH TIME ZONE DATE TIME TIME WITH TIME ZONE INTERVAL DAY/SECOND INTERVAL YEAR/MONTH CHAR HP Vertica Analytic Database (7.0.x) Assignment Assignment without numeric meaning Page 164 of 1539 SQL Reference Manual SQL Data Types Conversion Types Explicit Assignment Assignment without numeric meaning Conversion without explicit casting Data Types Implicit TIMESTAMP TIMESTAMP WITH TIME ZONE LONG CHAR VARCHAR CHAR DATE TIME TIMESTAMP TIMESTAMP WITH TIME ZONE TIMESTAMP LONG CHAR VARCHAR CHAR DATE TIME TIME WITH TIME ZONE TIMESTAMP WITH TIME ZONE DATE TIMESTAMP LONG CHAR VARCHAR CHAR TIMESTAMP WITH TIME ZONE TIME TIME WITH TIME ZONE TIMESTAMP TIMESTAMP WITH TIME ZONE INTERVAL DAY/SECOND LONG CHAR VARCHAR CHAR TIME TIME WITH TIME ZONE TIMESTAMP TIMESTAMP WITH TIME ZONE LONG CHAR VARCHAR CHAR TIME TIME WITH TIME ZONE INTERVAL DAY/SECOND TIME INTEGER LONG CHAR VARCHAR CHAR INTERVAL DAY/SECOND INTEGER LONG CHAR VARCHAR CHAR INTERVAL YEAR/MONTH INTERVAL YEAR/MONTH LONG VARBINARY VARBINARY LONG VARBINARY VARBINARY LONG VARBINARY BINARY VARBINARY BINARY VARBINARY BINARY HP Vertica Analytic Database (7.0.x) Page 165 of 1539 SQL Reference Manual SQL Data Types Notes l Implicit conversion converts the source data to the target column's data type when what needs to be converted is clear. For example, with "INT + NUMERIC -> NUMERIC", the integer is implicitly cast to numeric(18,0); another precision/scale conversion may occur as part of the add. l In an Assignment conversion, coercion implicitly occurs when values are assigned to database columns in an INSERT or UPDATE..SET command. For example, in a statement that includes INSERT ... VALUES('2.5'), where the target column is NUMERIC(18,5), a cast from VARCHAR to NUMERIC(18,5) is inferred. l In Explicit conversion, the source data requires explicit casting to the target column's data type. l HP Vertica supports a conversion of data types without explicit casting, such as NUMERIC (10,6) -> NUMERIC(18,4). l In an assignment without numeric meaning, the value is subject to CHAR/VARCHAR/LONG VARCHAR comparisons. See Also l Data Type Coercion l Data Type Coercion Operators (CAST) HP Vertica Analytic Database (7.0.x) Page 166 of 1539 SQL Reference Manual SQL Functions SQL Functions Functions return information from the database and are allowed anywhere an expression is allowed. The exception is HP Vertica-specific functions, which are not allowed everywhere. Some functions could produce different results on different invocations with the same set of arguments. 
The following three categories of functions are defined based on their behavior:

l Immutable (invariant): When run with a given set of arguments, immutable functions always produce the same result. The function is independent of any environment or session settings, such as locale. For example, 2+2 always equals 4. Another immutable function is AVG(). Some immutable functions can take an optional stable argument; in this case they are treated as stable functions.

l Stable: When run with a given set of arguments, stable functions produce the same result within a single query or scan operation. However, a stable function could produce different results when issued under a different environment, such as a change of locale and time zone. Expressions that could give different results in the future are also stable, for example SYSDATE() or 'today'.

l Volatile: Regardless of the arguments or environment, volatile functions can return different results on multiple invocations. RANDOM() is one example.

This chapter describes the functions that HP Vertica supports.

l Each function is annotated with a behavior type of immutable, stable, or volatile.

l All HP Vertica-specific functions can be assumed to be volatile and are not annotated individually.

Aggregate Functions

Note: All functions in this section that have an analytic function counterpart are appended with [Aggregate] to avoid confusion between the two.

Aggregate functions summarize data over groups of rows from a query result set. The groups are specified using the GROUP BY clause. They are allowed only in the select list and in the HAVING and ORDER BY clauses of a SELECT statement (as described in Aggregate Expressions).

Notes

l Except for COUNT, these functions return a null value when no rows are selected. In particular, SUM of no rows returns NULL, not zero.

l In some cases you can replace an expression that includes multiple aggregates with a single aggregate of an expression. For example, SUM(x) + SUM(y) can be expressed as SUM(x+y) (where x and y are NOT NULL).

l HP Vertica does not support nested aggregate functions.

You can also use some of the simple aggregate functions as analytic (window) functions. See Analytic Functions for details. See also Using SQL Analytics in the Programmer's Guide.

APPROXIMATE_COUNT_DISTINCT

Returns the number of rows in the data set that have distinct non-NULL values.

Behavior Type

Immutable

Syntax

APPROXIMATE_COUNT_DISTINCT ( expr [, error_tolerance ] )

Parameters

expr Value to be evaluated using any data type that supports equality comparison.

error_tolerance NUMERIC that represents the desired error tolerance, distributed around the exact COUNT(DISTINCT) value.

l Default value: 1.0. This value typically gives a 0.4% chance of getting a count within three standard deviations of the exact count.

l Minimum value: 0.078.

l There is no maximum value, but anything greater than or equal to 5 is implemented with 5% accuracy.

For detailed information about the error tolerance, see the following Notes section.

Notes

l The expected value that APPROXIMATE_COUNT_DISTINCT(x [, error_tolerance]) returns is equal to COUNT(DISTINCT x), with an error that is lognormally distributed with standard deviation s. You can control the standard deviation directly by setting the error_tolerance.
The error_tolerance is defined as 2.17 standard deviations, which corresponds to a 97% confidence interval. For example, setting the error_tolerance to 1% (the default) corresponds to a standard deviation s = (1 / 100) / 2.17 = 0.0046. If you specify an error_tolerance of 1, APPROXIMATE_COUNT_DISTINCT(x) returns a value between COUNT(DISTINCT x) / 1.01 and 1.01 * COUNT(DISTINCT x), 97% of the time. Similarly, specifying error_tolerance = 5 (percent) constrains the value returned by APPROXIMATE_COUNT_DISTINCT(x) to be between COUNT(DISTINCT x) / 1.05 and 1.05 * COUNT(DISTINCT x) 97% of the time. The remaining 3% of the time, the errors are larger than the specified error_tolerance. A 99% confidence interval corresponds to s = 2.58 standard deviations. To set an error_ tolerance corresponding to a 99% confidence level (instead of a 97% confidence level), multiply the error_tolerance by 2.17 / 2.58 = 0.841. For example, if you want APPROXIMATE_COUNT_ DISTINCT(x) to return a value between COUNT(DISTINCT x) / 1.05 and 1.05 * COUNT (DISTINCT x) 99% of the time, specify the error_tolerance as 5 * 0.841 = 4.2. l The maximum number of distinct values is in the range of 0 to approximately 2^47, or 1.4*10^14. l APPROXIMATE_COUNT_DISTINCT cannot appear in the same query block with DISTINCT aggregates. Examples The following query counts total number of distinct values in the product_key column of the store.store_sales_fact table: => \timing HP Vertica Analytic Database (7.0.x) Page 169 of 1539 SQL Reference Manual SQL Functions Timing is on. => SELECT COUNT(DISTINCT product_key) FROM store.store_sales_fact; COUNT ------19982 (1 row) Time: First fetch (1 row): 16.839 ms. All rows formatted: 16.866 ms The next query counts the approximate number of distinct values in the product_key column with various error tolerances. The smaller the error_tolerance, the closer the approximation. => \timing Timing is on. => SELECT APPROXIMATE_COUNT_DISTINCT(product_key,5) AS five_pct_accuracy, APPROXIMATE_COUNT_DISTINCT(product_key,1) AS one_pct_accuracy, APPROXIMATE_COUNT_DISTINCT(product_key,.1) AS point_one_pct_accuracy FROM store.store_sales_fact; five_pct_accuracy | one_pct_accuracy | point_one_pct_accuracy -------------------+------------------+-----------------------19431 | 19921 | 19980 (1 row) Time: First fetch (1 row): 262.580 ms. All rows formatted: 262.616 ms The following query counts the distinct values in the date_key and product_key columns. => \timing Timing is on. => SELECT COUNT (DISTINCT date_key), COUNT (DISTINCT product_key) FROM store.store_sales_fact; count | count -------+------1826 | 19982 (1 row) Time: First fetch (1 row): 207.431 ms. All rows formatted: 207.468 ms See Also l APPROXIMATE_COUNT_DISTINCT_SYNOPSIS l APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS l COUNT [Aggregate] APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS Returns the number of rows in the synopsis object created by APPROXIMATE_COUNT_ DISTINCT_SYNOPSIS that have distinct non-NULL values. HP Vertica Analytic Database (7.0.x) Page 170 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS ( expr [, maximum_error_percent ] ) Parameters expr Value to be evaluated using any data type that supports equality comparison. error_ NUMERIC that represents the desired error tolerance, distributed around the exact COUNT(DISTINCT) value. l Default value: 1.0. This value typically gives a 0.4% chance of a getting a count within three standard deviations of the exact count. l Minimum value: 0.078. 
l There is no maximum value, but anything greater than or equal to 5 is implemented with 5% accuracy. For detailed information about the error tolerance, see the following Notes section. Notes l The expected value that APPROXIMATE_COUNT_DISTINCT(x [, error_tolerance]) returns is equal to COUNT(DISTINCT x), with an error that is lognormally distributed with standard deviation s. You can control the standard deviation directly by setting the error_tolerance. The error_tolerance is defined as 2.17 standard deviations, which corresponds to a 97% confidence interval. For example, setting the error_tolerance to 1% (the default) corresponds to a standard deviation s = (1 / 100) / 2.17 = 0.0046. If you specify an error_tolerance of 1, APPROXIMATE_COUNT_DISTINCT(x) returns a value between COUNT(DISTINCT x) / 1.01 and 1.01 * COUNT(DISTINCT x), 97% of the time. Similarly, specifying error_tolerance = 5 (percent) constrains the value returned by APPROXIMATE_COUNT_DISTINCT(x) to be between COUNT(DISTINCT x) / 1.05 and 1.05 * HP Vertica Analytic Database (7.0.x) Page 171 of 1539 SQL Reference Manual SQL Functions COUNT(DISTINCT x) 97% of the time. The remaining 3% of the time, the errors are larger than the specified error_tolerance. A 99% confidence interval corresponds to s = 2.58 standard deviations. To set an error_ tolerance corresponding to a 99% confidence level (instead of a 97% confidence level), multiply the error_tolerance by 2.17 / 2.58 = 0.841. For example, if you want APPROXIMATE_COUNT_ DISTINCT(x) to return a value between COUNT(DISTINCT x) / 1.05 and 1.05 * COUNT (DISTINCT x) 99% of the time, specify the error_tolerance as 5 * 0.841 = 4.2. l The maximum number of distinct values is in the range of 0 to approximately 2^47, or 1.4*10^14. l APPROXIMATE_COUNT_DISTINCT cannot appear in the same query block with DISTINCT aggregates. Examples The following example creates the synopsis and then calculates an approximate count of distinct values in the synopsis. => \timing Timing is on. => SELECT product_version, APPROXIMATE_COUNT_DISTINCT(product_key) FROM store.store_sales_fact GROUP BY product_version; product_version | ApproxCountDistinct -----------------+--------------------1 | 19921 2 | 15958 3 | 11895 4 | 7935 5 | 3993 (5 rows) Time: First fetch (5 rows): 2826.318 ms. All rows formatted: 2826.358 ms => CREATE TABLE my_summary AS SELECT product_version, APPROXIMATE_COUNT_DISTINCT_SYNOPSIS (product_key) syn FROM store.store_sales_fact GROUP BY product_version; CREATE TABLE => SELECT APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS FROM my_summary; ApproxCountDistinctOfSynopsis ------------------------------19963 (1 row) Time: First fetch (1 row): 42.994 ms. All rows formatted: 43.021 ms => HP Vertica Analytic Database (7.0.x) Page 172 of 1539 SQL Reference Manual SQL Functions See Also l APPROXIMATE_COUNT_DISTINCT l APPROXIMATE_COUNT_DISTINCT_SYNOPSIS l COUNT [Aggregate] APPROXIMATE_COUNT_DISTINCT_SYNOPSIS Returns a subset of the data set, known as a synopsis, as a VARBINARY or LONG VARBINARY. Save the synopsis as an HP Vertica table for use by APPROXIMATE_COUNT_DISTINCT_OF_ SYNOPSIS. Behavior Type Immutable Syntax APPROXIMATE_COUNT_DISTINCT_SYNOPSIS ( expr ) Parameters expr Value to be evaluated using any data type that supports equality comparison. Notes l The maximum number of distinct values is in the range of 0 to approximately 2^47, or 1.4*10^14. l APPROXIMATE_COUNT_DISTINCT_SYNOPSIS cannot appear in the same query block with DISTINCT aggregates. 
Example In the following example, the query creates a table that contains the synopsis of the product_key data. The my_summary table is used in the example for APPROXIMATE_COUNT_DISTINCT_OF_ SYNOPSIS. => CREATE TABLE my_summary AS SELECT tender_type, APPROXIMATE_COUNT_DISTINCT_SYNOPSIS(product_key) syn FROM store.store_sales_fact HP Vertica Analytic Database (7.0.x) Page 173 of 1539 SQL Reference Manual SQL Functions GROUP BY tender_type; CREATE TABLE Time: First fetch (0 rows): 3216.056 ms. All rows formatted: 3216.069 ms See Also l APPROXIMATE_COUNT_DISTINCT l APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS l COUNT [Aggregate] AVG [Aggregate] Computes the average (arithmetic mean) of an expression over a group of rows. It returns a DOUBLE PRECISION value for a floating-point expression. Otherwise, the return value is the same as the expression data type. Behavior Type Immutable Syntax AVG ( [ ALL | DISTINCT ] expression ) Parameters ALL Invokes the aggregate function for all rows in the group (default). DISTINCT Invokes the aggregate function for all distinct non-null values of the expression found in the group. expression The value whose average is calculated over a set of rows. Can be any expression resulting in DOUBLE PRECISION. Notes The AVG() aggregate function is different from the AVG() analytic function, which computes an average of an expression over a group of rows within a window. Examples The following example returns the average income from the customer table: HP Vertica Analytic Database (7.0.x) Page 174 of 1539 SQL Reference Manual SQL Functions => SELECT AVG(annual_income) FROM customer_dimension; avg -------------2104270.6485 (1 row) See Also l AVG [Analytic] l COUNT [Aggregate] l SUM [Aggregate] l Numeric Data Types BIT_AND Takes the bitwise AND of all non-null input values. If the input parameter is NULL, the return value is also NULL. Behavior Type Immutable Syntax BIT_AND ( expression ) Parameters expression The [BINARY |VARBINARY] input value to be evaluated. BIT_AND() operates on VARBINARY types explicitly and on BINARY types implicitly through casts. Notes l The function returns the same value as the argument data type. l For each bit compared, if all bits are 1, the function returns 1; otherwise it returns 0. l If the columns are different lengths, the return values are treated as though they are all equal in length and are right-extended with zero bytes. For example, given a group containing the hex values 'ff', null, and 'f', the function ignores the null value and extends the value 'f' to 'f0'.. HP Vertica Analytic Database (7.0.x) Page 175 of 1539 SQL Reference Manual SQL Functions Example This example uses the following schema, which creates table t with a single column of VARBINARY data type: => => => => CREATE INSERT INSERT INSERT TABLE t ( c VARBINARY(2) ); INTO t values(HEX_TO_BINARY('0xFF00')); INTO t values(HEX_TO_BINARY('0xFFFF')); INTO t values(HEX_TO_BINARY('0xF00F')); Query table t to see column c output: => SELECT TO_HEX(c) FROM t; TO_HEX -------ff00 ffff f00f (3 rows) Query table t to get the AND value for column c: SELECT TO_HEX(BIT_AND(c)) FROM t; TO_HEX -------f000 (1 row) The function is applied pairwise to all values in the group, resulting in f000, which is determined as follows: 1. ff00 (record 1) is compared with ffff (record 2), which results in ff00. 2. The result from the previous comparison is compared with f00f (record 3), which results in f000. See Also l Binary Data Types BIT_OR Takes the bitwise OR of all non-null input values. 
If the input parameter is NULL, the return value is also NULL. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 176 of 1539 SQL Reference Manual SQL Functions Syntax BIT_OR ( expression ) Parameters expression The [BINARY |VARBINARY] input value to be evaluated. BIT_OR() operates on VARBINARY types explicitly and on BINARY types implicitly through casts. Notes l The function returns the same value as the argument data type. l For each bit compared, if any bit is 1, the function returns 1; otherwise it returns 0. l If the columns are different lengths, the return values are treated as though they are all equal in length and are right-extended with zero bytes. For example, given a group containing the hex values 'ff', null, and 'f', the function ignores the null value and extends the value 'f' to 'f0'. Example This example uses the following schema, which creates table t with a single column of VARBINARY data type: => => => => CREATE INSERT INSERT INSERT TABLE t ( c VARBINARY(2) ); INTO t values(HEX_TO_BINARY('0xFF00')); INTO t values(HEX_TO_BINARY('0xFFFF')); INTO t values(HEX_TO_BINARY('0xF00F')); Query table t to see column c output: => SELECT TO_HEX(c) FROM t; TO_HEX -------ff00 ffff f00f (3 rows) Query table t to get the OR value for column c: SELECT TO_HEX(BIT_OR(c)) FROM t; TO_HEX -------ffff (1 row) HP Vertica Analytic Database (7.0.x) Page 177 of 1539 SQL Reference Manual SQL Functions The function is applied pairwise to all values in the group, resulting in ffff, which is determined as follows: 1. ff00 (record 1) is compared with ffff, which results in ffff. 2. The ff00 result from the previous comparison is compared with f00f (record 3), which results in ffff. See Also l Binary Data Types BIT_XOR Takes the bitwise XOR of all non-null input values. If the input parameter is NULL, the return value is also NULL. Behavior Type Immutable Syntax BIT_XOR ( expression ) Parameters expression The [BINARY | VARBINARY] input value to be evaluated. BIT_XOR() operates on VARBINARY types explicitly and on BINARY types implicitly through casts. Notes l The function returns the same value as the argument data type. l For each bit compared, if there are an odd number of arguments with set bits, the function returns 1; otherwise it returns 0. l If the columns are different lengths, the return values are treated as though they are all equal in length and are right-extended with zero bytes. For example, given a group containing the hex values 'ff', null, and 'f', the function ignores the null value and extends the value 'f' to 'f0'. Example First create a sample table and projections with binary columns: HP Vertica Analytic Database (7.0.x) Page 178 of 1539 SQL Reference Manual SQL Functions This example uses the following schema, which creates table t with a single column of VARBINARY data type: => => => => CREATE INSERT INSERT INSERT TABLE t ( c VARBINARY(2) ); INTO t values(HEX_TO_BINARY('0xFF00')); INTO t values(HEX_TO_BINARY('0xFFFF')); INTO t values(HEX_TO_BINARY('0xF00F')); Query table t to see column c output: => SELECT TO_HEX(c) FROM t; TO_HEX -------ff00 ffff f00f (3 rows) Query table t to get the XOR value for column c: SELECT TO_HEX(BIT_XOR(c)) FROM t; TO_HEX -------f0f0 (1 row) See Also l Binary Data Types CORR Returns the coefficient of correlation of a set of expression pairs (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. 
If no rows remain, the function returns NULL.Syntax SELECT CORR (expression1,expression2) Parameters expression1 The dependent expression. Is of type DOUBLE PRECISION. expression2 The independent expression. Is of type DOUBLE PRECISION. Example => SELECT CORR (Annual_salary, Employee_age) FROM employee_dimension; HP Vertica Analytic Database (7.0.x) Page 179 of 1539 SQL Reference Manual SQL Functions CORR ----------------------0.00719153413192422 (1 row) COUNT [Aggregate] Returns the number of rows in each group of the result set for which the expression is not NULL. The return value is a BIGINT. The COUNT() aggregate function is different from the COUNT() analytic function. The COUNT() analytic function returns the number over a group of rows within a window. When an approximate count of the number of distinct values is sufficient, use the APPROXIMATE_COUNT_DISTINCT function. If you want to combine the data in different ways, use APPROXIMATE_COUNT_DISTINCT_SYNOPSIS together with APPROXIMATE_COUNT_ DISTINCT_OF_SYNOPSIS. Behavior Type Immutable Syntax COUNT ( [ * ] [ ALL | DISTINCT ] expression ) Parameters * Indicates that the count does not apply to any specific column or expression in the select list. Requires a FROM Clause. ALL Invokes the aggregate function for all rows in the group (default). DISTINCT Invokes the aggregate function for all distinct non-null values of the expression found in the group. expression Returns the number of rows in each group for which the expression is not null. Can be any expression resulting in BIGINT. Examples The following query returns the number of distinct values in the primary_key column of the date_ dimension table: => SELECT COUNT (DISTINCT date_key) FROM date_dimension; COUNT HP Vertica Analytic Database (7.0.x) Page 180 of 1539 SQL Reference Manual SQL Functions ------1826 (1 row) This example returns all distinct values of evaluating the expression x+y for all records of fact. => SELECT COUNT (DISTINCT date_key + product_key) FROM inventory_fact; COUNT ------21560 (1 row) You can create an equivalent query using the LIMIT keyword to restrict the number of rows returned: => SELECT COUNT(date_key + product_key) FROM inventory_fact GROUP BY date_key LIMIT 10; COUNT ------173 31 321 113 286 84 244 238 145 202 (10 rows) This query returns the number of distinct values of date_key in all records with the specific distinct product_key value. => SELECT product_key, COUNT (DISTINCT date_key) FROM inventory_fact GROUP BY product_key LIMIT 10; product_key | count -------------+------1 | 12 2 | 18 3 | 13 4 | 17 5 | 11 6 | 14 7 | 13 8 | 17 9 | 15 10 | 12 (10 rows) This query counts each distinct product_key value in inventory_fact table with the constant 1. HP Vertica Analytic Database (7.0.x) Page 181 of 1539 SQL Reference Manual SQL Functions => SELECT product_key, COUNT (DISTINCT product_key) FROM inventory_fact GROUP BY product_key LIMIT 10; product_key | count -------------+------1 | 1 2 | 1 3 | 1 4 | 1 5 | 1 6 | 1 7 | 1 8 | 1 9 | 1 10 | 1 (10 rows) This query selects each distinct date_key value and counts the number of distinct product_key values for all records with the specific product_key value. It then sums the qty_in_stock values in all records with the specific product_key value and groups the results by date_key. 
=> SELECT date_key, COUNT (DISTINCT product_key), SUM(qty_in_stock) FROM inventory_fact GROUP BY date_key LIMIT 10; date_key | count | sum ----------+-------+-------1 | 173 | 88953 2 | 31 | 16315 3 | 318 | 156003 4 | 113 | 53341 5 | 285 | 148380 6 | 84 | 42421 7 | 241 | 119315 8 | 238 | 122380 9 | 142 | 70151 10 | 202 | 95274 (10 rows) This query selects each distinct product_key value and then counts the number of distinct date_ key values for all records with the specific product_key value. It also counts the number of distinct warehouse_key values in all records with the specific product_key value. => SELECT product_key, COUNT (DISTINCT date_key), COUNT (DISTINCT warehouse_key) FROM inventory_fact GROUP BY product_key LIMIT 15; product_key | count | count -------------+-------+------1 | 12 | 12 2 | 18 | 18 3 | 13 | 12 4 | 17 | 18 5 | 11 | 9 6 | 14 | 13 7 | 13 | 13 8 | 17 | 15 HP Vertica Analytic Database (7.0.x) Page 182 of 1539 SQL Reference Manual SQL Functions 9 10 11 12 13 14 15 | | | | | | | 15 12 11 13 9 13 18 | | | | | | | 14 12 11 12 7 13 17 (15 rows) This query selects each distinct product_key value, counts the number of distinct date_key and warehouse_key values for all records with the specific product_key value, and then sums all qty_ in_stock values in records with the specific product_key value. It then returns the number of product_version values in records with the specific product_key value. => SELECT product_key, COUNT (DISTINCT date_key), COUNT (DISTINCT warehouse_key), SUM (qty_in_stock), COUNT (product_version) FROM inventory_fact GROUP BY product_key LIMIT 15; product_key | count | count | sum | count -------------+-------+-------+-------+------1 | 12 | 12 | 5530 | 12 2 | 18 | 18 | 9605 | 18 3 | 13 | 12 | 8404 | 13 4 | 17 | 18 | 10006 | 18 5 | 11 | 9 | 4794 | 11 6 | 14 | 13 | 7359 | 14 7 | 13 | 13 | 7828 | 13 8 | 17 | 15 | 9074 | 17 9 | 15 | 14 | 7032 | 15 10 | 12 | 12 | 5359 | 12 11 | 11 | 11 | 6049 | 11 12 | 13 | 12 | 6075 | 13 13 | 9 | 7 | 3470 | 9 14 | 13 | 13 | 5125 | 13 15 | 18 | 17 | 9277 | 18 (15 rows) The following example returns the number of warehouses from the warehouse dimension table: => SELECT COUNT(warehouse_name) FROM warehouse_dimension; COUNT ------100 (1 row) This next example returns the total number of vendors: => SELECT COUNT(*) FROM vendor_dimension; COUNT ------50 HP Vertica Analytic Database (7.0.x) Page 183 of 1539 SQL Reference Manual SQL Functions (1 row) See Also l Analytic Functions l AVG [Aggregate] l SUM [Aggregate] l Using SQL Analytics l APPROXIMATE_COUNT_DISTINCT l APPROXIMATE_COUNT_DISTINCT_SYNOPSIS l APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS COVAR_POP Returns the population covariance for a set of expression pairs (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT COVAR_POP (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT COVAR_POP (Annual_salary, Employee_age) FROM employee_dimension; COVAR_POP -------------------9032.34810730019 (1 row) HP Vertica Analytic Database (7.0.x) Page 184 of 1539 SQL Reference Manual SQL Functions COVAR_SAMP Returns the sample covariance for a set of expression pairs (expression1 and expression2). The return value is of type DOUBLE PRECISION. 
The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax COVAR_SAMP (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT COVAR_SAMP (Annual_salary, Employee_age) FROM employee_dimension; COVAR_SAMP -------------------9033.25143244343 (1 row) MAX [Aggregate] Returns the greatest value of an expression over a group of rows. The return value is the same as the expression data type. Behavior Type Immutable Syntax MAX ( [ ALL | DISTINCT ] expression ) Parameters ALL | DISTINCT These parameters have no meaning in this context. expression Any expression for which the maximum value is calculated, typically a column reference. HP Vertica Analytic Database (7.0.x) Page 185 of 1539 SQL Reference Manual SQL Functions Notes The MAX() aggregate function is different from the MAX() analytic function, which returns the maximum value of an expression over a group of rows within a window. Example This example returns the largest value (dollar amount) of the sales_dollar_amount column. => SELECT MAX(sales_dollar_amount) AS highest_sale FROM store.store_sales_fact; highest_sale -------------600 (1 row) See Also l Analytic Functions l MIN [Aggregate] MIN [Aggregate] Returns the smallest value of an expression over a group of rows. The return value is the same as the expression data type. Behavior Type Immutable Syntax MIN ( [ ALL | DISTINCT ] expression ) Parameters ALL | DISTINCT Are meaningless in this context. expression Any expression for which the minimum value is calculated, typically a column reference. Notes The MIN() aggregate function is different from the MIN() analytic function, which returns the minimum value of an expression over a group of rows within a window. HP Vertica Analytic Database (7.0.x) Page 186 of 1539 SQL Reference Manual SQL Functions Example This example returns the lowest salary from the employee dimension table. => SELECT MIN(annual_salary) AS lowest_paid FROM employee_dimension; lowest_paid ------------1200 (1 row) See Also l Analytic Functions l MAX [Aggregate] l Using SQL Analytics REGR_AVGX Returns the average of the independent expression in an expression pair (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_AVGX (expression1,expression2) Parameters expression1 The dependent expression. Is of type DOUBLE PRECISION. expression2 The independent expression. Is of type DOUBLE PRECISION. Example => SELECT REGR_AVGX (Annual_salary, Employee_age) FROM employee_dimension; REGR_AVGX ----------39.321 (1 row) HP Vertica Analytic Database (7.0.x) Page 187 of 1539 SQL Reference Manual SQL Functions REGR_AVGY Returns the average of the dependent expression in an expression pair (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax REGR_AVGY (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. 
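Because pairs containing a NULL are eliminated before the average is taken, REGR_AVGY(y, x) should agree with an ordinary AVG of the dependent expression restricted to rows where the independent expression is also non-NULL. The following comparison is an illustrative sketch that reuses the employee_dimension columns from the example below; it is not part of the function's definition:

=> SELECT REGR_AVGY(Annual_salary, Employee_age) AS regr_avgy,
          AVG(CASE WHEN Employee_age IS NOT NULL THEN Annual_salary END) AS avg_of_pairs
   FROM employee_dimension;
   -- both columns are expected to return the same value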
Example => SELECT REGR_AVGY (Annual_salary, Employee_age) FROM employee_dimension; REGR_AVGY -----------58354.4913 (1 row) REGR_COUNT Returns the number of expression pairs (expression1 and expression2). The return value is of type INTEGER. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns 0. Syntax SELECT REGR_COUNT (expression1, expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. HP Vertica Analytic Database (7.0.x) Page 188 of 1539 SQL Reference Manual SQL Functions Example => SELECT REGR_COUNT (Annual_salary, Employee_age) FROM employee_dimension; REGR_COUNT -----------10000 (1 row) REGR_INTERCEPT Returns the y-intercept of the regression line determined by a set of expression pairs (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_INTERCEPT (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT REGR_INTERCEPT (Annual_salary, Employee_age) FROM employee_dimension; REGR_INTERCEPT -----------------59929.5490163437 (1 row) REGR_R2 Returns the square of the correlation coefficient of a set of expression pairs (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_R2 (expression1,expression2) HP Vertica Analytic Database (7.0.x) Page 189 of 1539 SQL Reference Manual SQL Functions Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT REGR_R2 (Annual_salary, Employee_age) FROM employee_dimension; REGR_R2 ---------------------5.17181631706311e-05 (1 row) REGR_SLOPE Returns the slope of the regression line, determined by a set of expression pairs (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_SLOPE (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT REGR_SLOPE (Annual_salary, Employee_age) FROM employee_dimension; REGR_SLOPE ------------------40.056400303749 (1 row) REGR_SXX Returns the sum of squares of the independent expression in an expression pair (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates HP Vertica Analytic Database (7.0.x) Page 190 of 1539 SQL Reference Manual SQL Functions expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_SXX (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. 
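The REGR_ functions are tied together by the usual least-squares identities: REGR_SLOPE equals REGR_SXY divided by REGR_SXX, REGR_INTERCEPT equals REGR_AVGY minus REGR_SLOPE times REGR_AVGX, and REGR_R2 equals REGR_SXY squared divided by the product of REGR_SXX and REGR_SYY (the example values shown in these sections satisfy all three). The following sketch checks the slope identity against the same employee_dimension columns; it is illustrative only:

=> SELECT REGR_SLOPE(Annual_salary, Employee_age) AS slope,
          REGR_SXY(Annual_salary, Employee_age)
        / REGR_SXX(Annual_salary, Employee_age) AS sxy_over_sxx
   FROM employee_dimension;
   -- the two columns are expected to match (approximately -40.06 for the example data)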
Example => SELECT REGR_SXX (Annual_salary, Employee_age) FROM employee_dimension; REGR_SXX -----------2254907.59 (1 row) REGR_SXY Returns the sum of products of the independent expression multiplied by the dependent expression in an expression pair (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_SXY (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT REGR_SXY (Annual_salary, Employee_age) FROM employee_dimension; REGR_SXY -------------------90323481.0730019 HP Vertica Analytic Database (7.0.x) Page 191 of 1539 SQL Reference Manual SQL Functions (1 row) REGR_SYY Returns the sum of squares of the dependent expression in an expression pair (expression1 and expression2). The return value is of type DOUBLE PRECISION. The function eliminates expression pairs where either expression in the pair is NULL. If no rows remain, the function returns NULL. Syntax SELECT REGR_SYY (expression1,expression2) Parameters expression1 The dependent expression, type DOUBLE PRECISION. expression2 The independent expression, type DOUBLE PRECISION. Example => SELECT REGR_SYY (Annual_salary, Employee_age) FROM employee_dimension; REGR_SYY -----------------69956728794707.2 (1 row) STDDEV [Aggregate] Note: The non-standard function STDDEV() is provided for compatibility with other databases. It is semantically identical to STDDEV_SAMP(). Evaluates the statistical sample standard deviation for each member of the group. The STDDEV() return value is the same as the square root of the VAR_SAMP() function: STDDEV(expression) = SQRT(VAR_SAMP(expression)) When VAR_SAMP() returns NULL, this function returns NULL. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 192 of 1539 SQL Reference Manual SQL Functions Syntax STDDEV ( expression ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. Notes The STDDEV() aggregate function is different from the STDDEV() analytic function, which computes the statistical sample standard deviation of the current row with respect to the group of rows within a window. Examples The following example returns the statistical sample standard deviation for each household ID from the customer_dimension table of the VMart example database: => SELECT STDDEV(household_id) FROM customer_dimension; STDDEV ----------------8651.5084240071 See Also l Analytic Functions l STDDEV_SAMP [Aggregate] l Using SQL Analytics STDDEV_POP [Aggregate] Evaluates the statistical population standard deviation for each member of the group. The STDDEV_ POP() return value is the same as the square root of the VAR_POP() function STDDEV_POP(expression) = SQRT(VAR_POP(expression)) When VAR_SAMP() returns NULL, this function returns NULL. HP Vertica Analytic Database (7.0.x) Page 193 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax STDDEV_POP ( expression ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. 
Notes The STDDEV_POP() aggregate function is different from the STDDEV_POP() analytic function, which evaluates the statistical population standard deviation for each member of the group of rows within a window. Examples The following example returns the statistical population standard deviation for each household ID in the customer table. => SELECT STDDEV_POP(household_id) FROM customer_dimension; STDDEV_POP -----------------8651.41895973367 (1 row) See Also l Analytic Functions l Using SQL Analytics STDDEV_SAMP [Aggregate] Evaluates the statistical sample standard deviation for each member of the group. The STDDEV_ SAMP() return value is the same as the square root of the VAR_SAMP() function: STDDEV_SAMP(expression) = SQRT(VAR_SAMP(expression)) When VAR_SAMP() returns NULL, this function returns NULL. HP Vertica Analytic Database (7.0.x) Page 194 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax STDDEV_SAMP ( expression ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. Notes l STDDEV_SAMP() is semantically identical to the non-standard function, STDDEV(), which is provided for compatibility with other databases. l The STDDEV_SAMP() aggregate function is different from the STDDEV_SAMP() analytic function, which computes the statistical sample standard deviation of the current row with respect to the group of rows within a window. Examples The following example returns the statistical sample standard deviation for each household ID from the customer dimension table. => SELECT STDDEV_SAMP(household_id) FROM customer_dimension; stddev_samp -----------------8651.50842400771 (1 row) See Also l Analytic Functions l STDDEV [Aggregate] l Using SQL Analytics HP Vertica Analytic Database (7.0.x) Page 195 of 1539 SQL Reference Manual SQL Functions SUM [Aggregate] Computes the sum of an expression over a group of rows. It returns a DOUBLE PRECISION value for a floating-point expression. Otherwise, the return value is the same as the expression data type. Behavior Type Immutable Syntax SUM ( [ ALL | DISTINCT ] expression ) Parameters ALL Invokes the aggregate function for all rows in the group (default) DISTINCT Invokes the aggregate function for all distinct non-null values of the expression found in the group expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. Notes l The SUM() aggregate function is different from the SUM() analytic function, which computes the sum of an expression over a group of rows within a window. l If you encounter data overflow when using SUM(), use SUM_FLOAT() which converts the data to a floating point. Example This example returns the total sum of the product_cost column. => SELECT SUM(product_cost) AS cost FROM product_dimension; cost --------9042850 (1 row) HP Vertica Analytic Database (7.0.x) Page 196 of 1539 SQL Reference Manual SQL Functions See Also l AVG [Aggregate] l COUNT [Aggregate] Numeric Data Types l l SUM_FLOAT [Aggregate] Computes the sum of an expression over a group of rows. It returns a DOUBLE PRECISION value for the expression, regardless of the expression type. Behavior Type Immutable Syntax SUM_FLOAT ( [ ALL | DISTINCT ] expression ) Parameters ALL Invokes the aggregate function for all rows in the group (default). 
DISTINCT Invokes the aggregate function for all distinct non-null values of the expression found in the group. expression Any expression whose result is type DOUBLE PRECISION. Example The following example returns the floating-point sum of the average price from the product table: => SELECT SUM_FLOAT(average_competitor_price) AS cost FROM product_dimension; cost ---------18181102 (1 row) HP Vertica Analytic Database (7.0.x) Page 197 of 1539 SQL Reference Manual SQL Functions VAR_POP [Aggregate] Evaluates the population variance for each member of the group. This is defined as the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining. (SUM(expression*expression) - SUM(expression)*SUM(expression) /COUNT(expression)) / COUNT (expression) Behavior Type Immutable Syntax VAR_POP ( expression ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. Notes The VAR_POP() aggregate function is different from the VAR_POP() analytic function, which computes the population variance of the current row with respect to the group of rows within a window. Examples The following example returns the population variance for each household ID in the customer table. => SELECT VAR_POP(household_id) FROM customer_dimension; var_pop -----------------74847050.0168393 (1 row) VAR_SAMP [Aggregate] Evaluates the sample variance for each row of the group. This is defined as the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining minus 1 (one). HP Vertica Analytic Database (7.0.x) Page 198 of 1539 SQL Reference Manual SQL Functions (SUM(expression*expression) - SUM(expression) *SUM(expression) /COUNT(expression)) / (COU NT(expression) -1) Behavior Type Immutable Syntax VAR_SAMP ( expression ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. Notes l VAR_SAMP() is semantically identical to the non-standard function, VARIANCE(), which is provided for compatibility with other databases. l The VAR_SAMP() aggregate function is different from the VAR_SAMP() analytic function, which computes the sample variance of the current row with respect to the group of rows within a window. Examples The following example returns the sample variance for each household ID in the customer table. => SELECT VAR_SAMP(household_id) FROM customer_dimension; var_samp -----------------74848598.0106764 (1 row) See Also Analytic Functions l VARIANCE [Aggregate] l l HP Vertica Analytic Database (7.0.x) Page 199 of 1539 SQL Reference Manual SQL Functions VARIANCE [Aggregate] Note: The non-standard function VARIANCE() is provided for compatibility with other databases. It is semantically identical to VAR_SAMP(). Evaluates the sample variance for each row of the group. This is defined as the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining minus 1 (one). (SUM(expression*expression) - SUM(expression) *SUM(expression) /COUNT(expression)) / (COU NT(expression) -1) Behavior Type Immutable Syntax VARIANCE ( expression ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. 
The function returns the same data type as the numeric data type of the argument. Notes The VARIANCE() aggregate function is different from the VARIANCE() analytic function, which computes the sample variance of the current row with respect to the group of rows within a window. Examples The following example returns the sample variance for each household ID in the customer table. => SELECT VARIANCE(household_id) FROM customer_dimension; variance -----------------74848598.0106764 (1 row) HP Vertica Analytic Database (7.0.x) Page 200 of 1539 SQL Reference Manual SQL Functions See Also Analytic Functions l VAR_SAMP [Aggregate] l l HP Vertica Analytic Database (7.0.x) Page 201 of 1539 SQL Reference Manual SQL Functions Analytic Functions Note: All analytic functions in this section that have an aggregate counterpart are appended with [Analytics] in the heading to avoid confusion between the two. HP Vertica analytics are SQL functions based on the ANSI 99 standard. These functions handle complex analysis and reporting tasks such as: l Rank the longest-standing customers in a particular state l Calculate the moving average of retail volume over a specified time l Find the highest score among all students in the same grade l Compare the current sales bonus each salesperson received against his or her previous bonus Analytic functions return aggregate results but they do not group the result set. They return the group value multiple times, once per record. You can sort these group values, or partitions, using a window ORDER BY clause, but the order affects only the function result set, not the entire query result set. This ordering concept is described more fully later. Analytic Function Syntax ANALYTIC_FUNCTION( argument-1, ..., argument-n ) OVER( [ window_partition_clause ] [ window_order_clause ] [ window_frame_clause ] ) Analytic Syntactic Construct ANALYTIC_FUNCTION() HP Vertica provides a number of analytic functions that allow advanced data manipulation and analysis. Each of these functions takes one or more arguments. OVER(...) Specifies partitioning, ordering, and window framing for the function— important elements that determine what data the analytic function takes as input with respect to the current row. The OVER() clause is evaluated after the FROM, WHERE, GROUP BY, and HAVING clauses. The SQL OVER() clause must follow the analytic function. HP Vertica Analytic Database (7.0.x) Page 202 of 1539 SQL Reference Manual SQL Functions window_partition_clause Groups the rows in the input table by a given list of columns or expressions. The window_partition_clause is optional; if you omit it, the rows are not grouped, and the analytic function applies to all rows in the input set as a single partition. See window_partition_clause. window_order_clause Sorts the rows specified by the OVER() operator and supplies the ordered set of rows to the analytic function. If the partition clause is present, the window_order_clause applies within each partition. The order clause is optional. If you do not use it, the selection set is not sorted. See window_order_clause. window_frame_clause Used by only some analytic functions. If you include the frame clause in the OVER() statement, which specifies the beginning and end of the window relative to the current row, the analytic function applies only to a subset of the rows defined by the partition clause. This subset changes as the rows in the partition change (called a moving window). See window_frame_clause. 
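For example, the following sketch, which is not part of the original reference pages, pulls the three optional window clauses together against the emp table used in later examples in this section. Within each department (the partition), it sums the salary of the current row and the two rows that precede it in salary order (the frame):

=> SELECT deptno, sal, empno,
          SUM(sal) OVER (PARTITION BY deptno
                         ORDER BY sal
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS running_sum
   FROM emp;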
Notes Analytic functions: l Require the OVER() clause. However, depending on the function, the window_frame_clause and window_order_clause might not apply. For example, when used with analytic aggregate functions like SUM(x), you can use the OVER() clause without supplying any of the windowing clauses; in this case, the aggregate returns the same aggregated value for each row of the result set. l Are allowed only in the SELECT and ORDER BY clauses. l Can be used in a subquery or in the parent query but cannot be nested; for example, the following query is not allowed: => SELECT MEDIAN(RANK() OVER(ORDER BY sal) OVER()). l WHERE, GROUP BY and HAVING operators are technically not part of the analytic function; however, they determine on which rows the analytic functions operate. See Also l Using SQL Analytics l Optimizing GROUP BY Queries HP Vertica Analytic Database (7.0.x) Page 203 of 1539 SQL Reference Manual SQL Functions window_partition_clause Window partitioning is optional. When specified, the window_partition_clause divides the rows in the input based on user-provided expressions, such as aggregation functions like SUM(x). Window partitioning is similar to the GROUP BY clause except that it returns only one result row per input row. If you omit the window_partition_clause, all input rows are treated as a single partition. The analytic function is computed per partition and starts over again (resets) at the beginning of each subsequent partition. The window_partition_clause is specified within the OVER() clause. Syntax OVER ( PARTITION BY expression [ , ... ] ) Parameters expression Expression on which to sort the partition on. May involve columns, constants, or an arbitrary expression formed on columns. For examples, see Window Partitioning in the Programmer's Guide. window_order_clause Sorts the rows specified by the OVER() clause and specifies whether data is sorted in ascending or descending order as well as the placement of null values. For example: ORDER BY expr_list [ ASC | DESC ] [ NULLS { FIRST | LAST | AUTO ] The ordering of the data affects the results. Using ORDER BY in an OVER clause changes the default window to RANGE UNBOUNDED PRECEDING AND CURRENT ROW, which is described in the window_frame_clause. The following table shows the default null placement, with bold clauses to indicate what is implicit: Ordering Null placement ORDER BY column1 ORDER BY a ASC NULLS LAST ORDER BY column1 ASC ORDER BY a ASC NULLS LAST ORDER BY column1 DESC ORDER BY a DESC NULLS FIRST Because the window_order_clause is different from a query's final ORDER BY clause, window ordering might not guarantee the final result order; it specifies only the order within a window result set, supplying the ordered set of rows to the window_frame_clause (if present), to the analytic function, or to both. Use the SQL ORDER BY clause to guarantee ordering of the final result set. HP Vertica Analytic Database (7.0.x) Page 204 of 1539 SQL Reference Manual SQL Functions Syntax OVER ( ORDER BY expression [ { ASC | DESC } ] ... [ NULLS { FIRST | LAST | AUTO } ] [, expression ...] ) Parameters expression Expression on which to sort the partition, which may involve columns, constants, or an arbitrary expression formed on columns. ASC | DESC Specifies the ordering sequence as ascending (default) or descending. NULLS { FIRST | LAST | AUTO } Indicates the position of nulls in the ordered sequence as either first or last. The order makes nulls compare either high or low with respect to non-null values. 
If the sequence is specified as ascending order, ASC NULLS FIRST implies that nulls are smaller than other non-null values. ASC NULLS LAST implies that nulls are larger than non-null values. The opposite is true for descending order. If you specify NULLS AUTO, HP Vertica chooses the most efficient placement of nulls (for example, either NULLS FIRST or NULLS LAST), based on your query. The default is ASC NULLS LAST and DESC NULLS FIRST. For more information, see:
l "NULL Placement By Analytic Functions"
l "Designing Tables to Minimize Run-Time Sorting of NULL Values in Analytic Functions"
The following analytic functions require the window_order_clause:
l RANK() / DENSE_RANK()
l LEAD() / LAG()
l PERCENT_RANK() / CUME_DIST()
l NTILE()
You can also use the window_order_clause with aggregation functions, such as SUM(x). The ORDER BY clause is optional for the ROW_NUMBER() function. The ORDER BY clause is not allowed with the following functions:
l PERCENTILE_CONT() / PERCENTILE_DISC()
l MEDIAN()
For examples, see Window Ordering in the Programmer's Guide.
window_frame_clause
Allowed for some analytic functions in the analytic OVER() clause, window framing represents a unique construct, called a moving window. It defines which values in the partition are evaluated relative to the current row. You specify a window frame in terms of either logical intervals (such as time) using the RANGE keyword or as a physical number of rows before and/or after the current row using the ROWS keyword. The CURRENT ROW is the next row for which the analytic function computes results. As the current row advances, the window boundaries are recomputed (move) along with it, determining which rows fall into the current window. An analytic function with a window frame specification is computed for each row based on the rows that fall into the window relative to that row. If you omit the window_frame_clause, the default window is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
Syntax
{ ROWS | RANGE }
{
  { BETWEEN
      { UNBOUNDED PRECEDING | CURRENT ROW | constant-value { PRECEDING | FOLLOWING } }
    AND
      { UNBOUNDED FOLLOWING | CURRENT ROW | constant-value { PRECEDING | FOLLOWING } } }
  | { { UNBOUNDED PRECEDING | CURRENT ROW | constant-value PRECEDING } }
}
Parameters
ROWS | RANGE The ROWS and RANGE keywords define the window frame type.
ROWS specifies a window as a physical offset and defines the window's start and end point by the number of rows before or after the current row. The value can be INTEGER data type only.
RANGE specifies the window as a logical offset, such as time. The range value must match the window_order_clause data type, which can be NUMERIC, DATE/TIME, FLOAT or INTEGER.
Note: The value returned by an analytic function with a logical offset is always deterministic. However, the value returned by an analytic function with a physical offset could produce nondeterministic results unless the ordering expression results in a unique ordering. You might have to specify multiple columns in the window_order_clause to achieve this unique ordering.
BETWEEN ... AND Specifies a start point and end point for the window.
The first expression (before AND) defines the start point and the second expression (after AND) defines the end point. Note: If you use the keyword BETWEEN, you must also use AND. UNBOUNDED PRECEDING Within a partition, indicates that the window frame starts at the first row of the partition. This start-point specification cannot be used as an end-point specification, and the default is RANGE UNBOUNDED PRECEDING AND CURRENT ROW UNBOUNDED FOLLOWING Within a partition, indicates that the window frame ends at the last row of the partition. This end-point specification cannot be used as a start-point specification. HP Vertica Analytic Database (7.0.x) Page 207 of 1539 SQL Reference Manual SQL Functions CURRENT ROW As a start point, CURRENT ROW specifies that the window begins at the current row or value, depending on whether you have specified ROW or RANGE, respectively. In this case, the end point cannot be constant-value PRECEDING. As an end point, CURRENT ROW specifies that the window ends at the current row or value, depending on whether you have specified ROW or RANGE, respectively. In this case the start point cannot be constant-value FOLLOWING. HP Vertica Analytic Database (7.0.x) Page 208 of 1539 SQL Reference Manual SQL Functions constant-value {PRECEDING | FOLLOWING } For RANGE or ROW: l If constant-value FOLLOWING is the start point, the end point must be constant-value FOLLOWING. l If constant-value PRECEDING is the end point, the start point must be constant-value PRECEDING. l If you specify a logical window that is defined by a time interval in NUMERIC format, you might need to use conversion functions. If you specified ROWS: l constant-value is a physical offset. It must be a constant or expression and must evaluate to an INTEGER data type value. l If constant-value is part of the start point, it must evaluate to a row before the end point. If you specified RANGE: l constant-value is a logical offset. It must be a constant or expression that evaluates to a positive numeric value or an INTERVAL literal. l If constant-value evaluates to a NUMERIC value, the ORDER BY column type must be a NUMERIC data type.. l If the constant-value evaluates to an INTERVAL DAY TO SECOND subtype, the ORDER BY column type can only be TIMESTAMP, TIME, DATE, or INTERVAL DAY TO SECOND. l If the constant-value evaluates to an INTERVAL YEAR TO MONTH, the ORDER BY column type can only be TIMESTAMP, DATE, or INTERVAL YEAR TO MONTH. l You can specify only one expression in the window_order_clause. Window Aggregates Analytic functions that take the window_frame_clause are called window aggregates, and they return information such as moving averages and cumulative results. To use the following functions HP Vertica Analytic Database (7.0.x) Page 209 of 1539 SQL Reference Manual SQL Functions as window (analytic) aggregates, instead of basic aggregates, specify both an ORDER BY clause (window_order_clause) and a moving window (window_frame_clause) in the OVER() clause. Used by only some analytic functions. If you include the frame clause in the OVER() statement, which specifies the beginning and end of the window relative to the current row, the analytic function applies only to a subset of the rows defined by the partition clause. This subset changes as the rows in the partition change (called a moving window). See window_frame_clause. Sorts the rows specified by the window_partition_clause and supplies an ordered set of rows to the window_frame_clause (if present), to the analytic function, or to both. 
The window_order_ clause specifies whether data is returned in ascending or descending order and specifies where null values appear in the sorted result as either first or last. The ordering of the data affects the results. Note: The window_order_clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set. Within a partition, UNBOUNDED PRECEDING/FOLLOWING means beginning/end of partition. If you omit the window_frame_clause but you specify the window_order_clause, the system provides the default window of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. The following analytic functions take the window_frame_clause: l AVG() l COUNT() l MAX() and MIN() l SUM() l STDDEV(), STDDEV_POP(), and STDDEV_SAMP() l VARIANCE(), VAR_POP(), and VAR_SAMP() Note: FIRST_VALUE and LAST_VALUE functions also accept the window_frame_clause, but they are analytic functions only and have no aggregate counterpart. EXPONENTIAL_ MOVING_AVERAGE, LAG, and LEAD analytic functions do not take the window_frame_ clause. If you use a window aggregate with an empty OVER() clause, there is no moving window, and the function is used as a reporting function, where the entire input is treated as one partition. The value returned by an analytic function with a logical offset is always deterministic. However, the value returned by an analytic function with a physical offset could produce nondeterministic results unless the ordering expression results in a unique ordering. You might have to specify multiple columns in the window_order_clause to achieve this unique ordering. See Window Framing in the Programmer's Guide for examples. HP Vertica Analytic Database (7.0.x) Page 210 of 1539 SQL Reference Manual SQL Functions named_windows You can use the WINDOW clause to name your windows and avoid typing long OVER() clause syntax. The window_partition_clause is defined in the named window specification, not in the OVER() clause, and a window definition cannot contain a window_frame_clause. Each window defined in the window_definition_clause must have a unique name. Syntax WINDOW window_name AS ( window_definition_clause ); Parameters window_name User-supplied name of the analytics window. window_definition_clause [ window_partition_clause ] [ window_order_clause ] Examples In the following example, RANK() and DENSE_RANK() use the partitioning and ordering specifications in the window definition for a window named w: => SELECT RANK() OVER w , DENSE_RANK() OVER w FROM employee_dimension WINDOW w AS (PARTITION BY employee_region ORDER by annual_salary); Though analytic functions can reference a named window to inherit the window_partition_clause, you can use OVER() to define your own window_order_clause, but only if the window_definition_ clause did not already define it. Because ORDER by annual_salary was already defined in the WINDOW clause in the previous example, the following query would return an error. => SELECT RANK() OVER(w ORDER BY annual_salary ASC), DENSE_RANK() OVER(w ORDER BY annual_salary DESC) FROM employee_dimension WINDOW w AS (PARTITION BY employee_region); ERROR: cannot override ORDER BY clause of window "w" You can reference window names within their scope only. 
For example, because named window w1 in the following query is defined before w2, w2 is within the scope of w1:
=> SELECT RANK() OVER(w1 ORDER BY sal DESC),
          RANK() OVER w2
   FROM EMP WINDOW w1 AS (PARTITION BY deptno), w2 AS (w1 ORDER BY sal);
AVG [Analytic]
Computes an average of an expression in a group within a window.
Behavior Type Immutable
Syntax
AVG ( expression ) OVER (
... [ window_partition_clause ]
... [ window_order_clause ]
... [ window_frame_clause ] )
Parameters
expression The value whose average is calculated over a set of rows. Can be any expression resulting in DOUBLE PRECISION.
OVER(...) See Analytic Functions.
Notes
AVG() takes as an argument any numeric data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the argument's numeric data type.
Examples
The following query finds the sales for each calendar month and returns a running/cumulative average (sometimes called a moving average) using the default window of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW:
=> SELECT calendar_month_number_in_year, SUM(product_price) AS sales,
          AVG(SUM(product_price)) OVER (ORDER BY calendar_month_number_in_year)
   FROM product_dimension, date_dimension, inventory_fact
   WHERE date_dimension.date_key = inventory_fact.date_key
     AND product_dimension.product_key = inventory_fact.product_key
   GROUP BY calendar_month_number_in_year;
 calendar_month_number_in_year |  sales   |     ?column?
-------------------------------+----------+------------------
                             1 | 23869547 | 23869547
                             2 | 19604661 | 21737104
                             3 | 22877913 | 22117373.6666667
                             4 | 22901263 | 22313346
                             5 | 23670676 | 22584812
                             6 | 22507600 | 22571943.3333333
                             7 | 21514089 | 22420821.2857143
                             8 | 24860684 | 22725804.125
                             9 | 21687795 | 22610469.7777778
                            10 | 23648921 | 22714314.9
                            11 | 21115910 | 22569005.3636364
                            12 | 24708317 | 22747281.3333333
(12 rows)
To return a moving average that is not a running (cumulative) average, the window should specify ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING:
=> SELECT calendar_month_number_in_year, SUM(product_price) AS sales,
          AVG(SUM(product_price)) OVER (ORDER BY calendar_month_number_in_year
            ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING)
   FROM product_dimension, date_dimension, inventory_fact
   WHERE date_dimension.date_key = inventory_fact.date_key
     AND product_dimension.product_key = inventory_fact.product_key
   GROUP BY calendar_month_number_in_year;
See Also
l AVG [Aggregate]
l COUNT [Analytic]
l SUM [Analytic]
l Using SQL Analytics
CONDITIONAL_CHANGE_EVENT [Analytic]
Assigns an event window number to each row, starting from 0, and increments by 1 when the result of evaluating the argument expression on the current row differs from that on the previous row.
Behavior Type Immutable
Syntax
CONDITIONAL_CHANGE_EVENT ( expression ) OVER (
... [ window_partition_clause ]
... window_order_clause )
Parameters
expression SQL scalar expression that is evaluated on an input record. The result of expression can be of any data type.
OVER(...) See Analytic Functions.
Notes
The analytic window_order_clause is required but the window_partition_clause is optional.
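The Example below queries a TickStore table whose definition does not appear on these pages; a minimal sketch of a compatible schema, with column names and types inferred from the query itself, might be:

=> CREATE TABLE TickStore (ts TIMESTAMP, symbol VARCHAR(8), bid FLOAT);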
Example => SELECT CONDITIONAL_CHANGE_EVENT(bid) OVER (PARTITION BY symbol ORDER BY ts) AS cce FROM TickStore; The system returns an error when no ORDER BY clause is present: => SELECT CONDITIONAL_CHANGE_EVENT(bid) OVER (PARTITION BY symbol) AS cce FROM TickStore; ERROR: conditional_change_event must contain an ORDER BY clause within its analytic clause For more examples, see Event-Based Windows in the Programmer's Guide. See Also l CONDITIONAL_TRUE_EVENT [Analytic] l ROW_NUMBER [Analytic] l Using Time Series Analytics l Event-Based Windows CONDITIONAL_TRUE_EVENT [Analytic] Assigns an event window number to each row, starting from 0, and increments the number by 1 when the result of the boolean argument expression evaluates true. For example, given a sequence of values for column a, as follows: ( 1, 2, 3, 4, 5, 6 ) CONDITIONAL_TRUE_EVENT(a > 3) returns 0, 0, 0, 1, 2, 3. HP Vertica Analytic Database (7.0.x) Page 214 of 1539 SQL Reference Manual SQL Functions Behavior Type: Immutable Syntax CONDITIONAL_TRUE_EVENT ( boolean-expression ) OVER ... ( [ window_partition_clause ] ... window_order_clause ) Parameters boolean-expression SQL scalar expression that is evaluated on an input record, type BOOLEAN. OVER(...) See Analytic Functions. Notes The analytic window_order_clause is required but the window_partition_clause is optional. Example > SELECT CONDITIONAL_TRUE_EVENT(bid > 10.6) OVER(PARTITION BY bid ORDER BY ts) AS cte FROM Tickstore; The system returns an error if the ORDER BY clause is omitted: > SELECT CONDITIONAL_TRUE_EVENT(bid > 10.6) OVER(PARTITION BY bid) AS cte FROM Tickstore; ERROR: conditional_true_event must contain an ORDER BY clause within its analytic clause For more examples, see Event-Based Windows in the Programmer's Guide. See Also l CONDITIONAL_CHANGE_EVENT [Analytic] l Using Time Series Analytics l Event-Based Windows HP Vertica Analytic Database (7.0.x) Page 215 of 1539 SQL Reference Manual SQL Functions COUNT [Analytic] Counts occurrences within a group within a window. If you specify * or some non-null constant, COUNT() counts all rows. Behavior Type Immutable Syntax COUNT ... [ ... [ ... [ ( expression ) OVER ( window_partition_clause ] window_order_clause ] window_frame_clause ] ) Parameters expression Returns the number of rows in each group for which the expression is not null. Can be any expression resulting in BIGINT. OVER(...) See Analytic Functions. Example Using the schema defined in Window Framing in the Programmer's Guide, the following COUNT function does not specify an order_clause or a frame_clause; otherwise it would be treated as a window aggregate. Think of the window of reporting aggregates as UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING. => SELECT deptno, sal, empno, COUNT(sal) OVER (PARTITION BY deptno) AS count FROM emp; deptno | sal | empno | count --------+-----+-------+------10 | 101 | 1 | 2 10 | 104 | 4 | 2 20 | 110 | 10 | 6 20 | 110 | 9 | 6 20 | 109 | 7 | 6 20 | 109 | 6 | 6 20 | 109 | 8 | 6 20 | 109 | 11 | 6 30 | 105 | 5 | 3 30 | 103 | 3 | 3 30 | 102 | 2 | 3 Using ORDER BY sal creates a moving window query with default window: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. 
HP Vertica Analytic Database (7.0.x) Page 216 of 1539 SQL Reference Manual SQL Functions => SELECT deptno, sal, empno, COUNT(sal) OVER (PARTITION BY deptno ORDER BY sal) AS count FROM emp; deptno | sal | empno | count --------+-----+-------+------10 | 101 | 1 | 1 10 | 104 | 4 | 2 20 | 100 | 11 | 1 20 | 109 | 7 | 4 20 | 109 | 6 | 4 20 | 109 | 8 | 4 20 | 110 | 10 | 6 20 | 110 | 9 | 6 30 | 102 | 2 | 1 30 | 103 | 3 | 2 30 | 105 | 5 | 3 Using the VMart schema, the following query finds the number of employees who make less than or equivalent to the hourly rate of the current employee. The query returns a running/cumulative average (sometimes called a moving average) using the default window of RANGE UNBOUNDED PRECEDING AND CURRENT ROW: => SELECT employee_last_name AS "last_name", hourly_rate, COUNT(*) OVER (ORDER BY hourly_rate) AS moving_count from employee_dimension; last_name | hourly_rate | moving_count ------------+-------------+-------------Gauthier | 6 | 4 Taylor | 6 | 4 Jefferson | 6 | 4 Nielson | 6 | 4 McNulty | 6.01 | 11 Robinson | 6.01 | 11 Dobisz | 6.01 | 11 Williams | 6.01 | 11 Kramer | 6.01 | 11 Miller | 6.01 | 11 Wilson | 6.01 | 11 Vogel | 6.02 | 14 Moore | 6.02 | 14 Vogel | 6.02 | 14 Carcetti | 6.03 | 19 ... To return a moving average that is not also a running (cumulative) average, the window should specify ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING: => SELECT employee_last_name AS "last_name", hourly_rate, COUNT(*) OVER (ORDER BY hourly_rate ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING) AS moving_count from employee_dimension; HP Vertica Analytic Database (7.0.x) Page 217 of 1539 SQL Reference Manual SQL Functions See Also l COUNT [Aggregate] l AVG [Analytic] l SUM [Analytic] l Using SQL Analytics CUME_DIST [Analytic] Calculates the cumulative distribution, or relative rank, of the current row with regard to other rows in the same partition within a window. CUME_DIST() returns a number greater then 0 and less then or equal to 1, where the number represents the relative position of the specified row within a group of N rows. For a row x (assuming ASC ordering), the CUME_DIST of x is the number of rows with values lower than or equal to the value of x, divided by the number of rows in the partition. In a group of three rows, for example, the cumulative distribution values returned would be 1/3, 2/3, and 3/3. Note: Because the result for a given row depends on the number of rows preceding that row in the same partition, HP recommends that you always specify a window_order_clause when you call this function. Behavior Type Immutable Syntax CUME_DIST ( ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters OVER(...) See Analytic Functions. Notes The analytic window_order_clause is required but the window_partition_clause is optional. HP Vertica Analytic Database (7.0.x) Page 218 of 1539 SQL Reference Manual SQL Functions Examples The following example returns the cumulative distribution of sales for different transaction types within each month of the first quarter. 
=> SELECT calendar_month_name AS month, tender_type, SUM(sales_quantity), CUME_DIST() OVER (PARTITION BY calendar_month_name ORDER BY SUM(sales_quantity)) AS CUME_DIST FROM store.store_sales_fact JOIN date_dimension USING(date_key) WHERE calendar_month_name IN ('January','February','March') AND tender_type NOT LIKE 'Other' GROUP BY calendar_month_name, tender_type; month | tender_type | SUM | CUME_DIST ----------+-------------+--------+----------March | Credit | 469858 | 0.25 March | Cash | 470449 | 0.5 March | Check | 473033 | 0.75 March | Debit | 475103 | 1 January | Cash | 441730 | 0.25 January | Debit | 443922 | 0.5 January | Check | 446297 | 0.75 January | Credit | 450994 | 1 February | Check | 425665 | 0.25 February | Debit | 426726 | 0.5 February | Credit | 430010 | 0.75 February | Cash | 430767 | 1 (12 rows) See Also l PERCENT_RANK [Analytic] l PERCENTILE_DISC [Analytic] l Using SQL Analytics DENSE_RANK [Analytic] Computes the relative rank of each row returned from a query with respect to the other rows, based on the values of the expressions in the window ORDER BY clause. The data within a group is sorted by the ORDER BY clause and then a numeric ranking is assigned to each row in turn starting with 1 and continuing from there. The rank is incremented every time the values of the ORDER BY expressions change. Rows with equal values receive the same rank (nulls are considered equal in this comparison). A DENSE_RANK() function returns a ranking number without any gaps, which is why it is called "DENSE." HP Vertica Analytic Database (7.0.x) Page 219 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax DENSE_RANK ( ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters OVER(...) See Analytic Functions. Notes l The analytic window_order_clause is required but the window_partition_clause is optional. l The ranks are consecutive integers beginning with 1. The largest rank value is the number of unique values returned by the query. l The primary difference between DENSE_RANK() and RANK() is that RANK leaves gaps when ranking records whereas DENSE_RANK leaves no gaps. For example, N records occupy a particular position (say, a tie for rank X), RANK assigns all those records with rank X and skips the next N ranks, therefore the next assigned rank is X+N. DENSE_RANK places all the records in that position only—it does not skip any ranks. If there is a tie at the third position with two records having the same value, RANK and DENSE_ RANK place both the records in the third position, but RANK places the next record at the fifth position, while DENSE_RANK places the next record at the fourth position. l If you omit NULLS FIRST | LAST | AUTO, the ordering of the NULL values depends on the ASC or DESC arguments. NULL values are considered larger than any other value. If the ordering sequence is ASC, then nulls appear last; nulls appear first otherwise. Nulls are considered equal to other nulls and, therefore, the order in which nulls are presented is non-deterministic. Example The following example shows the difference between RANK and DENSE_RANK when ranking customers by their annual income. 
Notice that RANK has a tie at 10 and skips 11, while DENSE_RANK leaves no gaps in the ranking sequence: => SELECT customer_name, SUM(annual_income), RANK () OVER (ORDER BY TO_CHAR(SUM(annual_income),'100000') DESC) rank, DENSE_RANK () OVER (ORDER BY TO_CHAR(SUM(annual_income),'100000') DESC) dense_rank FROM customer_dimension GROUP BY customer_name LIMIT 15; HP Vertica Analytic Database (7.0.x) Page 220 of 1539 SQL Reference Manual SQL Functions customer_name | sum | rank | dense_rank ---------------------+-------+------+-----------Brian M. Garnett | 99838 | 1 | 1 Tanya A. Brown | 99834 | 2 | 2 Tiffany P. Farmer | 99826 | 3 | 3 Jose V. Sanchez | 99673 | 4 | 4 Marcus D. Rodriguez | 99631 | 5 | 5 Alexander T. Nguyen | 99604 | 6 | 6 Sarah G. Lewis | 99556 | 7 | 7 Ruth Q. Vu | 99542 | 8 | 8 Theodore T. Farmer | 99532 | 9 | 9 Daniel P. Li | 99497 | 10 | 10 Seth E. Brown | 99497 | 10 | 10 Matt X. Gauthier | 99402 | 12 | 11 Rebecca W. Lewis | 99296 | 13 | 12 Dean L. Wilson | 99276 | 14 | 13 Tiffany A. Smith | 99257 | 15 | 14 (15 rows) See Also l RANK [Analytic] l Using SQL Analytics EXPONENTIAL_MOVING_AVERAGE [Analytic] Calculates the exponential moving average of expression E with smoothing factor X. The exponential moving average (EMA) is calculated by adding the previous EMA value to the current data point scaled by the smoothing factor, as in the following formula, where: l EMA0 is the previous row's EMA value l X is the smoothing factor l E is the current data point: EMA = EMA0 + (X * (E - EMA0)) EXPONENTIAL_MOVING_AVERAGE() is different from a simple moving average in that it provides a more stable picture of changes to data over time. Behavior Type Immutable Syntax EXPONENTIAL_MOVING_AVERAGE ( E , X ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) HP Vertica Analytic Database (7.0.x) Page 221 of 1539 SQL Reference Manual SQL Functions Parameters E The value whose average is calculated over a set of rows. Can be INTEGER, FLOAT or NUMERIC type and must be a constant. X A positive FLOAT value between 0 and 1 that is used as the smoothing factor. OVER(...) See Analytic Functions. Notes l The analytic window_order_clause is required but the window_partition_clause is optional. l There is no [Aggregate] equivalent of this function because of its unique semantics. l The EXPONENTIAL_MOVING_AVERAGE() function also works at the row level; for example, EMA assumes the data in a given column is sampled at uniform intervals. If the users' data points are sampled at non-uniform intervals, they should run the time series gap filling and interpolation (GFI) operations before EMA(). Examples The following example uses time series gap filling and interpolation (GFI) first in a subquery, and then performs an EXPONENTIAL_MOVING_AVERAGE operation on the subquery result. 
Create a simple four-column table:
=> CREATE TABLE ticker(
     time TIMESTAMP,
     symbol VARCHAR(8),
     bid1 FLOAT,
     bid2 FLOAT
   );
Insert some data, including nulls, so GFI can do its interpolation and gap filling:
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:00', 'ABC', 60.45, 60.44);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:01', 'ABC', 60.49, 65.12);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:02', 'ABC', 57.78, 59.25);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:03', 'ABC', null, 65.12);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:04', 'ABC', 67.88, null);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:00', 'XYZ', 47.55, 40.15);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:01', 'XYZ', 44.35, 46.78);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:02', 'XYZ', 71.56, 75.78);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:03', 'XYZ', 85.55, 70.21);
=> INSERT INTO ticker VALUES ('2009-07-12 03:00:04', 'XYZ', 45.55, 58.65);
=> COMMIT;
Note: During gap filling and interpolation, HP Vertica takes the closest non-null value on either side of the time slice and uses that value. For example, if you use a linear interpolation scheme and you do not specify IGNORE NULLS, and your data has one real value and one null, the result is null. If the value on either side is null, the result is null. See When Time Series Data Contains Nulls in the Programmer's Guide for details.
Query the table that you just created so you can see the output:
=> SELECT * FROM ticker;
        time         | symbol | bid1  | bid2
---------------------+--------+-------+-------
 2009-07-12 03:00:00 | ABC    | 60.45 | 60.44
 2009-07-12 03:00:01 | ABC    | 60.49 | 65.12
 2009-07-12 03:00:02 | ABC    | 57.78 | 59.25
 2009-07-12 03:00:03 | ABC    |       | 65.12
 2009-07-12 03:00:04 | ABC    | 67.88 |
 2009-07-12 03:00:00 | XYZ    | 47.55 | 40.15
 2009-07-12 03:00:01 | XYZ    | 44.35 | 46.78
 2009-07-12 03:00:02 | XYZ    | 71.56 | 75.78
 2009-07-12 03:00:03 | XYZ    | 85.55 | 70.21
 2009-07-12 03:00:04 | XYZ    | 45.55 | 58.65
(10 rows)
The following query processes the first and last values that belong to each 2-second time slice in the ticker table's bid1 and bid2 columns. The query then calculates the exponential moving average of expressions fv and lv with a smoothing factor of 50%:
=> SELECT symbol, slice_time, fv, lv,
          EXPONENTIAL_MOVING_AVERAGE(fv, 0.5)
            OVER (PARTITION BY symbol ORDER BY slice_time) AS ema_first,
          EXPONENTIAL_MOVING_AVERAGE(lv, 0.5)
            OVER (PARTITION BY symbol ORDER BY slice_time) AS ema_last
   FROM (
     SELECT symbol, slice_time,
            TS_FIRST_VALUE(bid1 IGNORE NULLS) as fv,
            TS_LAST_VALUE(bid2 IGNORE NULLS) AS lv
     FROM ticker
     TIMESERIES slice_time AS '2 seconds' OVER (PARTITION BY symbol ORDER BY time)
   ) AS sq;
 symbol |     slice_time      |  fv   |  lv   | ema_first | ema_last
--------+---------------------+-------+-------+-----------+----------
 ABC    | 2009-07-12 03:00:00 | 60.45 | 65.12 | 60.45     | 65.12
 ABC    | 2009-07-12 03:00:02 | 57.78 | 65.12 | 59.115    | 65.12
 ABC    | 2009-07-12 03:00:04 | 67.88 | 65.12 | 63.4975   | 65.12
 XYZ    | 2009-07-12 03:00:00 | 47.55 | 46.78 | 47.55     | 46.78
 XYZ    | 2009-07-12 03:00:02 | 71.56 | 70.21 | 59.555    | 58.495
 XYZ    | 2009-07-12 03:00:04 | 45.55 | 58.65 | 52.5525   | 58.5725
(6 rows)
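As a check on the smoothing formula given above, the ema_first value reported for symbol ABC at slice time 03:00:02 can be reproduced by hand from the query output, using EMA0 = 60.45 (the previous ABC value), X = 0.5, and E = 57.78:

EMA = 60.45 + (0.5 * (57.78 - 60.45)) = 59.115

The next ABC slice then uses 59.115 as its EMA0: 59.115 + (0.5 * (67.88 - 59.115)) = 63.4975, which matches the ema_first column for 03:00:04.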
If no window is specified for the current row, the default window is UNBOUNDED PRECEDING AND CURRENT ROW. Behavior Type Immutable Syntax FIRST_VALUE ( expression [ IGNORE NULLS ] ) OVER ( ... [ window_partition_clause ] ... [ window_order_clause ] ... [ window_frame_clause ] ) Parameters expression Expression to evaluate; for example, a constant, column, nonanalytic function, function expression, or expressions involving any of these. IGNORE NULLS Specifies to return the first non-null value in the set, or NULL if all values are NULL. OVER(...) See Analytic Functions. Notes l The FIRST_VALUE() function lets you select a table's first value (determined by the window_ order_clause) without having to use a self join. This function is useful when you want to use the first value as a baseline in calculations. l HP recommends that you use FIRST_VALUE with the window_order_clause to produce deterministic results. l If the first value in the set is null, then the function returns NULL unless you specify IGNORE NULLS. If you specify IGNORE NULLS, FIRST_VALUE returns the first non-null value in the set, or NULL if all values are null. HP Vertica Analytic Database (7.0.x) Page 224 of 1539 SQL Reference Manual SQL Functions Examples The following query, which asks for the first value in the partitioned day of week, illustrates the potential nondeterministic nature of the FIRST_VALUE function: => SELECT calendar_year, date_key, day_of_week, full_date_description, FIRST_VALUE(full_date_description) OVER(PARTITION BY calendar_month_number_in_year ORDER BY day_of_week) AS "first_value" FROM date_dimension WHERE calendar_year=2003 AND calendar_month_number_in_year=1; The first value returned is January 31, 2003; however, the next time the same query is run, the first value could be January 24 or January 3, or the 10th or 17th. The reason is because the analytic ORDER BY column (day_of_week) returns rows that contain ties (multiple Fridays). These repeated values make the ORDER BY evaluation result nondeterministic, because rows that contain ties can be ordered in any way, and any one of those rows qualifies as being the first value of day_of_week. 
calendar_year | date_key | day_of_week | full_date_description |first_value --------------+----------+-------------+-----------------------+----------2003 | 31 | Friday | January 31, 2003 | January 31, 2003 | 24 | Friday | January 24, 2003 | January 31, 2003 | 3 | Friday | January 3, 2003 | January 31, 2003 | 10 | Friday | January 10, 2003 | January 31, 2003 | 17 | Friday | January 17, 2003 | January 31, 2003 | 6 | Monday | January 6, 2003 | January 31, 2003 | 27 | Monday | January 27, 2003 | January 31, 2003 | 13 | Monday | January 13, 2003 | January 31, 2003 | 20 | Monday | January 20, 2003 | January 31, 2003 | 11 | Saturday | January 11, 2003 | January 31, 2003 | 18 | Saturday | January 18, 2003 | January 31, 2003 | 25 | Saturday | January 25, 2003 | January 31, 2003 | 4 | Saturday | January 4, 2003 | January 31, 2003 | 12 | Sunday | January 12, 2003 | January 31, 2003 | 26 | Sunday | January 26, 2003 | January 31, 2003 | 5 | Sunday | January 5, 2003 | January 31, 2003 | 19 | Sunday | January 19, 2003 | January 31, 2003 | 23 | Thursday | January 23, 2003 | January 31, 2003 | 2 | Thursday | January 2, 2003 | January 31, 2003 | 9 | Thursday | January 9, 2003 | January 31, 2003 | 16 | Thursday | January 16, 2003 | January 31, 2003 | 30 | Thursday | January 30, 2003 | January 31, 2003 | 21 | Tuesday | January 21, 2003 | January 31, 2003 | 14 | Tuesday | January 14, 2003 | January 31, 2003 | 7 | Tuesday | January 7, 2003 | January 31, 2003 | 28 | Tuesday | January 28, 2003 | January 31, 2003 | 22 | Wednesday | January 22, 2003 | January 31, 2003 | 29 | Wednesday | January 29, 2003 | January 31, 2003 | 15 | Wednesday | January 15, 2003 | January 31, 2003 | 1 | Wednesday | January 1, 2003 | January 31, 2003 | 8 | Wednesday | January 8, 2003 | January 31, (31 rows) HP Vertica Analytic Database (7.0.x) 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 2003 Page 225 of 1539 SQL Reference Manual SQL Functions Note: The day_of_week results are returned in alphabetical order because of lexical rules. The fact that each day does not appear ordered by the 7-day week cycle (for example, starting with Sunday followed by Monday, Tuesday, and so on) has no affect on results. To return deterministic results, modify the query so that it performs its analytic ORDER BY operations on a unique field, such as date_key: => SELECT calendar_year, date_key, day_of_week, full_date_description, FIRST_VALUE(full_date_description) OVER (PARTITION BY calendar_month_number_in_year ORDER BY date_key) AS "first_value" FROM date_dimension WHERE calendar_year=2003; Notice that the results return a first value of January 1 for the January partition and the first value of February 1 for the February partition. 
Also, there are no ties in the full_date_description column: calendar_year | date_key | day_of_week | full_date_description | first_value ---------------+----------+-------------+-----------------------+-----------2003 | 1 | Wednesday | January 1, 2003 | January 1, 2003 2003 | 2 | Thursday | January 2, 2003 | January 1, 2003 2003 | 3 | Friday | January 3, 2003 | January 1, 2003 2003 | 4 | Saturday | January 4, 2003 | January 1, 2003 2003 | 5 | Sunday | January 5, 2003 | January 1, 2003 2003 | 6 | Monday | January 6, 2003 | January 1, 2003 2003 | 7 | Tuesday | January 7, 2003 | January 1, 2003 2003 | 8 | Wednesday | January 8, 2003 | January 1, 2003 2003 | 9 | Thursday | January 9, 2003 | January 1, 2003 2003 | 10 | Friday | January 10, 2003 | January 1, 2003 2003 | 11 | Saturday | January 11, 2003 | January 1, 2003 2003 | 12 | Sunday | January 12, 2003 | January 1, 2003 2003 | 13 | Monday | January 13, 2003 | January 1, 2003 2003 | 14 | Tuesday | January 14, 2003 | January 1, 2003 2003 | 15 | Wednesday | January 15, 2003 | January 1, 2003 2003 | 16 | Thursday | January 16, 2003 | January 1, 2003 2003 | 17 | Friday | January 17, 2003 | January 1, 2003 2003 | 18 | Saturday | January 18, 2003 | January 1, 2003 2003 | 19 | Sunday | January 19, 2003 | January 1, 2003 2003 | 20 | Monday | January 20, 2003 | January 1, 2003 2003 | 21 | Tuesday | January 21, 2003 | January 1, 2003 2003 | 22 | Wednesday | January 22, 2003 | January 1, 2003 2003 | 23 | Thursday | January 23, 2003 | January 1, 2003 2003 | 24 | Friday | January 24, 2003 | January 1, 2003 2003 | 25 | Saturday | January 25, 2003 | January 1, 2003 2003 | 26 | Sunday | January 26, 2003 | January 1, 2003 2003 | 27 | Monday | January 27, 2003 | January 1, 2003 2003 | 28 | Tuesday | January 28, 2003 | January 1, 2003 2003 | 29 | Wednesday | January 29, 2003 | January 1, 2003 2003 | 30 | Thursday | January 30, 2003 | January 1, 2003 2003 | 31 | Friday | January 31, 2003 | January 1, 2003 2003 | 32 | Saturday | February 1, 2003 | February 1, 2003 2003 | 33 | Sunday | February 2, 2003 | February 1,2003 ... (365 rows) HP Vertica Analytic Database (7.0.x) Page 226 of 1539 SQL Reference Manual SQL Functions See Also l LAST_VALUE [Analytic] l TIME_SLICE l Using SQL Analytics LAG [Analytic] Returns the value of the input expression at the given offset before the current row within a window. Behavior Type Immutable Syntax LAG ( expression [, offset ] [, default ] ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters expression Is the expression to evaluate; for example, a constant, column, non-analytic function, function expression, or expressions involving any of these. offset [Optional] Indicates how great is the lag. The default value is 1 (the previous row). The offset parameter must be (or can be evaluated to) a constant positive integer. default NULL. This optional parameter is the value returned if offset falls outside the bounds of the table or partition. Note: The default input argument must be a constant value or an expression that can be evaluated to a constant; its data type is coercible to that of the first argument. OVER(...) See Analytic Functions. HP Vertica Analytic Database (7.0.x) Page 227 of 1539 SQL Reference Manual SQL Functions Notes l The analytic window_order_clause is required but the window_partition_clause is optional. l The LAG() function returns values from the row before the current row, letting you access more than one row in a table at the same time. 
This is useful for comparing values when the relative positions of rows can be reliably known. It also lets you avoid the more costly self join, which enhances query processing speed. l See LEAD() for how to get the next rows. l Analytic functions, such as LAG(), cannot be nested within aggregate functions. Examples This example sums the current balance by date in a table and also sums the previous balance from the last day. Given the inputs that follow, the data satisfies the following conditions: l For each some_id, there is exactly 1 row for each date represented by month_date. l For each some_id, the set of dates is consecutive; that is, if there is a row for February 24 and a row for February 26, there would also be a row for February 25. l Each some_id has the same set of dates. => CREATE TABLE balances ( month_date DATE, current_bal INT, some_id INT); => => => => => => => => => INSERT INSERT INSERT INSERT INSERT INSERT INSERT INSERT INSERT INTO INTO INTO INTO INTO INTO INTO INTO INTO balances balances balances balances balances balances balances balances balances values values values values values values values values values ('2009-02-24', ('2009-02-25', ('2009-02-26', ('2009-02-24', ('2009-02-25', ('2009-02-26', ('2009-02-24', ('2009-02-25', ('2009-02-26', 10, 10, 10, 20, 20, 20, 30, 20, 30, 1); 1); 1); 2); 2); 2); 3); 3); 3); Now run the LAG() function to sum the current balance for each date and sum the previous balance from the last day: => SELECT month_date, SUM(current_bal) as current_bal_sum, SUM(previous_bal) as previous_bal_sum FROM (SELECT month_date, current_bal, LAG(current_bal, 1, 0) OVER (PARTITION BY some_id ORDER BY month_date) AS previous_bal FROM balances) AS subQ HP Vertica Analytic Database (7.0.x) Page 228 of 1539 SQL Reference Manual SQL Functions GROUP BY month_date ORDER BY month_date; month_date | current_bal_sum | previous_bal_sum ------------+-----------------+-----------------2009-02-24 | 60 | 0 2009-02-25 | 50 | 60 2009-02-26 | 60 | 50 (3 rows) Using the same example data, the following query would not be allowed because LAG() is nested inside an aggregate function: => SELECT month_date, SUM(current_bal) as current_bal_sum, SUM(LAG(current_bal, 1, 0) OVER (PARTITION BY some_id ORDER BY month_date)) AS previous_bal_sum FROM some_table GROUP BY month_date ORDER BY month_date; In the next example, which uses the VMart example database (see Introducing the VMart Example Database), the LAG() function first returns the annual income from the previous row, and then it calculates the difference between the income in the current row from the income in the previous row. Note: The vmart example database returns over 50,000 rows, so we'll limit the results to 20 records: => SELECT occupation, customer_key, customer_name, annual_income, LAG(annual_income, 1, 0) OVER (PARTITION BY occupation ORDER BY annual_income) AS prev_income, annual_income LAG(annual_income, 1, 0) OVER (PARTITION BY occupation ORDER BY annual_income) AS difference FROM customer_dimension ORDER BY occupation, customer_key LIMIT 20; occupation | customer_key | customer_name | annual_income | prev_income | differe nce ------------+--------------+----------------------+---------------+-------------+----------Accountant | 15 | Midori V. Peterson | 692610 | 692535 | 75 Accountant | 43 | Midori S. Rodriguez | 282359 | 280976 | 1 383 Accountant | 93 | Robert P. Campbell | 471722 | 471355 | 367 Accountant | 102 | Sam T. McNulty | 901636 | 901561 | 75 Accountant | 134 | Martha B. 
Overstreet | 705146 | 704335 | 811 Accountant | 165 | James C. Kramer | 376841 | 376474 | 367 Accountant | 225 | Ben W. Farmer | 70574 | 70449 | 125 Accountant | 270 | Jessica S. Lang | 684204 | 682274 | 1 930 Accountant | 273 | Mark X. Lampert | 723294 | 722737 | 557 Accountant | 295 | Sharon K. Gauthier | 29033 | 28412 | 621 Accountant | 338 | Anna S. Jackson | 816858 | 815557 | HP Vertica Analytic Database (7.0.x) Page 229 of 1539 SQL Reference Manual SQL Functions 1301 Accountant 277 Accountant 914 Accountant 226 Accountant 244 Accountant 620 Accountant 707 Accountant 383 Accountant 336 Accountant 662 (20 rows) | 377 | William I. Jones | 915149 | 914872 | | 438 | Joanna A. McCabe | 147396 | 144482 | 2 | 452 | Kim P. Brown | 126023 | 124797 | 1 | 467 | Meghan K. Carcetti | 810528 | 810284 | | 478 | Tanya E. Greenwood | 639649 | 639029 | | 511 | Midori P. Vogel | 187246 | 185539 | | 525 | Alexander K. Moore | 677433 | 677050 | | 550 | Sam P. Reyes | 735691 | 735355 | | 577 | Robert U. Vu | 616101 | 615439 | 1 Continuing with the Vmart database, the next example uses both LEAD() and LAG() to return the third row after the salary in the current row and fifth salary before the salary in the current row. => SELECT hire_date, employee_key, employee_last_name, LEAD(hire_date, 1) OVER (ORDER BY hire_date) AS "next_hired" , LAG(hire_date, 1) OVER (ORDER BY hire_date) AS "last_hired" FROM employee_dimension ORDER BY hire_date, employee_key; hire_date | employee_key | employee_last_name | next_hired | last_hired ------------+--------------+--------------------+------------+-----------1956-04-11 | 2694 | Farmer | 1956-05-12 | 1956-05-12 | 5486 | Winkler | 1956-09-18 | 1956-04-11 1956-09-18 | 5525 | McCabe | 1957-01-15 | 1956-05-12 1957-01-15 | 560 | Greenwood | 1957-02-06 | 1956-09-18 1957-02-06 | 9781 | Bauer | 1957-05-25 | 1957-01-15 1957-05-25 | 9506 | Webber | 1957-07-04 | 1957-02-06 1957-07-04 | 6723 | Kramer | 1957-07-07 | 1957-05-25 1957-07-07 | 5827 | Garnett | 1957-11-11 | 1957-07-04 1957-11-11 | 373 | Reyes | 1957-11-21 | 1957-07-07 1957-11-21 | 3874 | Martin | 1958-02-06 | 1957-11-11 (10 rows) The following example specifies arguments that use different data types; for example annual_ income(INT) and occupation(VARCHAR). The query returns an error: => SELECT customer_key, customer_name, occupation, annual_income, LAG (annual_income, 1, occupation) OVER (PARTITION BY occupation ORDER BY customer_key) LAG1 FROM customer_dimension ORDER BY 3, 1; ERROR: Third argument of lag could not be converted from type character varying to ty pe int8 HINT: You may need to add explicit type cast. HP Vertica Analytic Database (7.0.x) Page 230 of 1539 SQL Reference Manual SQL Functions See Also l LEAD [Analytic] l Using SQL Analytics LAST_VALUE [Analytic] Returns values of the expression from the last row of a window for the current row. If no window is specified for the current row, the default window is UNBOUNDED PRECEDING AND CURRENT ROW. Behavior Type Immutable Syntax LAST_VALUE ( expression [ IGNORE NULLS ] ) OVER ( ... [ window_partition_clause ] ... [ window_order_clause ] ... [ window_frame_clause ] ) Parameters expression Is the expression to evaluate; for example, a constant, column, nonanalytic function, function expression, or expressions involving any of these. IGNORE NULLS Returns the last non-null value in the set, or NULL if all values are NULL. OVER(...) See Analytic Functions. 
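One way to avoid this error, in line with the HINT, is to make the default's type coercible to the type of the first argument, for example by supplying a numeric constant as in the earlier balances example. The following sketch, not taken from the original manual, applies that change to the failing query:

=> SELECT customer_key, customer_name, occupation, annual_income,
          LAG(annual_income, 1, 0)
          OVER (PARTITION BY occupation ORDER BY customer_key) LAG1
   FROM customer_dimension
   ORDER BY 3, 1;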
Notes l The LAST_VALUE() function lets you select a window's last value (determined by the window_ order_clause), without having to use a self join. This function is useful when you want to use the last value as a baseline in calculations. l LAST_VALUE() takes the last record from the partition after the analytic window_order_clause. The expression is then computed against the last record, and results are returned. l HP recommends that you use LAST_VALUE with the window_order_clause to produce deterministic results. Tip: Due to default window semantics, LAST_VALUE does not always return the last value of a partition. If you omit the window_frame_clause from the analytic clause, LAST_VALUE HP Vertica Analytic Database (7.0.x) Page 231 of 1539 SQL Reference Manual SQL Functions operates on this default window. Although results can seem non-intuitive by not returning the bottom of the current partition, it returns the bottom of the window, which continues to change along with the current input row being processed. If you want to return the last value of a partition, use UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. See examples below. l If the last value in the set is null, then the function returns NULL unless you specify IGNORE NULLS. If you specify IGNORE NULLS, LAST_VALUE returns the fist non-null value in the set, or NULL if all values are null. Example Using the schema defined in Window Framing in the Programmer's Guide, the following query does not show the highest salary value by department; instead it shows the highest salary value by department by salary. => SELECT deptno, sal, empno, LAST_VALUE(sal) OVER (PARTITION BY deptno ORDER BY sal) AS lv FROM emp; deptno | sal | empno | lv --------+-----+-------+-------10 | 101 | 1 | 101 10 | 104 | 4 | 104 20 | 100 | 11 | 100 20 | 109 | 7 | 109 20 | 109 | 6 | 109 20 | 109 | 8 | 109 20 | 110 | 10 | 110 20 | 110 | 9 | 110 30 | 102 | 2 | 102 30 | 103 | 3 | 103 30 | 105 | 5 | 105 If you include the window_frame clause ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING, the LAST_VALUE() function will return the highest salary by department, an accurate representation of the information. => SELECT deptno, sal, empno, LAST_VALUE(sal) OVER (PARTITION BY deptno ORDER BY sal ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS lv FROM emp; deptno | sal | empno | lv --------+-----+-------+-------10 | 101 | 1 | 104 10 | 104 | 4 | 104 20 | 100 | 11 | 110 20 | 109 | 7 | 110 20 | 109 | 6 | 110 20 | 109 | 8 | 110 20 | 110 | 10 | 110 HP Vertica Analytic Database (7.0.x) Page 232 of 1539 SQL Reference Manual SQL Functions 20 30 30 30 | | | | 110 102 103 105 | | | | 9 2 3 5 | | | | 110 105 105 105 For additional examples, see FIRST_VALUE(). See Also FIRST_VALUE [Analytic] l TIME_SLICE l l LEAD [Analytic] Returns the value of the input expression at the given offset after the current row within a window. Behavior Type Immutable Syntax LEAD ( expression [, offset ] [, default ] ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters expression Is the expression to evaluate; for example, a constant, column, nonanalytic function, function expression, or expressions involving any of these. offset Is an optional parameter that defaults to 1 (the next row). The offset parameter must be (or can be evaluated to) a constant positive integer. default Is NULL. This optional parameter is the value returned if offset falls outside the bounds of the table or partition. 
Note: The third input argument must be a constant value or an expression that can be evaluated to a constant; its data type is coercible to that of the first argument. OVER(...) See Analytic Functions. HP Vertica Analytic Database (7.0.x) Page 233 of 1539 SQL Reference Manual SQL Functions Notes l The analytic window_order_clause is required but the window_partition_clause is optional. l The LEAD() function returns values from the row after the current row, letting you access more than one row in a table at the same time. This is useful for comparing values when the relative positions of rows can be reliably known. It also lets you avoid the more costly self join, which enhances query processing speed. l Analytic functions, such as LEAD(), cannot be nested within aggregate functions. Examples In this example, the LEAD() function finds the hire date of the employee hired just after the current row: => SELECT employee_region, hire_date, employee_key, employee_last_name, LEAD(hire_date, 1) OVER (PARTITION BY employee_region ORDER BY hire_date) AS "next_hir ed" FROM employee_dimension ORDER BY employee_region, hire_date, employee_key; employee_region | hire_date | employee_key | employee_last_name | next_hired -------------------+------------+--------------+--------------------+-----------East | 1956-04-08 | 9218 | Harris | 1957-02-06 East | 1957-02-06 | 7799 | Stein | 1957-05-25 East | 1957-05-25 | 3687 | Farmer | 1957-06-26 East | 1957-06-26 | 9474 | Bauer | 1957-08-18 East | 1957-08-18 | 570 | Jefferson | 1957-08-24 East | 1957-08-24 | 4363 | Wilson | 1958-02-17 East | 1958-02-17 | 6457 | McCabe | 1958-06-26 East | 1958-06-26 | 6196 | Li | 1958-07-16 East | 1958-07-16 | 7749 | Harris | 1958-09-18 East | 1958-09-18 | 9678 | Sanchez | 1958-11-10 (10 rows) The next example uses both LEAD() and LAG() to return the third row after the salary in the current row and fifth salary before the salary in the current row. => SELECT hire_date, employee_key, employee_last_name, LEAD(hire_date, 1) OVER (ORDER BY hire_date) AS "next_hired" , LAG(hire_date, 1) OVER (ORDER BY hire_date) AS "last_hired" FROM employee_dimension ORDER BY hire_date, employee_key; hire_date | employee_key | employee_last_name | next_hired | last_hired ------------+--------------+--------------------+------------+-----------1956-04-11 | 2694 | Farmer | 1956-05-12 | 1956-05-12 | 5486 | Winkler | 1956-09-18 | 1956-04-11 1956-09-18 | 5525 | McCabe | 1957-01-15 | 1956-05-12 1957-01-15 | 560 | Greenwood | 1957-02-06 | 1956-09-18 1957-02-06 | 9781 | Bauer | 1957-05-25 | 1957-01-15 1957-05-25 | 9506 | Webber | 1957-07-04 | 1957-02-06 1957-07-04 | 6723 | Kramer | 1957-07-07 | 1957-05-25 1957-07-07 | 5827 | Garnett | 1957-11-11 | 1957-07-04 1957-11-11 | 373 | Reyes | 1957-11-21 | 1957-07-07 HP Vertica Analytic Database (7.0.x) Page 234 of 1539 SQL Reference Manual SQL Functions 1957-11-21 | (10 rows) 3874 | Martin | 1958-02-06 | 1957-11-11 The following example returns employee name and salary, along with the next highest and lowest salaries. 
=> SELECT employee_last_name, annual_salary, NVL(LEAD(annual_salary) OVER (ORDER BY annual_salary), MIN(annual_salary) OVER()) "Next Highest", NVL(LAG(annual_salary) OVER (ORDER BY annual_salary), MAX(annual_salary) OVER()) "Next Lowest" FROM employee_dimension; employee_last_name | annual_salary | Next Highest | Next Lowest --------------------+---------------+--------------+------------Nielson | 1200 | 1200 | 995533 Lewis | 1200 | 1200 | 1200 Harris | 1200 | 1202 | 1200 Robinson | 1202 | 1202 | 1200 Garnett | 1202 | 1202 | 1202 Weaver | 1202 | 1202 | 1202 Nielson | 1202 | 1202 | 1202 McNulty | 1202 | 1204 | 1202 Farmer | 1204 | 1204 | 1202 Martin | 1204 | 1204 | 1204 (10 rows) The next example returns, for each assistant director in the employees table, the hire date of the director hired just after the director on the current row. For example, Jackson was hired on 2007-1228, and the next director hired was Bauer: => SELECT employee_last_name, hire_date, LEAD(hire_date, 1) OVER (ORDER BY hire_date DESC) as "NextHired" FROM employee_dimension WHERE job_title = 'Assistant Director'; employee_last_name | hire_date | NextHired --------------------+------------+-----------Jackson | 2007-12-28 | 2007-12-26 Bauer | 2007-12-26 | 2007-12-11 Miller | 2007-12-11 | 2007-12-07 Fortin | 2007-12-07 | 2007-11-27 Harris | 2007-11-27 | 2007-11-15 Goldberg | 2007-11-15 | (5 rows) See Also l LAG [Analytic] l Using SQL Analytics HP Vertica Analytic Database (7.0.x) Page 235 of 1539 SQL Reference Manual SQL Functions MAX [Analytic] Returns the maximum value of an expression within a window. The return value is the same as the expression data type. Behavior Type Immutable Syntax MAX ... ... ... ( [ [ [ [ DISTINCT ] expression ) OVER ( window_partition_clause ] window_order_clause ] window_frame_clause ] ) Parameters DISTINCT This parameter has no meaning in this context. expression Any expression for which the maximum value is calculated, typically a column reference . OVER(...) See Analytic Functions. Example The following query computes the deviation between the employees' annual salary and the maximum annual salary in Massachusetts: => SELECT employee_state, annual_salary, MAX(annual_salary) OVER(PARTITION BY employee_state ORDER BY employee_key) max, annual_salary- MAX(annual_salary) OVER(PARTITION BY employee_state ORDER BY employee_key) diff FROM employee_dimension WHERE employee_state = 'MA'; employee_state | annual_salary | max | diff ----------------+---------------+--------+--------MA | 1918 | 995533 | -993615 MA | 2058 | 995533 | -993475 MA | 2586 | 995533 | -992947 MA | 2500 | 995533 | -993033 MA | 1318 | 995533 | -994215 MA | 2072 | 995533 | -993461 MA | 2656 | 995533 | -992877 MA | 2148 | 995533 | -993385 MA | 2366 | 995533 | -993167 MA | 2664 | 995533 | -992869 HP Vertica Analytic Database (7.0.x) Page 236 of 1539 SQL Reference Manual SQL Functions (10 rows) See Also MAX [Aggregate] l MIN [Analytic] l l MEDIAN [Analytic] A numerical value of an expression in a result set within a window, which separates the higher half of a sample from the lower half. For example, a query can retrieve the median of a finite list of numbers by arranging all observations from lowest value to highest value and then picking the middle one. 
If there is an even number of observations, then there is no single middle value; thus, the median is defined to be the mean (average) of the two middle values MEDIAN() is an alias for 50% PERCENTILE(); for example: PERCENTILE_CONT(0.5) WITHIN GROUP(ORDER BY expression) Behavior Type Immutable Syntax MEDIAN ( expression ) OVER ( [ window_partition_clause ] ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the middle value or an interpolated value that would be the middle value once the values are sorted. Null values are ignored in the calculation. OVER(...) See Analytic Functions. HP Vertica Analytic Database (7.0.x) Page 237 of 1539 SQL Reference Manual SQL Functions Notes l For each row, MEDIAN() returns the value that would fall in the middle of a value set within each partition. l HP Vertica determines the argument with the highest numeric precedence, implicitly converts the remaining arguments to that data type, and returns that data type. l MEDIAN() does not allow the window_order_clause or window_frame_clause. Examples The following query computes the median annual income for first 500 customers in Wisconsin and in the District of Columbia. The median is reported for every row in each partitioned result set: => SELECT customer_state, annual_income, MEDIAN(annual_income) OVER (PARTITION BY customer_state) AS MEDIAN FROM customer_dimension WHERE customer_state IN ('DC','WI') ORDER BY customer_state; customer_state | customer_key | annual_income | MEDIAN ----------------+--------------+---------------+---------DC | 120 | 299768 | 535413 DC | 113 | 535413 | 535413 DC | 130 | 848360 | 535413 ---------------------------------------------------------WI | 372 | 34962 | 668147 WI | 437 | 47128 | 668147 WI | 435 | 67770 | 668147 WI | 282 | 638054 | 668147 WI | 314 | 668147 | 668147 WI | 128 | 675608 | 668147 WI | 179 | 825304 | 668147 WI | 302 | 827618 | 668147 WI | 29 | 922760 | 668147 (12 rows) See Also l PERCENTILE_CONT [Analytic] l Using SQL Analytics MIN [Analytic] Returns the minimum value of an expression within a window. The return value is the same as the expression data type. HP Vertica Analytic Database (7.0.x) Page 238 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax MIN ... ... ... ( [ [ [ [ DISTINCT ] expression ) OVER ( window_partition_clause ] window_order_clause ] window_frame_clause ] ) Parameters DISTINCT This parameter has no meaning in this context. expression Any expression for which the minimum value is calculated, typically a column reference. OVER(...) See Analytic Functions. 
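As with the other analytic aggregates, when you supply a window_order_clause but omit the window_frame_clause, MIN operates over the default window described in the LAST_VALUE tip (UNBOUNDED PRECEDING through the current row), so it behaves as a running minimum. The following query is only a sketch (it reuses the employee_dimension table from the surrounding examples) contrasting the running minimum with the minimum over the whole partition:

=> SELECT employee_state, hire_date, annual_salary,
       MIN(annual_salary) OVER (PARTITION BY employee_state
           ORDER BY hire_date) AS min_so_far,          -- running minimum in hire-date order
       MIN(annual_salary) OVER (PARTITION BY employee_state
           ORDER BY hire_date
           ROWS BETWEEN UNBOUNDED PRECEDING
                    AND UNBOUNDED FOLLOWING) AS state_min  -- minimum over the whole partition
   FROM employee_dimension
   WHERE employee_state = 'MA';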
Examples The following query computes the deviation between the employees' annual salary and the minimum annual salary in Massachusetts: => SELECT employee_state, annual_salary, MIN(annual_salary) OVER(PARTITION BY employee_state ORDER BY employee_key) min, annual_salary- MIN(annual_salary) OVER(PARTITION BY employee_state ORDER BY employee_key) diff FROM employee_dimension WHERE employee_state = 'MA'; employee_state | annual_salary | min | diff ----------------+---------------+------+-----MA | 1918 | 1204 | 714 MA | 2058 | 1204 | 854 MA | 2586 | 1204 | 1382 MA | 2500 | 1204 | 1296 MA | 1318 | 1204 | 114 MA | 2072 | 1204 | 868 MA | 2656 | 1204 | 1452 MA | 2148 | 1204 | 944 MA | 2366 | 1204 | 1162 MA | 2664 | 1204 | 1460 (10 rows) HP Vertica Analytic Database (7.0.x) Page 239 of 1539 SQL Reference Manual SQL Functions See Also l MIN [Aggregate] l MAX [Analytic] l Using SQL Analytics NTILE [Analytic] Equally divides an ordered data set (partition) into a {value} number of subsets within a window, with buckets (subsets) numbered 1 through constant-value. For example, if constant-value = 4, then each row in the partition is assigned a number from 1 to 4. If the partition contains 20 rows, the first 5 would be assigned 1, the next 5 would be assigned 2, and so on. Behavior Type Immutable Syntax NTILE ( constant-value ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters constant-value Represents the number of subsets and must resolve to a positive constant for each partition. OVER(...) See Analytic Functions. Notes l The analytic window_order_clause is required but the window_partition_clause is optional. l If the number of subsets is greater than the number of rows, then a number of subsets equal to the number of rows is filled, and the remaining subsets are empty. l In the event the cardinality of the partition is not evenly divisible by the number of subsets, the rows are distributed so no subset has more than 1 row more then any other subset, and the lowest subsets are the ones that have extra rows. For example, using constant-value = 4 again and the number of rows = 21, subset = 1 has 6 rows, subset = 2 has 5, and so on. l Analytic functions, such as NTILE(), cannot be nested within aggregate functions. HP Vertica Analytic Database (7.0.x) Page 240 of 1539 SQL Reference Manual SQL Functions Examples The following query assigns each month's sales total into one of four subsets: => SELECT calendar_month_name AS MONTH, SUM(sales_quantity), NTILE(4) OVER (ORDER BY SUM(sales_quantity)) AS NTILE FROM store.store_sales_fact JOIN date_dimension USING(date_key) GROUP BY calendar_month_name ORDER BY NTILE; MONTH | SUM | NTILE -----------+------+------February | 755 | 1 June | 842 | 1 September | 849 | 1 January | 881 | 2 May | 882 | 2 July | 894 | 2 August | 921 | 3 April | 952 | 3 March | 987 | 3 October | 1010 | 4 November | 1026 | 4 December | 1094 | 4 (12 rows) See Also l PERCENTILE_CONT [Analytic] l WIDTH_BUCKET l Using SQL Analytics PERCENT_RANK [Analytic] Calculates the relative rank of a row for a given row in a group within a window by dividing that row’s rank less 1 by the number of rows in the partition, also less 1. This function always returns values from 0 to 1 inclusive. The first row in any set has a PERCENT_RANK() of 0. The return value is NUMBER. ( rank - 1 ) / ( [ rows ] - 1 ) In the preceding formula, rank is the rank position of a row in the group and rows is the total number of rows in the partition defined by the OVER() clause. 
Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 241 of 1539 SQL Reference Manual SQL Functions Syntax PERCENT_RANK ( ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters OVER(...) See Analytic Functions. Notes The window_order_clause is required but the window_partition_clause is optional. Examples The following example finds the percent rank of gross profit for different states within each month of the first quarter: => SELECT calendar_month_name AS MONTH, store_state, SUM(gross_profit_dollar_amount), PERCENT_RANK() OVER (PARTITION BY calendar_month_name ORDER BY SUM(gross_profit_dollar_amount)) AS PERCENT_RANK FROM store.store_sales_fact JOIN date_dimension USING(date_key) JOIN store.store_dimension USING (store_key) WHERE calendar_month_name IN ('January','February','March') AND store_state IN ('OR','IA','DC','NV','WI') GROUP BY calendar_month_name, store_state ORDER BY calendar_month_name, PERCENT_RANK; MONTH | store_state | SUM | PERCENT_RANK ----------+-------------+------+------------------February | OR | 16 | 0 February | IA | 47 | 0.25 February | DC | 94 | 0.5 February | NV | 113 | 0.75 February | WI | 119 | 1 January | IA | -263 | 0 January | OR | 91 | 0.333333333333333 January | NV | 372 | 0.666666666666667 January | DC | 497 | 1 March | NV | -141 | 0 March | OR | 224 | 1 (11 rows) The following example calculates, for each employee, the percent rank of the employee's salary by their job title: => SELECT job_title, employee_last_name, annual_salary, HP Vertica Analytic Database (7.0.x) Page 242 of 1539 SQL Reference Manual SQL Functions PERCENT_RANK() OVER (PARTITION BY job_title ORDER BY annual_salary DESC) AS percent_rank FROM employee_dimension ORDER BY percent_rank, annual_salary; job_title | employee_last_name | annual_salary | PERCENT_RANK --------------------+--------------------+---------------+--------------------CEO | Campbell | 963914 | 0 Co-Founder | Nguyen | 968625 | 0 Founder | Overstreet | 995533 | 0 Greeter | Peterson | 3192 | 0.00113895216400911 Greeter | Greenwood | 3192 | 0.00113895216400911 Customer Service | Peterson | 3190 | 0.00121065375302663 Delivery Person | Rodriguez | 3192 | 0.00121065375302663 Shelf Stocker | Martin | 3194 | 0.00125786163522013 Shelf Stocker | Vu | 3194 | 0.00125786163522013 Marketing | Li | 99711 | 0.00190114068441065 Assistant Director | Sanchez | 99913 | 0.00190839694656489 Branch Manager | Perkins | 99901 | 0.00192307692307692 Advertising | Lampert | 99809 | 0.00204918032786885 Sales | Miller | 99727 | 0.00211416490486258 Shift Manager | King | 99904 | 0.00215982721382289 Custodian | Bauer | 3196 | 0.00235849056603774 Custodian | Goldberg | 3196 | 0.00235849056603774 Customer Service | Fortin | 3184 | 0.00242130750605327 Delivery Person | Greenwood | 3186 | 0.00242130750605327 Cashier | Overstreet | 3178 | 0.00243605359317905 Regional Manager | McCabe | 199688 | 0.00306748466257669 VP of Sales | Li | 199309 | 0.00313479623824451 Director of HR | Goldberg | 199592 | 0.00316455696202532 Head of Marketing | Stein | 199941 | 0.00317460317460317 VP of Advertising | Goldberg | 199036 | 0.00323624595469256 Head of PR | Stein | 199767 | 0.00323624595469256 Customer Service | Rodriguez | 3180 | 0.0036319612590799 Delivery Person | King | 3184 | 0.0036319612590799 Cashier | Dobisz | 3174 | 0.00365408038976857 Cashier | Miller | 3174 | 0.00365408038976857 Marketing | Dobisz | 99655 | 0.00380228136882129 Branch Manager | Gauthier | 99082 | 0.025 Branch Manager | Moore | 98415 | 0.05 ... 
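The ( rank - 1 ) / ( rows - 1 ) formula shown above can be checked directly by rebuilding it from RANK and a window COUNT. This is a minimal sketch (reusing the employee_dimension table from the preceding example; the cast to FLOAT is only there to force non-integer division); the two right-hand columns should agree row for row:

=> SELECT employee_last_name, annual_salary,
       PERCENT_RANK() OVER (ORDER BY annual_salary DESC) AS percent_rank,
       (RANK() OVER (ORDER BY annual_salary DESC) - 1)::FLOAT
           / (COUNT(*) OVER () - 1) AS percent_rank_by_hand   -- ( rank - 1 ) / ( rows - 1 )
   FROM employee_dimension
   WHERE job_title = 'Assistant Director';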
See Also
l CUME_DIST [Analytic]
l Using SQL Analytics

PERCENTILE_CONT [Analytic]
An inverse distribution function where, for each row, PERCENTILE_CONT() returns the value that would fall into the specified percentile among a set of values in each partition within a window. For example, if the argument to the function is 0.5, the result of the function is the median of the data set (the 50th percentile). PERCENTILE_CONT() assumes a continuous distribution data model. Nulls are ignored.

Behavior Type
Immutable

Syntax
PERCENTILE_CONT ( %_number ) WITHIN GROUP (
... ORDER BY expression [ ASC | DESC ] ) OVER (
... [ window_partition_clause ] )

Parameters
%_number Percentile value, which must be a FLOAT constant ranging from 0 to 1 (inclusive).
WITHIN GROUP(ORDER BY expression) Specifies how the data is sorted within each group. ORDER BY takes only one column/expression, which must be of INTEGER, FLOAT, INTERVAL, or NUMERIC data type. Nulls are discarded.
Note: The WITHIN GROUP(ORDER BY) clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set.
ASC | DESC Specifies the ordering sequence as ascending (default) or descending.
OVER(...) See Analytic Functions.

Notes
l HP Vertica computes the percentile by first computing the row number where the percentile row would exist; for example:
ROW_NUMBER = 1 + PERCENTILE_VALUE * (NUMBER_OF_ROWS_IN_PARTITION - 1)
If CEILING(ROW_NUMBER) = FLOOR(ROW_NUMBER), then the percentile is the value at that ROW_NUMBER. Otherwise, ROW_NUMBER falls between two rows and HP Vertica must interpolate between their values. In this case, CEILING_VAL = the value at CEILING(ROW_NUMBER), FLOOR_VAL = the value at FLOOR(ROW_NUMBER), and the interpolated result is:
(CEILING(ROW_NUMBER) - ROW_NUMBER) * FLOOR_VAL + (ROW_NUMBER - FLOOR(ROW_NUMBER)) * CEILING_VAL
l Specifying ASC or DESC in the WITHIN GROUP clause affects results as long as the percentile parameter is not .5.
l The MEDIAN() function is a specific case of PERCENTILE_CONT() where the percentile value defaults to 0.5. For more information, see MEDIAN().

Examples
This query computes the median annual income per group for customers with customer_key values below 300 in Wisconsin and the District of Columbia.

=> SELECT customer_state, customer_key, annual_income,
   PERCENTILE_CONT(.5) WITHIN GROUP(ORDER BY annual_income)
   OVER (PARTITION BY customer_state) AS PERCENTILE_CONT
   FROM customer_dimension WHERE customer_state IN ('DC','WI')
   AND customer_key < 300
   ORDER BY customer_state, customer_key;
 customer_state | customer_key | annual_income | PERCENTILE_CONT
----------------+--------------+---------------+-----------------
 DC             |          104 |        658383 |          658383
 DC             |          168 |        417092 |          658383
 DC             |          245 |        670205 |          658383
 WI             |          106 |        227279 |          458607
 WI             |          127 |        703889 |          458607
 WI             |          209 |        458607 |          458607
(6 rows)

The median value for DC is 658383, and the median value for WI is 458607.
With a %_number of 0.5 in the above query, PERCENTILE_CONT() returns the same result as MEDIAN() in the following query:

=> SELECT customer_state, customer_key, annual_income,
   MEDIAN(annual_income) OVER (PARTITION BY customer_state) AS MEDIAN
   FROM customer_dimension WHERE customer_state IN ('DC','WI')
   AND customer_key < 300
   ORDER BY customer_state, customer_key;
 customer_state | customer_key | annual_income | MEDIAN
----------------+--------------+---------------+--------
 DC             |          104 |        658383 | 658383
 DC             |          168 |        417092 | 658383
 DC             |          245 |        670205 | 658383
 WI             |          106 |        227279 | 458607
 WI             |          127 |        703889 | 458607
 WI             |          209 |        458607 | 458607
(6 rows)

See Also
l MEDIAN [Analytic]
l Using SQL Analytics

PERCENTILE_DISC [Analytic]
An inverse distribution function where, for each row, PERCENTILE_DISC() returns the value that would fall into the specified percentile among a set of values in each partition within a window. PERCENTILE_DISC() assumes a discrete distribution data model. Nulls are ignored.

Behavior Type
Immutable

Syntax
PERCENTILE_DISC ( %_number ) WITHIN GROUP (
... ORDER BY expression [ ASC | DESC ] ) OVER (
... [ window_partition_clause ] )

Parameters
%_number Percentile value, which must be a FLOAT constant ranging from 0 to 1 (inclusive).
WITHIN GROUP(ORDER BY expression) Specifies how the data is sorted within each group. ORDER BY takes only one column/expression, which must be of INTEGER, FLOAT, INTERVAL, or NUMERIC data type. Nulls are discarded.
Note: The WITHIN GROUP(ORDER BY) clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set.
ASC | DESC Specifies the ordering sequence as ascending (default) or descending.
OVER(...) See Analytic Functions.

Notes
l PERCENTILE_DISC(%_number) examines the cumulative distribution values in each group until it finds one that is greater than or equal to %_number.
l HP Vertica computes the percentile where, for each row, PERCENTILE_DISC outputs the first value of the WITHIN GROUP(ORDER BY) column whose CUME_DIST (cumulative distribution) value is >= the argument FLOAT value (for example, 0.4). Specifically:
PERCENTILE_DISC(.4) WITHIN GROUP (ORDER BY salary) OVER(PARTITION BY deptno) ...
If you write, for example:
SELECT CUME_DIST() OVER(ORDER BY salary) FROM table;
then the salary value at the first row whose CUME_DIST is greater than or equal to 0.4 is also the PERCENTILE_DISC.

Example
This query computes the 20th percentile annual income by group for customers with customer_key values below 300 in Wisconsin and the District of Columbia.
=> SELECT customer_state, customer_key, annual_income, PERCENTILE_DISC(.2) WITHIN GROUP(ORDER BY annual_income) OVER (PARTITION BY customer_state) AS PERCENTILE_DISC FROM customer_dimension WHERE customer_state IN ('DC','WI') AND customer_key < 300 ORDER BY customer_state, customer_key; customer_state | customer_key | annual_income | PERCENTILE_DISC ----------------+--------------+---------------+----------------DC | 104 | 658383 | 417092 DC | 168 | 417092 | 417092 DC | 245 | 670205 | 417092 WI | 106 | 227279 | 227279 WI | 127 | 703889 | 227279 WI | 209 | 458607 | 227279 (6 rows) See Also l CUME_DIST [Analytic] l PERCENTILE_CONT [Analytic] l Using SQL Analytics HP Vertica Analytic Database (7.0.x) Page 247 of 1539 SQL Reference Manual SQL Functions RANK [Analytic] Assigns a rank to each row returned from a query with respect to the other ordered rows, based on the values of the expressions in the window ORDER BY clause. The data within a group is sorted by the ORDER BY clause and then a numeric ranking is assigned to each row in turn, starting with 1, and continuing up. Rows with the same values of the ORDER BY expressions receive the same rank; however, if two rows receive the same rank (a tie), RANK() skips the ties. If, for example, two rows are numbered 1, RANK() skips number 2 and assigns 3 to the next row in the group. This is in contrast to DENSE_RANK(), which does not skip values. Behavior Type Immutable Syntax RANK ( ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters OVER(...) See Analytic Functions. Notes l Ranking functions return a rank value for each row in a result set based on the order specified in the query. For example, a territory sales manager might want to identify the top or bottom ranking sales associates in a department or the highest/lowest-performing sales offices by region. l RANK() requires an OVER() clause. The window_partition_clause is optional. l In ranking functions, OVER() specifies the measures expression on which ranking is done and defines the order in which rows are sorted in each group (or partition). Once the data is sorted within each partition, ranks are given to each row starting from 1. l The primary difference between RANK and DENSE_RANK is that RANK leaves gaps when ranking records; DENSE_RANK leaves no gaps. For example, if more than one record occupies a particular position (a tie), RANK places all those records in that position and it places the next record after a gap of the additional records (it skips one). DENSE_RANK places all the records in that position only—it does not leave a gap for the next rank. If there is a tie at the third position with two records having the same value, RANK and DENSE_ RANK place both the records in the third position only, but RANK has the next record at the fifth HP Vertica Analytic Database (7.0.x) Page 248 of 1539 SQL Reference Manual SQL Functions position—leaving a gap of 1 position—while DENSE_RANK places the next record at the forth position (no gap). l If you omit NULLS FIRST | LAST | AUTO, the ordering of the null values depends on the ASC or DESC arguments. Null values are considered larger than any other values. If the ordering sequence is ASC, then nulls appear last; nulls appear first otherwise. Nulls are considered equal to other nulls and, therefore, the order in which nulls are presented is non-deterministic. Examples This example ranks the longest standing customers in Massachusetts. 
The query first computes the customer_since column by region, and then partitions the results by customers with businesses in MA. Then within each region, the query ranks customers over the age of 70. => SELECT customer_type, customer_name, RANK() OVER (PARTITION BY customer_region ORDER BY customer_since) as rank FROM customer_dimension WHERE customer_state = 'MA' AND customer_age > '70'; customer_type | customer_name | rank ---------------+---------------+-----Company | Virtadata | 1 Company | Evergen | 2 Company | Infocore | 3 Company | Goldtech | 4 Company | Veritech | 5 Company | Inishop | 6 Company | Intracom | 7 Company | Virtacom | 8 Company | Goldcom | 9 Company | Infostar | 10 Company | Golddata | 11 Company | Everdata | 12 Company | Goldcorp | 13 (13 rows) The following example shows the difference between RANK and DENSE_RANK when ranking customers by their annual income. RANK has a tie at 10 and skips 11, while DENSE_RANK leaves no gaps in the ranking sequence: => SELECT customer_name, SUM(annual_income), RANK () OVER (ORDER BY TO_CHAR(SUM(annual_income),'100000') DESC) rank, DENSE_RANK () OVER (ORDER BY TO_CHAR(SUM(annual_income),'100000') DESC) dense_rank FROM customer_dimension GROUP BY customer_name LIMIT 15; customer_name | sum | rank | dense_rank ---------------------+-------+------+-----------Brian M. Garnett | 99838 | 1 | 1 HP Vertica Analytic Database (7.0.x) Page 249 of 1539 SQL Reference Manual SQL Functions Tanya A. Brown Tiffany P. Farmer Jose V. Sanchez Marcus D. Rodriguez Alexander T. Nguyen Sarah G. Lewis Ruth Q. Vu Theodore T. Farmer Daniel P. Li Seth E. Brown Matt X. Gauthier Rebecca W. Lewis Dean L. Wilson Tiffany A. Smith (15 rows) | | | | | | | | | | | | | | 99834 99826 99673 99631 99604 99556 99542 99532 99497 99497 99402 99296 99276 99257 | | | | | | | | | | | | | | 2 3 4 5 6 7 8 9 10 10 12 13 14 15 | | | | | | | | | | | | | | 2 3 4 5 6 7 8 9 10 10 11 12 13 14 See Also l DENSE_RANK [Analytic] l Using SQL Analytics ROW_NUMBER [Analytic] Assigns a unique number, sequentially, starting from 1, to each row in a partition within a window. Behavior Type Immutable Syntax ROW_NUMBER ( ) OVER ( ... [ window_partition_clause ] ... window_order_clause ) Parameters OVER(...) See Analytic Functions. Notes l ROW_NUMBER() is an HP Vertica extension, not part of the SQL-99 standard. It requires an OVER () clause. The window_partition_clause is optional. l You can use the optional partition clause to group data into partitions before operating on it; for HP Vertica Analytic Database (7.0.x) Page 250 of 1539 SQL Reference Manual SQL Functions example: SUM OVER (PARTITION BY col1, col2, ...) l You can substitute any RANK() example for ROW_NUMBER(). The difference is that ROW_ NUMBERassigns a unique ordinal number, starting with 1, to each row in the ordered set. Examples The following query first partitions customers in the customer_dimension table by occupation and then ranks those customers based on the ordered set specified by the analytic partition_clause. 
=> SELECT occupation, customer_key, customer_since, annual_income, ROW_NUMBER() OVER (PARTITION BY occupation) AS customer_since_row_num FROM public.customer_dimension ORDER BY occupation, customer_since_row_num; occupation | customer_key | customer_since | annual_income | customer_since_row_ num --------------------+--------------+----------------+---------------+----------------------Accountant | 19453 | 1973-11-06 | 602460 | 1 Accountant | 42989 | 1967-07-09 | 850814 | 2 Accountant | 24587 | 1995-05-18 | 180295 | 3 Accountant | 26421 | 2001-10-08 | 126490 | 4 Accountant | 37783 | 1993-03-16 | 790282 | 5 Accountant | 39170 | 1980-12-21 | 823917 | 6 Banker | 13882 | 1998-04-10 | 15134 | 1 Banker | 14054 | 1989-03-16 | 961850 | 2 Banker | 15850 | 1996-01-19 | 262267 | 3 Banker | 29611 | 2004-07-14 | 739016 | 4 Doctor | 261 | 1969-05-11 | 933692 | 1 Doctor | 1264 | 1981-07-19 | 593656 | 2 Psychologist | 5189 | 1999-05-04 | 397431 | 1 Psychologist | 5729 | 1965-03-26 | 339319 | 2 Software Developer | 2513 | 1996-09-22 | 920003 | 1 Software Developer | 5927 | 2001-03-12 | 633294 | 2 HP Vertica Analytic Database (7.0.x) Page 251 of 1539 SQL Reference Manual SQL Functions Software Developer 3 Software Developer 4 Software Developer 5 Software Developer 6 Software Developer 7 Software Developer 8 Software Developer 9 Software Developer 10 Software Developer 11 Stock Broker 1 Stock Broker 2 Stock Broker 3 Stock Broker 4 Stock Broker 5 Writer 1 Writer 2 Writer 3 Writer 4 Writer 5 Writer 6 (39 rows) | 9125 | 1971-10-06 | 198953 | | 16097 | 1968-09-02 | 748371 | | 23137 | 1988-12-07 | 92578 | | 24495 | 1989-04-16 | 149371 | | 24548 | 1994-09-21 | 743788 | | 33744 | 2005-12-07 | 735003 | | 9684 | 1970-05-20 | 246000 | | 24278 | 2001-11-14 | 122882 | | 27122 | 1994-02-05 | 810044 | | 5950 | 1965-01-20 | 752120 | | 12517 | 2003-06-13 | 380102 | | 33010 | 1984-05-07 | 384463 | | 46196 | 1972-11-28 | 497049 | | 8710 | 2005-02-11 | 79387 | | 3149 | 1998-11-17 | 643972 | | 17124 | 1965-01-18 | 444747 | | 20100 | 1994-08-13 | 106097 | | 23317 | 2003-05-27 | 511750 | | 42845 | 1967-10-23 | 433483 | | 47560 | 1997-04-23 | 515647 | See Also RANK [Analytic] l l STDDEV [Analytic] Note: The non-standard function STDDEV() is provided for compatibility with other databases. It is semantically identical to STDDEV_SAMP(). HP Vertica Analytic Database (7.0.x) Page 252 of 1539 SQL Reference Manual SQL Functions Computes the statistical sample standard deviation of the current row with respect to the group within a window. The STDDEV_SAMP() return value is the same as the square root of the variance defined for the VAR_SAMP() function: STDDEV(expression) = SQRT(VAR_SAMP(expression)) When VAR_SAMP() returns null, this function returns null. Behavior Type Immutable Syntax STDDEV ( expression ) OVER ( ... [ window_partition_clause ] ... [ window_order_clause ] ... [ window_frame_clause ] ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. OVER(...) See Analytic Functions. 
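The square-root relationship above can be checked in a single query. The following is only a sketch (it assumes the employee_dimension table used in the example below); the two right-hand columns should match wherever the variance is defined:

=> SELECT employee_last_name, annual_salary,
       STDDEV(annual_salary) OVER (ORDER BY hire_date) AS stddev,
       SQRT(VAR_SAMP(annual_salary) OVER (ORDER BY hire_date)) AS sqrt_var_samp  -- should equal stddev
   FROM employee_dimension
   WHERE job_title = 'Assistant Director';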
Example The following example returns the standard deviations of salaries in the employee dimension table by job title Assistant Director: => SELECT employee_last_name, annual_salary, STDDEV(annual_salary) OVER (ORDER BY hire_date) as "stddev" FROM employee_dimension WHERE job_title = 'Assistant Director'; employee_last_name | annual_salary | stddev --------------------+---------------+-----------------Goldberg | 61859 | NaN Miller | 79582 | 12532.0534829692 Goldberg | 74236 | 9090.97147357388 Campbell | 66426 | 7909.9541665339 Moore | 66630 | 7068.30282316761 Nguyen | 53530 | 9154.14713486005 Harris | 74115 | 8773.54346886142 Lang | 59981 | 8609.60471031374 Farmer | 60597 | 8335.41158418579 Nguyen | 78941 | 8812.87941405456 Smith | 55018 | 9179.7672390773 HP Vertica Analytic Database (7.0.x) Page 253 of 1539 SQL Reference Manual SQL Functions ... See Also l STDDEV [Aggregate] l STDDEV_SAMP [Aggregate] STDDEV_SAMP [Analytic] l l STDDEV_POP [Analytic] Computes the statistical population standard deviation and returns the square root of the population variance within a window. The STDDEV_POP() return value is the same as the square root of the VAR_POP() function: STDDEV_POP(expression) = SQRT(VAR_POP(expression)) When VAR_POP returns null, STDDEV_POP returns null. Behavior Type Immutable Syntax STDDEV_POP ( expression ) OVER ( ... [ window_partition_clause ] ... [ window_order_clause ] ... [ window_frame_clause ] ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. OVER(...) See Analytic Functions. Examples The following example returns the population standard deviations of salaries in the employee dimension table by job title Assistant Director: HP Vertica Analytic Database (7.0.x) Page 254 of 1539 SQL Reference Manual SQL Functions => SELECT employee_last_name, annual_salary, STDDEV_POP(annual_salary) OVER (ORDER BY hire_date) as "stddev_pop" FROM employee_dimension WHERE job_title = 'Assistant Director'; employee_last_name | annual_salary | stddev_pop --------------------+---------------+-----------------Goldberg | 61859 | 0 Miller | 79582 | 8861.5 Goldberg | 74236 | 7422.74712548456 Campbell | 66426 | 6850.22125098891 Moore | 66630 | 6322.08223926257 Nguyen | 53530 | 8356.55480080699 Harris | 74115 | 8122.72288970008 Lang | 59981 | 8053.54776538731 Farmer | 60597 | 7858.70140687825 Nguyen | 78941 | 8360.63150784682 See Also STDDEV_POP [Aggregate] l l STDDEV_SAMP [Analytic] Computes the statistical sample standard deviation of the current row with respect to the group within a window. The STDDEV_SAMP() return value is the same as the square root of the variance defined for the VAR_SAMP() function: STDDEV(expression) = SQRT(VAR_SAMP(expression)) When VAR_SAMP() returns null, STDDEV_SAMP returns null. Behavior Type Immutable Syntax STDDEV_SAMP ( expression ) OVER ( ... [ window_partition_clause ] ... [ window_order_clause ] ... [ window_frame_clause ] ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument.. OVER(...) See Analytic Functions. HP Vertica Analytic Database (7.0.x) Page 255 of 1539 SQL Reference Manual SQL Functions Notes STDDEV_SAMP() is semantically identical to the non-standard function, STDDEV(). 
Examples
The following example returns the sample standard deviations of salaries in the employee dimension table by job title Assistant Director:

=> SELECT employee_last_name, annual_salary,
       STDDEV(annual_salary) OVER (ORDER BY hire_date) as "stddev_samp"
   FROM employee_dimension
   WHERE job_title = 'Assistant Director';
 employee_last_name | annual_salary |   stddev_samp
--------------------+---------------+------------------
 Goldberg           |         61859 |              NaN
 Miller             |         79582 | 12532.0534829692
 Goldberg           |         74236 | 9090.97147357388
 Campbell           |         66426 |  7909.9541665339
 Moore              |         66630 | 7068.30282316761
 Nguyen             |         53530 | 9154.14713486005
 Harris             |         74115 | 8773.54346886142
 Lang               |         59981 | 8609.60471031374
 Farmer             |         60597 | 8335.41158418579
 Nguyen             |         78941 | 8812.87941405456
...

See Also
l Analytic Functions
l STDDEV [Analytic]
l STDDEV [Aggregate]
l STDDEV_SAMP [Aggregate]

SUM [Analytic]
Computes the sum of an expression over a group of rows within a window. It returns a DOUBLE PRECISION value for a floating-point expression. Otherwise, the return value is the same as the expression data type.

Behavior Type
Immutable

Syntax
SUM ( expression ) OVER (
... [ window_partition_clause ]
... [ window_order_clause ]
... [ window_frame_clause ] )

Parameters
expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument.
OVER(...) See Analytic Functions.

Notes
l If you encounter data overflow when using SUM(), use SUM_FLOAT(), which converts the data to floating point.
l SUM() returns the sum of values of an expression.

Examples
The following query returns the cumulative sum of all of the returns made to stores in January:

=> SELECT calendar_month_name AS month, transaction_type, sales_quantity,
     SUM(sales_quantity) OVER (PARTITION BY calendar_month_name
         ORDER BY date_dimension.date_key) AS SUM
   FROM store.store_sales_fact JOIN date_dimension
   USING(date_key) WHERE calendar_month_name IN ('January')
   AND transaction_type= 'return';
  month  | transaction_type | sales_quantity | SUM
---------+------------------+----------------+------
 January | return           |              4 | 2338
 January | return           |              3 | 2338
 January | return           |              1 | 2338
 January | return           |              5 | 2338
 January | return           |              8 | 2338
 January | return           |              3 | 2338
 January | return           |              5 | 2338
 January | return           |             10 | 2338
 January | return           |              9 | 2338
 January | return           |             10 | 2338
(10 rows)

See Also
l SUM [Aggregate]
l Numeric Data Types

VAR_POP [Analytic]
Returns the statistical population variance of a non-null set of numbers (nulls are ignored) in a group within a window. Results are calculated by the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining:

(SUM(expression*expression) - SUM(expression)*SUM(expression) / COUNT(expression)) / COUNT(expression)

Behavior Type
Immutable

Syntax
VAR_POP ( expression ) OVER (
... [ window_partition_clause ]
... [ window_order_clause ]
... [ window_frame_clause ] )

Parameters
expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument.
OVER(...) See Analytic Functions.
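The formula above can also be cross-checked with an explicit computation. This sketch is illustrative only (it assumes the employee_dimension table, uses the aggregate forms of the functions for brevity, and casts to FLOAT to avoid integer overflow in the intermediate products); var_pop and by_hand should agree:

=> SELECT VAR_POP(annual_salary) AS var_pop,
       ( SUM(annual_salary::FLOAT * annual_salary)
         - SUM(annual_salary::FLOAT) * SUM(annual_salary::FLOAT)
           / COUNT(annual_salary) )
       / COUNT(annual_salary) AS by_hand          -- same formula, written out
   FROM employee_dimension
   WHERE job_title = 'Assistant Director';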
Examples
The following example calculates the cumulative population variance in the store orders fact table of sales in December 2007:

=> SELECT date_ordered,
      VAR_POP(SUM(total_order_cost))
      OVER (ORDER BY date_ordered) "var_pop"
   FROM store.store_orders_fact s
   WHERE date_ordered BETWEEN '2007-12-01' AND '2007-12-31'
   GROUP BY s.date_ordered;
 date_ordered |     var_pop
--------------+------------------
 2007-12-01   |                0
 2007-12-02   |       1129564881
 2007-12-03   | 1206008121.55542
 2007-12-04   | 26353624176.1875
 2007-12-05   | 21315288023.4402
 2007-12-06   | 21619271028.3333
 2007-12-07   | 19867030477.6328
 2007-12-08   |    19197735288.5
 2007-12-09   | 19100157155.2097
 2007-12-10   | 19369222968.0896
(10 rows)

See Also
l VAR_POP [Aggregate]

VAR_SAMP [Analytic]
Returns the sample variance of a non-null set of numbers (nulls in the set are ignored) for each row of the group within a window. Results are calculated by the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining minus 1:

(SUM(expression*expression) - SUM(expression)*SUM(expression) / COUNT(expression)) / (COUNT(expression) - 1)

Behavior Type
Immutable

Syntax
VAR_SAMP ( expression ) OVER (
... [ window_partition_clause ]
... [ window_order_clause ]
... [ window_frame_clause ] )

Parameters
expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument.
OVER(...) See Analytic Functions.

Notes
l VAR_SAMP() returns the sample variance of a set of numbers after it discards the nulls in the set.
l If the function is applied to an empty set, then it returns null.
l This function is similar to VARIANCE(), except that given an input set of one element, VARIANCE() returns 0 and VAR_SAMP() returns null.

Examples
The following example calculates the sample variance in the store orders fact table of sales in December 2007:

=> SELECT date_ordered,
      VAR_SAMP(SUM(total_order_cost))
      OVER (ORDER BY date_ordered) "var_samp"
   FROM store.store_orders_fact s
   WHERE date_ordered BETWEEN '2007-12-01' AND '2007-12-31'
   GROUP BY s.date_ordered;
 date_ordered |     var_samp
--------------+------------------
 2007-12-01   |              NaN
 2007-12-02   |       2259129762
 2007-12-03   | 1809012182.33301
 2007-12-04   |   35138165568.25
 2007-12-05   | 26644110029.3003
 2007-12-06   |      25943125234
 2007-12-07   | 23178202223.9048
 2007-12-08   | 21940268901.1431
 2007-12-09   | 21487676799.6108
 2007-12-10   | 21521358853.4331
(10 rows)

See Also
l VARIANCE [Analytic]
l VAR_SAMP [Aggregate]

VARIANCE [Analytic]
Note: The non-standard function VARIANCE() is provided for compatibility with other databases. It is semantically identical to VAR_SAMP().

Returns the sample variance of a non-null set of numbers (nulls in the set are ignored) for each row of the group within a window. Results are calculated by the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining minus 1:

(SUM(expression*expression) - SUM(expression)*SUM(expression) / COUNT(expression)) / (COUNT(expression) - 1)

Behavior Type
Immutable

Syntax
VARIANCE ( expression ) OVER (
... [ window_partition_clause ]
... [ window_order_clause ]
...
[ window_frame_clause ] ) Parameters expression Any NUMERIC data type or any non-numeric data type that can be implicitly converted to a numeric data type. The function returns the same data type as the numeric data type of the argument. OVER(...) See Analytic Functions. Notes l VARIANCE() returns the variance of expression. l The variance of expression is calculated as follows: n 0 if the number of rows in expression = 1 n VAR_SAMP() if the number of rows in expression > 1 Examples The following example calculates the cumulative variance in the store orders fact table of sales in December 2007: => SELECT date_ordered, VARIANCE(SUM(total_order_cost)) OVER (ORDER BY date_ordered) "variance" FROM store.store_orders_fact s WHERE date_ordered BETWEEN '2007-12-01' AND '2007-12-31' GROUP BY s.date_ordered; HP Vertica Analytic Database (7.0.x) Page 261 of 1539 SQL Reference Manual SQL Functions date_ordered | variance --------------+-----------------2007-12-01 | NaN 2007-12-02 | 2259129762 2007-12-03 | 1809012182.33301 2007-12-04 | 35138165568.25 2007-12-05 | 26644110029.3003 2007-12-06 | 25943125234 2007-12-07 | 23178202223.9048 2007-12-08 | 21940268901.1431 2007-12-09 | 21487676799.6108 2007-12-10 | 21521358853.4331 (10 rows) See Also l VAR_SAMP [Analytic] l VARIANCE [Aggregate] VAR_SAMP [Aggregate] l l HP Vertica Analytic Database (7.0.x) Page 262 of 1539 SQL Reference Manual SQL Functions Date/Time Functions Date and time functions perform conversion, extraction, or manipulation operations on date and time data types and can return date and time information. Usage Functions that take TIME or TIMESTAMP inputs come in two variants: l TIME WITH TIME ZONE or TIMESTAMP WITH TIME ZONE l TIME WITHOUT TIME ZONE or TIMESTAMP WITHOUT TIME ZONE For brevity, these variants are not shown separately. The + and * operators come in commutative pairs; for example, both DATE + INTEGER and INTEGER + DATE. We show only one of each such pair. Daylight Savings Time Considerations When adding an INTERVAL value to (or subtracting an INTERVAL value from) a TIMESTAMP WITH TIME ZONE value, the days component advances (or decrements) the date of the TIMESTAMP WITH TIME ZONE by the indicated number of days. Across daylight saving time changes (with the session time zone set to a time zone that recognizes DST), this means INTERVAL '1 day' does not necessarily equal INTERVAL '24 hours'. For example, with the session time zone set to CST7CDT: TIMESTAMP WITH TIME ZONE '2005-04-02 12:00-07' + INTERVAL '1 day' produces TIMESTAMP WITH TIME ZONE '2005-04-03 12:00-06' Adding INTERVAL '24 hours' to the same initial TIMESTAMP WITH TIME ZONE produces TIMESTAMP WITH TIME ZONE '2005-04-03 13:00-06', This result occurs because there is a change in daylight saving time at 2005-04-03 02:00 in time zone CST7CDT. Date/Time Functions in Transactions CURRENT_TIMESTAMP() and related functions return the start time of the current transaction; their values do not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction HP Vertica Analytic Database (7.0.x) Page 263 of 1539 SQL Reference Manual SQL Functions bear the same timestamp. However, TIMEOFDAY() returns the wall-clock time and advances during transactions. See Also Template Patterns for Date/Time Formatting l ADD_MONTHS Takes a DATE, TIMESTAMP, or TIMESTAMPTZ argument and a number of months and returns a date. 
TIMESTAMPTZ arguments are implicitly cast to TIMESTAMP. Behavior Type Immutable if called with DATE or TIMESTAMP but stable with TIMESTAMPTZ in that its results can change based on TIMEZONE settings Syntax ADD_MONTHS ( d , n ); Parameters d The incoming DATE, TIMESTAMP, or TIMESTAMPTZ. If the start date falls on the last day of the month, or if the resulting month has fewer days than the given day of the month, then the result is the last day of the resulting month. Otherwise, the result has the same start day. n Any INTEGER. Examples The following example's results include a leap year: SELECT ADD_MONTHS('31-Jan-08', 1) "Months"; Months -----------2008-02-29 (1 row) The next example adds four months to January and returns a date in May: SELECT ADD_MONTHS('31-Jan-08', 4) "Months"; Months -----------2008-05-31 (1 row) This example subtracts four months from January, returning a date in September: HP Vertica Analytic Database (7.0.x) Page 264 of 1539 SQL Reference Manual SQL Functions SELECT ADD_MONTHS('31-Jan-08', -4) "Months"; Months -----------2007-09-30 (1 row) Because the following example specifies NULL, the result set is empty: SELECT ADD_MONTHS('31-Jan-03', NULL) "Months"; Months -------(1 row) This example provides no date argument, so even though the number of months specified is 1, the result set is empty: SELECT ADD_MONTHS(NULL, 1) "Months"; Months -------(1 row) In this example, the date field defaults to a timestamp, so the PST is ignored. Even though it is already the next day in Pacific time, the result falls on the same date in New York (two years later): SET TIME ZONE 'America/New_York'; SELECT ADD_MONTHS('2008-02-29 23:30 PST', 24); add_months -----------2010-02-28 (1 row) The next example specifies a timestamp with time zone, so the PST is taken into account: SET TIME ZONE 'America/New_York'; SELECT ADD_MONTHS('2008-02-29 23:30 PST'::TIMESTAMPTZ, 24); add_months -----------2010-03-01 (1 row) AGE_IN_MONTHS Returns an INTEGER value representing the difference in months between two TIMESTAMP, DATE or TIMESTAMPTZ values. HP Vertica Analytic Database (7.0.x) Page 265 of 1539 SQL Reference Manual SQL Functions Behavior Type Stable if second argument is omitted or if either argument is TIMESTAMPTZ. Immutable otherwise. Syntax AGE_IN_MONTHS ( expression1 [ , expression2 ] ) Parameters expression1 Specifies the beginning of the period. expression2 Specifies the end of the period. The default is the CURRENT_DATE. Notes The inputs can be TIMESTAMP, TIMESTAMPTZ, or DATE. Examples The following example returns the age in months of a person born on March 2, 1972 on the date June 21, 1990, with a time elapse of 18 years, 3 months, and 19 days: SELECT AGE_IN_MONTHS(TIMESTAMP '1990-06-21', TIMESTAMP '1972-03-02'); AGE_IN_MONTHS --------------219 (1 row) The next example shows the age in months of the same person (born March 2, 1972) as of March 16, 2010: SELECT AGE_IN_MONTHS(TIMESTAMP 'March 16, 2010', TIMESTAMP '1972-03-02'); AGE_IN_MONTHS --------------456 (1 row) This example returns the age in months of a person born on November 21, 1939: SELECT AGE_IN_MONTHS(TIMESTAMP '1939-11-21'); AGE_IN_MONTHS --------------844 (1 row) In the above form, the result changes as time goes by. HP Vertica Analytic Database (7.0.x) Page 266 of 1539 SQL Reference Manual SQL Functions See Also l AGE_IN_YEARS l INTERVAL AGE_IN_YEARS Returns an INTEGER value representing the difference in years between two TIMESTAMP, DATE or TIMESTAMPTZ values. 
Behavior Type Stable if second argument is omitted or if either argument is TIMESTAMPTZ. Immutable otherwise. Syntax AGE_IN_YEARS ( expression1 [ , expression2 ] ) Parameters expression1 Specifies the beginning of the period. expression2 Specifies the end of the period. The default is the CURRENT_DATE. Notes l The AGE_IN_YEARS() function was previously called AGE. AGE() is not supported. l Inputs can be TIMESTAMP, TIMESTAMPTZ, or DATE. Examples The following example returns the age in years of a person born on March 2, 1972 on the date June 21, 1990, with a time elapse of 18 years, 3 months, and 19 days: SELECT AGE_IN_YEARS(TIMESTAMP '1990-06-21', TIMESTAMP '1972-03-02'); AGE_IN_YEARS -------------18 (1 row) The next example shows the age in years of the same person (born March 2, 1972) as of February 24, 2009: HP Vertica Analytic Database (7.0.x) Page 267 of 1539 SQL Reference Manual SQL Functions SELECT AGE_IN_YEARS(TIMESTAMP '2009-02-24', TIMESTAMP '1972-03-02'); AGE_IN_YEARS -------------36 (1 row) This example returns the age in years of a person born on November 21, 1939: SELECT AGE_IN_YEARS(TIMESTAMP '1939-11-21'); AGE_IN_YEARS -------------70 (1 row) See Also l AGE_IN_MONTHS l INTERVAL CLOCK_TIMESTAMP Returns a value of type TIMESTAMP WITH TIMEZONE representing the current system-clock time. Behavior Type Volatile Syntax CLOCK_TIMESTAMP() Notes This function uses the date and time supplied by the operating system on the server to which you are connected, which should be the same across all servers. The value changes each time you call it. Examples The following command returns the current time on your system: SELECT CLOCK_TIMESTAMP() "Current Time"; Current Time ------------------------------ HP Vertica Analytic Database (7.0.x) Page 268 of 1539 SQL Reference Manual SQL Functions 2010-09-23 11:41:23.33772-04 (1 row) Each time you call the function, you get a different result. The difference in this example is in microseconds: SELECT CLOCK_TIMESTAMP() "Time 1", CLOCK_TIMESTAMP() "Time 2"; Time 1 | Time 2 -------------------------------+------------------------------2010-09-23 11:41:55.369201-04 | 2010-09-23 11:41:55.369202-04 (1 row) See Also l STATEMENT_TIMESTAMP l TRANSACTION_TIMESTAMP CURRENT_DATE Returns the date (date-type value) on which the current transaction started. Behavior Type Stable Syntax CURRENT_DATE Notes The CURRENT_DATE function does not require parentheses. Examples SELECT CURRENT_DATE; ?column? -----------2010-09-23 (1 row) CURRENT_TIME Returns a value of type TIME WITH TIMEZONE representing the time of day. HP Vertica Analytic Database (7.0.x) Page 269 of 1539 SQL Reference Manual SQL Functions Behavior Type Stable Syntax CURRENT_TIME [ ( precision ) ] Parameters precision (INTEGER) causes the result to be rounded to the specified number of fractional digits in the seconds field. Notes l This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the current time, so that multiple modifications within the same transaction bear the same timestamp. l The CURRENT_TIME function does not require parentheses. Examples SELECT CURRENT_TIME "Current Time"; Current Time -------------------12:45:12.186089-05 (1 row) CURRENT_TIMESTAMP Returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current transaction. 
Behavior Type Stable Syntax CURRENT_TIMESTAMP [ ( precision ) ] HP Vertica Analytic Database (7.0.x) Page 270 of 1539 SQL Reference Manual SQL Functions Parameters precision (INTEGER) causes the result to be rounded to the specified number of fractional digits in the seconds field. Range of INTEGER is 0-6. Notes This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp. Examples SELECT CURRENT_TIMESTAMP; ?column? ------------------------------2010-09-23 11:37:22.354823-04 (1 row) SELECT CURRENT_TIMESTAMP(2); ?column? --------------------------2010-09-23 11:37:22.35-04 (1 row) DATE_PART Is modeled on the traditional Ingres equivalent to the SQL-standard function EXTRACT. Internally DATE_PART is used by the EXTRACT function. Behavior Type Stable when source is of type TIMESTAMPTZ, Immutable otherwise. Syntax DATE_PART ( field , source ) HP Vertica Analytic Database (7.0.x) Page 271 of 1539 SQL Reference Manual SQL Functions CENTURY The century number. SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13'); Result: 20 SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 21 The first century starts at 0001-01-01 00:00:00 AD. This definition applies to all Gregorian calendar countries. There is no century number 0, you go from –1 to 1. DAY The day (of the month) field (1–31). SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 16 SELECT EXTRACT(DAY FROM DATE '2001-02-16'); Result: 16 DECADE The year field divided by 10. SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 200 SELECT EXTRACT(DECADE FROM DATE '2001-02-16'); Result: 200 DOQ The day within the current quarter. SELECT EXTRACT(DOQ FROM CURRENT_DATE); Result: 89 The result is calculated as follows: Current date = June 28, current quarter = 2 (April, May, June). 30 (April) + 31 (May) + 28 (June current day) = 89. DOQ recognizes leap year days. DOW The day of the week (0–6; Sunday is 0). SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 5 SELECT EXTRACT(DOW FROM DATE '2001-02-16'); Result: 5 EXTRACT's day of the week numbering is different from that of the TO_CHAR function. DOY The day of the year (1–365/366) SELECT EXTRACT(DOY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 47 SELECT EXTRACT(DOY FROM DATE '2001-02-16'); Result: 5 HP Vertica Analytic Database (7.0.x) Page 272 of 1539 SQL Reference Manual SQL Functions EPOCH For DATE and TIMESTAMP values, the number of seconds since 1970-01-01 00:00:00-00 (can be negative); for INTERVAL values, the total number of seconds in the interval. SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-0 8'); Result: 982384720 SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); Result: 442800 Here is how you can convert an epoch value back to a timestamp: SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second'; HOUR The hour field (0–23). SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 20 SELECT EXTRACT(HOUR FROM TIME '13:45:59'); Result: 13 ISODOW The ISO day of the week (1–7; Monday is 1). By definition, the ISO-8601 week starts on Monday, and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year. 
Because of this, it is possible for early January dates to be part of the 52nd or 53rd week of the previous year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005. SELECT EXTRACT(ISODOW FROM DATE '2010-09-27'); Result: 1 ISOWEEK The ISO week, which consists of 7 days starting on Monday and ending on Sunday. The first week of the year is the week that contains January 4. ISOYEAR The ISO year, which is 52 or 53 weeks (Monday–Sunday). SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01'); Result: 2005 SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02'); Result: 2006 SELECT EXTRACT(ISOYEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 MICROSECONDS The seconds field, including fractional parts, multiplied by 1,000,000. This includes full seconds. SELECT EXTRACT(MICROSECONDS FROM TIME '17:12:28.5'); Result: 28500000 MILLENNIUM The millennium number. SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 3 Years in the 1900s are in the second millennium. The third millennium starts January 1, 2001. HP Vertica Analytic Database (7.0.x) Page 273 of 1539 SQL Reference Manual SQL Functions MILLISECONDS The seconds field, including fractional parts, multiplied by 1000. This includes full seconds. SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5'); Result: 28500 MINUTE The minutes field (0 - 59). SELECT EXTRACT(MINUTE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 38 SELECT EXTRACT(MINUTE FROM TIME '13:45:59'); Result: 45 MONTH For timestamp values, the number of the month within the year (1 - 12) ; for interval values the number of months, modulo 12 (0 - 11). SELECT EXTRACT(MONTH FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 3 months'); Result: 3 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 13 months'); Result: 1 QUARTER The quarter of the year (1–4) that the day is in (for timestamp values only). SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 1 SECOND The seconds field, including fractional parts (0–59) (60 if leap seconds are implemented by the operating system). SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 40 SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); Result: 28.5 TIME ZONE The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. TIMEZONE_HOUR The hour component of the time zone offset. TIMEZONE_MINUT E The minute component of the time zone offset. WEEK The number of the week of the calendar year that the day is in. SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 7 SELECT EXTRACT(WEEK FROM DATE '2001-02-16'); Result: 7 YEAR The year field. Keep in mind there is no 0 AD, so subtract BC years from AD years with care. SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 HP Vertica Analytic Database (7.0.x) Page 274 of 1539 SQL Reference Manual SQL Functions Parameters field Single-quoted string value that specifies the field to extract. You must enter the constant field values (for example, CENTURY, DAY, etc). when specifying the field. Note: The field parameter values are the same for the EXTRACT function. source A date/time expression Field Values CENTURY The century number. SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13'); Result: 20 SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 21 The first century starts at 0001-01-01 00:00:00 AD. 
This definition applies to all Gregorian calendar countries. There is no century number 0, you go from –1 to 1. DAY The day (of the month) field (1–31). SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 16 SELECT EXTRACT(DAY FROM DATE '2001-02-16'); Result: 16 DECADE The year field divided by 10. SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 200 SELECT EXTRACT(DECADE FROM DATE '2001-02-16'); Result: 200 DOQ The day within the current quarter. SELECT EXTRACT(DOQ FROM CURRENT_DATE); Result: 89 The result is calculated as follows: Current date = June 28, current quarter = 2 (April, May, June). 30 (April) + 31 (May) + 28 (June current day) = 89. DOQ recognizes leap year days. HP Vertica Analytic Database (7.0.x) Page 275 of 1539 SQL Reference Manual SQL Functions DOW The day of the week (0–6; Sunday is 0). SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 5 SELECT EXTRACT(DOW FROM DATE '2001-02-16'); Result: 5 EXTRACT's day of the week numbering is different from that of the TO_CHAR function. DOY The day of the year (1–365/366) SELECT EXTRACT(DOY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 47 SELECT EXTRACT(DOY FROM DATE '2001-02-16'); Result: 5 EPOCH For DATE and TIMESTAMP values, the number of seconds since 1970-01-01 00:00:00-00 (can be negative); for INTERVAL values, the total number of seconds in the interval. SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-0 8'); Result: 982384720 SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); Result: 442800 Here is how you can convert an epoch value back to a timestamp: SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second'; HOUR The hour field (0–23). SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 20 SELECT EXTRACT(HOUR FROM TIME '13:45:59'); Result: 13 ISODOW The ISO day of the week (1–7; Monday is 1). By definition, the ISO-8601 week starts on Monday, and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year. Because of this, it is possible for early January dates to be part of the 52nd or 53rd week of the previous year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005. SELECT EXTRACT(ISODOW FROM DATE '2010-09-27'); Result: 1 ISOWEEK The ISO week, which consists of 7 days starting on Monday and ending on Sunday. The first week of the year is the week that contains January 4. HP Vertica Analytic Database (7.0.x) Page 276 of 1539 SQL Reference Manual SQL Functions ISOYEAR The ISO year, which is 52 or 53 weeks (Monday–Sunday). SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01'); Result: 2005 SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02'); Result: 2006 SELECT EXTRACT(ISOYEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 MICROSECONDS The seconds field, including fractional parts, multiplied by 1,000,000. This includes full seconds. SELECT EXTRACT(MICROSECONDS FROM TIME '17:12:28.5'); Result: 28500000 MILLENNIUM The millennium number. SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 3 Years in the 1900s are in the second millennium. The third millennium starts January 1, 2001. MILLISECONDS The seconds field, including fractional parts, multiplied by 1000. This includes full seconds. SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5'); Result: 28500 MINUTE The minutes field (0 - 59). 
SELECT EXTRACT(MINUTE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 38 SELECT EXTRACT(MINUTE FROM TIME '13:45:59'); Result: 45 MONTH For timestamp values, the number of the month within the year (1 - 12) ; for interval values the number of months, modulo 12 (0 - 11). SELECT EXTRACT(MONTH FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 3 months'); Result: 3 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 13 months'); Result: 1 QUARTER The quarter of the year (1–4) that the day is in (for timestamp values only). SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 1 SECOND The seconds field, including fractional parts (0–59) (60 if leap seconds are implemented by the operating system). SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 40 SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); Result: 28.5 HP Vertica Analytic Database (7.0.x) Page 277 of 1539 SQL Reference Manual SQL Functions TIME ZONE The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. TIMEZONE_HOUR The hour component of the time zone offset. TIMEZONE_MINUT E The minute component of the time zone offset. WEEK The number of the week of the calendar year that the day is in. SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 7 SELECT EXTRACT(WEEK FROM DATE '2001-02-16'); Result: 7 YEAR The year field. Keep in mind there is no 0 AD, so subtract BC years from AD years with care. SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 Examples The following example extracts the day value from the input parameters: SELECT DATE_PART('day', TIMESTAMP '2009-02-24 20:38:40') "Day"; Day ----24 (1 row) The following example extracts the month value from the input parameters: SELECT DATE_PART('month', TIMESTAMP '2009-02-24 20:38:40') "Month"; Month ------2 (1 row) The following example extracts the year value from the input parameters: SELECT DATE_PART('year', TIMESTAMP '2009-02-24 20:38:40') "Year"; Year -----2009 (1 row) The following example extracts the hours from the input parameters: SELECT DATE_PART('hour', TIMESTAMP '2009-02-24 20:38:40') "Hour"; Hour HP Vertica Analytic Database (7.0.x) Page 278 of 1539 SQL Reference Manual SQL Functions -----20 (1 row) The following example extracts the minutes from the input parameters: SELECT DATE_PART('minutes', TIMESTAMP '2009-02-24 20:38:40') "Minutes"; Minutes --------38 (1 row) The following example extracts the seconds from the input parameters: SELECT DATE_PART('seconds', TIMESTAMP '2009-02-24 20:38:40') "Seconds"; Seconds --------40 (1 row) The following example extracts the day of quarter (DOQ) from the input parameters: SELECT DATE_PART('DOQ', TIMESTAMP '2009-02-24 20:38:40') "DOQ"; DOQ ----55 (1 row) SELECT DATE_PART('day', INTERVAL '29 days 23 hours'); date_part ----------29 (1 row) Notice what happens to the above query if you add an hour: SELECT DATE_PART('day', INTERVAL '29 days 24 hours'); date_part ----------30 (1 row) The following example returns 0 because an interval in hours is up to 24 only: SELECT DATE_PART('hour', INTERVAL '24 hours 45 minutes'); date_part ----------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 279 of 1539 SQL Reference Manual SQL Functions See Also EXTRACT l DATE Converts a TIMESTAMP, TIMESTAMPTZ, DATE, or VARCHAR to a DATE. You can also use this function to convert an INTEGER to a DATE. 
In this case, the resulting date reflects the integer number of days after 0001 AD. (Day 1 is January 1, 0001.) Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax DATE ( d | n ) Parameters d TIMESTAMP, TIMESTAMPTZ, VARCHAR, or DATE input value. n Integer you want to convert to a DATE. Examples => SELECT DATE (1); DATE ------------ 0001-01-01 (1 row) => SELECT DATE (734260); DATE ------------ 2011-05-03 (1 row) => SELECT DATE ('TODAY'); DATE ------------ 2011-05-31 (1 row) DATE_TRUNC Truncates date and time values as indicated. The return value has the same date/time type as the input (TIMESTAMP or TIMESTAMPTZ in the examples below), with all fields that are less significant than the selected one set to zero (or one, for day and month). Behavior Type Stable. Syntax DATE_TRUNC ( field , source ) Parameters field String constant that selects the precision to which to truncate the input value. source Date/time value expression, such as a TIMESTAMP or TIMESTAMPTZ value. Field Values CENTURY The century number. The first century starts at 0001-01-01 00:00:00 AD. This definition applies to all Gregorian calendar countries. There is no century number 0; you go from –1 to 1. DAY The day (of the month) field (1–31). DECADE The year field divided by 10. HOUR The hour field (0–23). MICROSECONDS The seconds field, including fractional parts, multiplied by 1,000,000. This includes full seconds. MILLENNIUM The millennium number. Years in the 1900s are in the second millennium. The third millennium starts January 1, 2001. MILLISECONDS The seconds field, including fractional parts, multiplied by 1000. Note that this includes full seconds. MINUTE The minutes field (0–59). MONTH For timestamp values, the number of the month within the year (1–12); for interval values, the number of months, modulo 12 (0–11). SECOND The seconds field, including fractional parts (0–59) (60 if leap seconds are implemented by the operating system). WEEK The number of the week of the year that the day is in. By definition, the ISO-8601 week starts on Monday, and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year. Because of this, it is possible for early January dates to be part of the 52nd or 53rd week of the previous year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005. YEAR The year field. Keep in mind there is no 0 AD, so subtract BC years from AD years with care. Examples The following example sets the field value as hour and returns the hour, truncating the minutes and seconds: VMart=> select date_trunc('hour', timestamp '2012-02-24 13:38:40') as hour; hour --------------------- 2012-02-24 13:00:00 (1 row) The following example returns the year from the input timestamptz '2012-02-24 13:38:40'.
The function also defaults the month and day to January 1, truncates the hour:minute:second of the timestamp, and appends the time zone (-05): VMart=> select date_trunc('year', timestamptz '2012-02-24 13:38:40') as year; year -----------------------2012-01-01 00:00:00-05 (1 row) The following example returns the year and month and defaults day of month to 1, truncating the rest of the string: VMart=> select date_trunc('month', timestamp '2012-02-24 13:38:40') as year; year --------------------2012-02-01 00:00:00 (1 row) DATEDIFF Returns the difference between two date or time values, based on the specified start and end arguments. HP Vertica Analytic Database (7.0.x) Page 282 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax 1 DATEDIFF ( datepart , startdate , enddate); Syntax 2 DATEDIFF ( datepart , starttime , endtime); Parameters datepart Returns the number of specified datepart boundaries between the specified startdate and enddate. Can be an unquoted identifier, a quoted string, or an expression in parentheses, which evaluates to the datepart as a character string. The following table lists the valid datepartarguments. startdate datepart Abbreviation year yy, yyyy quarter qq, q month mm, m day dd, d, dy, dayofyear, y week wk, ww hour hh minute mi, n second ss, s millisecond ms microsecond mcs, us Start date for the calculation and is an expression that returns a TIMESTAMP, DATE, or TIMESTAMPTZ value. The startdate value is not included in the count. HP Vertica Analytic Database (7.0.x) Page 283 of 1539 SQL Reference Manual SQL Functions enddate End date for the calculation and is an expression that returns a TIMESTAMP, DATE, or TIMESTAMPTZ value. The enddate value is included in the count. starttime endtime Start time for the calculation and is an expression that returns an INTERVAL or TIME data type. l The starttime value is not included in the count. l Year, quarter, or month dateparts are not allowed. End time for the calculation and is an expression that returns an INTERVAL or TIME data type. l The endtime value is included in the count. l Year, quarter, or month dateparts are not allowed. Notes l DATEDIFF() is an immutable function with a default type of TIMESTAMP. It also takes DATE. If TIMESTAMPTZ is specified, the function is stable. l HP Vertica accepts statements written in any of the following forms: => DATEDIFF(year, s, e); => DATEDIFF('year', s, e); If you use an expression, the expression must be enclosed in parentheses: => DATEDIFF((expression), s, e); l Starting arguments are not included in the count, but end arguments are included. The Datepart Boundaries DATEDIFF calculates results according to ticks—or boundaries—within the date range or time range. Results are calculated based on the specified datepart. Examine the following statement and its results: SELECT DATEDIFF('year', TO_DATE('01-01-2005','MM-DD-YYYY'), TO_DATE('12-31-2008','MM-DD-Y YYY')); datediff ---------3 (1 row) HP Vertica Analytic Database (7.0.x) Page 284 of 1539 SQL Reference Manual SQL Functions The previous example specified a datepart of year, a startdate of January 1, 2005 and an enddate of December 31, 2008. DATEDIFF returns 3 by counting the year intervals as follows: [1] January 1, 2006 + [2] January 1, 2007 + [3] January 1, 2008 = 3 The function returns 3, and not 4, because startdate (January 1, 2005) is not counted in the calculation. 
DATEDIFF also ignores the months between January 1, 2008 and December 31, 2008 because the datepart specified is year and only the start of each year is counted. Sometimes the enddate occurs earlier in the ending year than the startdate in the starting year. For example, assume a datepart of year, a startdate of August 15, 2005, and an enddate of January 1, 2009. In this scenario, less than four full years have elapsed, but DATEDIFF counts the same way it did in the previous example, returning the number of January 1s between the limits. In the following query, HP Vertica recognizes the full year 2005 as the starting year and 2009 as the ending year. SELECT DATEDIFF('year', TO_DATE('08-15-2005','MM-DD-YYYY'), TO_DATE('01-01-2009','MM-DD-YYYY')); The count occurs as follows: [1] January 1, 2006 + [2] January 1, 2007 + [3] January 1, 2008 + [4] January 1, 2009 = 4 Even though August 15 has not yet occurred in the enddate, the function counts the entire enddate year as one tick or boundary because of the year datepart. Examples Year: In this example, the startdate and enddate are adjacent. Although they are only one second apart, they cross a year boundary, so the result is 1. SELECT DATEDIFF('year', TIMESTAMP '2008-12-31 23:59:59', '2009-01-01 00:00:00'); datediff ---------- 1 (1 row) Quarters start on January, April, July, and October. In the following example, the result is 0 because the difference from January to February in the same calendar year does not span a quarter: SELECT DATEDIFF('qq', TO_DATE('01-01-1995','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY')); datediff ---------- 0 (1 row) The next example, however, returns eight quarters because the difference spans two full years. The extra month is ignored: SELECT DATEDIFF('quarter', TO_DATE('01-01-1993','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY')); datediff ---------- 8 (1 row) Months are based on real calendar months. The following statement returns 1 because there is a one-month difference between January and February in the same calendar year: SELECT DATEDIFF('mm', TO_DATE('01-01-2005','MM-DD-YYYY'), TO_DATE('02-02-2005','MM-DD-YYYY')); datediff ---------- 1 (1 row) The next example returns -1 because the startdate is later than the enddate: SELECT DATEDIFF('month', TO_DATE('02-02-1995','MM-DD-YYYY'), TO_DATE('01-01-1995','MM-DD-YYYY')); datediff ---------- -1 (1 row) And this third example returns 23 because there is a 23-month difference between February 2, 1993 and January 1, 1995: SELECT DATEDIFF('m', TO_DATE('02-02-1993','MM-DD-YYYY'), TO_DATE('01-01-1995','MM-DD-YYYY')); datediff ---------- 23 (1 row) Weeks start on Sunday at midnight. The first example returns 0 because, even though the week starts on a Sunday, it is not a full calendar week: SELECT DATEDIFF('ww', TO_DATE('02-22-2009','MM-DD-YYYY'), TO_DATE('02-28-2009','MM-DD-YYYY')); datediff ---------- 0 (1 row) The following example returns 1 (week); January 1, 2000 fell on a Saturday.
SELECT DATEDIFF('week', TO_DATE('01-01-2000','MM-DD-YYYY'), TO_DATE('01-02-2000','MM-DD-YYYY')); datediff ---------- 1 (1 row) In the next example, DATEDIFF() counts the weeks between January 1, 1995 and February 2, 1995 and returns 4 (weeks): SELECT DATEDIFF('wk', TO_DATE('01-01-1995','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY')); datediff ---------- 4 (1 row) The next example returns a difference of 100 weeks: SELECT DATEDIFF('ww', TO_DATE('02-02-2006','MM-DD-YYYY'), TO_DATE('01-01-2008','MM-DD-YYYY')); datediff ---------- 100 (1 row) Days are based on real calendar days. The first example returns 31, the full number of days in the month of July 2008: SELECT DATEDIFF('day', 'July 1, 2008', 'Aug 1, 2008'::date); datediff ---------- 31 (1 row) Just over two years of days: SELECT DATEDIFF('d', TO_TIMESTAMP('01-01-1993','MM-DD-YYYY'), TO_TIMESTAMP('02-02-1995','MM-DD-YYYY')); datediff ---------- 762 (1 row) Hours, minutes, and seconds are based on clock time. The first example counts backwards from March 2 to February 14 and returns -384 hours: SELECT DATEDIFF('hour', TO_DATE('03-02-2009','MM-DD-YYYY'), TO_DATE('02-14-2009','MM-DD-YYYY')); datediff ---------- -384 (1 row) Another hours example: SELECT DATEDIFF('hh', TO_TIMESTAMP('01-01-1993','MM-DD-YYYY'), TO_TIMESTAMP('02-02-1995','MM-DD-YYYY')); datediff ---------- 18288 (1 row) This example counts the minutes backwards: SELECT DATEDIFF('mi', TO_TIMESTAMP('01-01-1993 03:00:45','MM-DD-YYYY HH:MI:SS'), TO_TIMESTAMP('01-01-1993 01:30:21','MM-DD-YYYY HH:MI:SS')); datediff ---------- -90 (1 row) And this example counts the minutes forward: SELECT DATEDIFF('minute', TO_DATE('01-01-1993','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY')); datediff ---------- 1097280 (1 row) In the following example, the query counts the difference in seconds, beginning at a start time of 4:44 and ending at 5:55 with an interval of two days: SELECT DATEDIFF('ss', TIME '04:44:42.315786', INTERVAL '2 05:55:52.963558'); datediff ---------- 177070 (1 row) See Also Date/Time Expressions l DAY Extracts the day of the month from a TIMESTAMP, TIMESTAMPTZ, INTEGER, VARCHAR, or INTERVAL input value. The return value is of type INTEGER. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax DAY ( d ) Parameters d TIMESTAMP, TIMESTAMPTZ, INTERVAL, VARCHAR, or INTEGER input value. Examples => SELECT DAY (6); DAY ----- 6 (1 row) => SELECT DAY(TIMESTAMP 'sep 22, 2011 12:34'); DAY ----- 22 (1 row) => SELECT DAY('sep 22, 2011 12:34'); DAY ----- 22 (1 row) => SELECT DAY(INTERVAL '35 12:34'); DAY ----- 35 (1 row) DAYOFMONTH Returns an integer representing the day of the month based on a VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax DAYOFMONTH ( d ) Parameters d VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value. Example => SELECT DAYOFMONTH (TIMESTAMP 'sep 22, 2011 12:34'); DAYOFMONTH ------------ 22 (1 row) DAYOFWEEK Returns an INTEGER representing the day of the week based on a TIMESTAMP, TIMESTAMPTZ, VARCHAR, or DATE input value.
Valid return values are: Integer Week Day 1 Sunday 2 Monday 3 Tuesday 4 Wednesday 5 Thursday 6 Friday 7 Saturday HP Vertica Analytic Database (7.0.x) Page 290 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax DAYOFWEEK ( d ) Parameters d TIMESTAMP, TIMESTAMPTZ, VARCHAR, or DATE input value. Example => SELECT DAYOFWEEK (TIMESTAMP 'sep 17, 2011 12:34'); DAYOFWEEK ----------7 (1 row) DAYOFWEEK_ISO Returns an INTEGER representing the ISO 8061 day of the week based on a VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value. Valid return values are: Integer Week Day 1 Monday 2 Tuesday 3 Wednesday 4 Thursday 5 Friday 6 Saturday 7 Sunday Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. HP Vertica Analytic Database (7.0.x) Page 291 of 1539 SQL Reference Manual SQL Functions Syntax DAYOFWEEK_ISO ( d ) Parameters d VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value. Examples => SELECT DAYOFWEEK_ISO(TIMESTAMP 'Sep 22, 2011 12:34'); DAYOFWEEK_ISO --------------4 (1 row) The following example shows how to combine the DAYOFWEEK_ISO, WEEK_ISO, and YEAR_ ISO functions to find the ISO day of the week, week, and year: => SELECT DAYOFWEEK_ISO('Jan 1, 2000'), WEEK_ISO('Jan 1, 2000'),YEAR_ISO('Jan1,2000'); DAYOFWEEK_ISO | WEEK_ISO | YEAR_ISO ---------------+----------+---------6 | 52 | 1999 (1 row) See Also l WEEK_ISO l DAYOFWEEK_ISO l http://en.wikipedia.org/wiki/ISO_8601 DAYOFYEAR Returns an INTEGER representing the day of the year based on a TIMESTAMP, TIMESTAMPTZ , VARCHAR, or DATE input value. (January 1 is day 1.) Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax DAYOFYEAR ( d ) HP Vertica Analytic Database (7.0.x) Page 292 of 1539 SQL Reference Manual SQL Functions Parameters d TIMESTAMP, TIMESTAMPTZ, VARCHAR, OR DATE input value. Example => SELECT DAYOFYEAR (TIMESTAMP 'SEPT 22,2011 12:34'); DAYOFYEAR ----------265 (1 row) DAYS Converts a DATE, VARCHAR, TIMESTAMP, or TIMESTAMPTZ to an INTEGER, reflecting the number of days after 0001 AD. Behavior Type Immutable Syntax DAYS( DATE d ) Parameters DATE d VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value. Example => SELECT DAYS (DATE '2011-01-22'); DAYS -------734159 (1 row) => SELECT DAYS ('1999-12-31'); DAYS -------730119 (1 row) HP Vertica Analytic Database (7.0.x) Page 293 of 1539 SQL Reference Manual SQL Functions EXTRACT Retrieves subfields such as year or hour from date/time values and returns values of type NUMERIC. EXTRACT is primarily intended for computational processing, rather than for formatting date/time values for display. Internally EXTRACT uses the DATE_PART function. Behavior Type Stable when source is of type TIMESTAMPTZ, Immutable otherwise. Syntax EXTRACT ( field FROM source ) CENTURY The century number. SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13'); Result: 20 SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 21 The first century starts at 0001-01-01 00:00:00 AD. This definition applies to all Gregorian calendar countries. There is no century number 0, you go from –1 to 1. DAY The day (of the month) field (1–31). SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 16 SELECT EXTRACT(DAY FROM DATE '2001-02-16'); Result: 16 DECADE The year field divided by 10. SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 200 SELECT EXTRACT(DECADE FROM DATE '2001-02-16'); Result: 200 DOQ The day within the current quarter. 
SELECT EXTRACT(DOQ FROM CURRENT_DATE); Result: 89 The result is calculated as follows: Current date = June 28, current quarter = 2 (April, May, June). 30 (April) + 31 (May) + 28 (June current day) = 89. DOQ recognizes leap year days. HP Vertica Analytic Database (7.0.x) Page 294 of 1539 SQL Reference Manual SQL Functions DOW The day of the week (0–6; Sunday is 0). SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 5 SELECT EXTRACT(DOW FROM DATE '2001-02-16'); Result: 5 EXTRACT's day of the week numbering is different from that of the TO_CHAR function. DOY The day of the year (1–365/366) SELECT EXTRACT(DOY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 47 SELECT EXTRACT(DOY FROM DATE '2001-02-16'); Result: 5 EPOCH For DATE and TIMESTAMP values, the number of seconds since 1970-01-01 00:00:00-00 (can be negative); for INTERVAL values, the total number of seconds in the interval. SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-0 8'); Result: 982384720 SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); Result: 442800 Here is how you can convert an epoch value back to a timestamp: SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second'; HOUR The hour field (0–23). SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 20 SELECT EXTRACT(HOUR FROM TIME '13:45:59'); Result: 13 ISODOW The ISO day of the week (1–7; Monday is 1). By definition, the ISO-8601 week starts on Monday, and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year. Because of this, it is possible for early January dates to be part of the 52nd or 53rd week of the previous year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005. SELECT EXTRACT(ISODOW FROM DATE '2010-09-27'); Result: 1 ISOWEEK The ISO week, which consists of 7 days starting on Monday and ending on Sunday. The first week of the year is the week that contains January 4. HP Vertica Analytic Database (7.0.x) Page 295 of 1539 SQL Reference Manual SQL Functions ISOYEAR The ISO year, which is 52 or 53 weeks (Monday–Sunday). SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01'); Result: 2005 SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02'); Result: 2006 SELECT EXTRACT(ISOYEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 MICROSECONDS The seconds field, including fractional parts, multiplied by 1,000,000. This includes full seconds. SELECT EXTRACT(MICROSECONDS FROM TIME '17:12:28.5'); Result: 28500000 MILLENNIUM The millennium number. SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 3 Years in the 1900s are in the second millennium. The third millennium starts January 1, 2001. MILLISECONDS The seconds field, including fractional parts, multiplied by 1000. This includes full seconds. SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5'); Result: 28500 MINUTE The minutes field (0 - 59). SELECT EXTRACT(MINUTE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 38 SELECT EXTRACT(MINUTE FROM TIME '13:45:59'); Result: 45 MONTH For timestamp values, the number of the month within the year (1 - 12) ; for interval values the number of months, modulo 12 (0 - 11). SELECT EXTRACT(MONTH FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 3 months'); Result: 3 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 13 months'); Result: 1 QUARTER The quarter of the year (1–4) that the day is in (for timestamp values only). 
SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 1 SECOND The seconds field, including fractional parts (0–59) (60 if leap seconds are implemented by the operating system). SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 40 SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); Result: 28.5 HP Vertica Analytic Database (7.0.x) Page 296 of 1539 SQL Reference Manual SQL Functions TIME ZONE The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. TIMEZONE_HOUR The hour component of the time zone offset. TIMEZONE_MINUT E The minute component of the time zone offset. WEEK The number of the week of the calendar year that the day is in. SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 7 SELECT EXTRACT(WEEK FROM DATE '2001-02-16'); Result: 7 YEAR The year field. Keep in mind there is no 0 AD, so subtract BC years from AD years with care. SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 Parameters field Identifier or string that selects what field to extract from the source value. You must enter the constant field values (i.e. CENTURY, DAY, etc). when specifying the field. Note: The field parameter is the same for the DATE_PART() function. source Expression of type DATE, TIMESTAMP, TIME, or INTERVAL. Note: Expressions of type DATE are cast to TIMESTAMP. Field Values CENTURY The century number. SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13'); Result: 20 SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 21 The first century starts at 0001-01-01 00:00:00 AD. This definition applies to all Gregorian calendar countries. There is no century number 0, you go from –1 to 1. DAY The day (of the month) field (1–31). SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 16 SELECT EXTRACT(DAY FROM DATE '2001-02-16'); Result: 16 HP Vertica Analytic Database (7.0.x) Page 297 of 1539 SQL Reference Manual SQL Functions DECADE The year field divided by 10. SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 200 SELECT EXTRACT(DECADE FROM DATE '2001-02-16'); Result: 200 DOQ The day within the current quarter. SELECT EXTRACT(DOQ FROM CURRENT_DATE); Result: 89 The result is calculated as follows: Current date = June 28, current quarter = 2 (April, May, June). 30 (April) + 31 (May) + 28 (June current day) = 89. DOQ recognizes leap year days. DOW The day of the week (0–6; Sunday is 0). SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 5 SELECT EXTRACT(DOW FROM DATE '2001-02-16'); Result: 5 EXTRACT's day of the week numbering is different from that of the TO_CHAR function. DOY The day of the year (1–365/366) SELECT EXTRACT(DOY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 47 SELECT EXTRACT(DOY FROM DATE '2001-02-16'); Result: 5 EPOCH For DATE and TIMESTAMP values, the number of seconds since 1970-01-01 00:00:00-00 (can be negative); for INTERVAL values, the total number of seconds in the interval. SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-0 8'); Result: 982384720 SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); Result: 442800 Here is how you can convert an epoch value back to a timestamp: SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second'; HOUR The hour field (0–23). 
SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 20 SELECT EXTRACT(HOUR FROM TIME '13:45:59'); Result: 13 HP Vertica Analytic Database (7.0.x) Page 298 of 1539 SQL Reference Manual SQL Functions ISODOW The ISO day of the week (1–7; Monday is 1). By definition, the ISO-8601 week starts on Monday, and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year. Because of this, it is possible for early January dates to be part of the 52nd or 53rd week of the previous year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005. SELECT EXTRACT(ISODOW FROM DATE '2010-09-27'); Result: 1 ISOWEEK The ISO week, which consists of 7 days starting on Monday and ending on Sunday. The first week of the year is the week that contains January 4. ISOYEAR The ISO year, which is 52 or 53 weeks (Monday–Sunday). SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01'); Result: 2005 SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02'); Result: 2006 SELECT EXTRACT(ISOYEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 MICROSECONDS The seconds field, including fractional parts, multiplied by 1,000,000. This includes full seconds. SELECT EXTRACT(MICROSECONDS FROM TIME '17:12:28.5'); Result: 28500000 MILLENNIUM The millennium number. SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 3 Years in the 1900s are in the second millennium. The third millennium starts January 1, 2001. MILLISECONDS The seconds field, including fractional parts, multiplied by 1000. This includes full seconds. SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5'); Result: 28500 MINUTE The minutes field (0 - 59). SELECT EXTRACT(MINUTE FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 38 SELECT EXTRACT(MINUTE FROM TIME '13:45:59'); Result: 45 HP Vertica Analytic Database (7.0.x) Page 299 of 1539 SQL Reference Manual SQL Functions MONTH For timestamp values, the number of the month within the year (1 - 12) ; for interval values the number of months, modulo 12 (0 - 11). SELECT EXTRACT(MONTH FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 3 months'); Result: 3 SELECT EXTRACT(MONTH FROM INTERVAL '2 years 13 months'); Result: 1 QUARTER The quarter of the year (1–4) that the day is in (for timestamp values only). SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 1 SECOND The seconds field, including fractional parts (0–59) (60 if leap seconds are implemented by the operating system). SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 40 SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); Result: 28.5 TIME ZONE The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. TIMEZONE_HOUR The hour component of the time zone offset. TIMEZONE_MINUT E The minute component of the time zone offset. WEEK The number of the week of the calendar year that the day is in. SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 7 SELECT EXTRACT(WEEK FROM DATE '2001-02-16'); Result: 7 YEAR The year field. Keep in mind there is no 0 AD, so subtract BC years from AD years with care. 
SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 2001 Examples => SELECT EXTRACT (DAY FROM DATE '2008-12-25'); date_part ----------25 (1 row) => SELECT EXTRACT (MONTH FROM DATE '2008-12-25'); date_part ----------- HP Vertica Analytic Database (7.0.x) Page 300 of 1539 SQL Reference Manual SQL Functions 12 (1 row SELECT EXTRACT(DOQ FROM CURRENT_DATE); date_part ----------89 (1 row) Remember that internally EXTRACT() uses the DATE_PART() function: => SELECT EXTRACT(EPOCH FROM AGE_IN_YEARS(TIMESTAMP '2009-02-24', 2') :: INTERVAL year); date_part ----------1136073600 (1 row) TIMESTAMP '1972-03-0 In the above example, AGE_IN_YEARS is 36. The UNIX epoch uses 365.25 days per year: => SELECT 1136073600.0/36/(24*60*60); ?column? ---------365.25 (1 row) You can extract the timezone hour from TIMETZ: => SELECT EXTRACT(timezone_hour FROM TIMETZ '10:30+13:30'); date_part ----------13 (1 row) See Also l DATE_PART GETDATE Returns the current system date and time as a TIMESTAMP value. Behavior Type Stable Syntax GETDATE(); HP Vertica Analytic Database (7.0.x) Page 301 of 1539 SQL Reference Manual SQL Functions Notes l GETDATE is a stable function that requires parentheses but accepts no arguments. l This function uses the date and time supplied by the operating system on the server to which you are connected, which is the same across all servers. l GETDATE internally converts STATEMENT_TIMESTAMP() from TIMESTAMPTZ to TIMESTAMP. l This function is identical to SYSDATE(). Example => SELECT GETDATE(); GETDATE ---------------------------2011-03-07 13:21:29.497742 (1 row) See Also l Date/Time Expressions GETUTCDATE Returns the current system date and time as a TIMESTAMP value relative to UTC. Behavior Type Stable Syntax GETUTCDATE(); Notes l GETUTCDATE is a stable function that requires parentheses but accepts no arguments. l This function uses the date and time supplied by the operating system on the server to which you are connected, which is the same across all servers. l GETUTCDATE is internally converted to STATEMENT_TIMESTAMP() at TIME ZONE 'UTC'. HP Vertica Analytic Database (7.0.x) Page 302 of 1539 SQL Reference Manual SQL Functions Example => SELECT GETUTCDATE(); GETUTCDATE ---------------------------2011-03-07 20:20:26.193052 (1 row) See Also Date/Time Expressions l HOUR Extracts the hour from a DATE, TIMESTAMP, TIMESTAMPTZ, VARCHAR, or INTERVAL value. The return value is of type INTEGER. (Hour 0 is midnight to 1 a.m.) Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax HOUR ( d ) Parameters d Incoming DATE, TIMESTAMP, TIMESTAMPTZ, VARCHAR, or INTERVAL value. Examples => SELECT HOUR (TIMESTAMP 'sep 22, 2011 12:34'); HOUR -----12 (1 row) => SELECT HOUR (INTERVAL '35 12:34'); HOUR -----12 (1 row) => SELECT HOUR ('12:34'); HOUR -----12 HP Vertica Analytic Database (7.0.x) Page 303 of 1539 SQL Reference Manual SQL Functions (1 row) ISFINITE Tests for the special TIMESTAMP constant INFINITY and returns a value of type BOOLEAN. Behavior Type Immutable Syntax ISFINITE ( timestamp ) Parameters timestamp Expression of type TIMESTAMP Examples SELECT ISFINITE(TIMESTAMP '2009-02-16 21:28:30'); isfinite ---------t (1 row) SELECT ISFINITE(TIMESTAMP 'INFINITY'); isfinite ---------f (1 row) JULIAN_DAY Returns an INTEGER representing the Julian day based on an input TIMESTAMP, TIMESTAMPTZ, VARCHAR, or DATE value. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. 
Syntax JULIAN_DAY ( d ) HP Vertica Analytic Database (7.0.x) Page 304 of 1539 SQL Reference Manual SQL Functions Parameters d Is the TIMESTAMP, TIMESTAMPTZ, VARCHAR, or DATE input value. Example => SELECT JULIAN_DAY(TIMESTAMP 'sep 22, 2011 12:34'); JULIAN_DAY -----------2455827 (1 row) LAST_DAY Returns the last day of the month based on a TIMESTAMP. The TIMESTAMP can be supplied as a DATE or a TIMESTAMPTZ data type. Behavior Type Immutable, unless called with TIMESTAMPTZ, in which case it is Stable. Syntax LAST_DAY ( date ); Examples The following example returns the last day of the month, February, as 29 because 2008 was a leap year: SELECT LAST_DAY('2008-02-28 23:30 PST') "Last"; Last -----------2008-02-29 (1 row) The following example returns the last day of the month in March, after converting the string value to the specified DATE type: SELECT LAST_DAY('2003/03/15') "Last"; Last -----------2003-03-31 (1 row) The following example returns the last day of February in the specified year (not a leap year): HP Vertica Analytic Database (7.0.x) Page 305 of 1539 SQL Reference Manual SQL Functions SELECT LAST_DAY('2003/02/03') "Last"; Last -----------2003-02-28 (1 row) LOCALTIME Returns a value of type TIME representing the time of day. Behavior Type Stable Syntax LOCALTIME [ ( precision ) ] Parameters precision Causes the result to be rounded to the specified number of fractional digits in the seconds field. Notes This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp. Example SELECT LOCALTIME; time ----------------16:16:06.790771 (1 row) LOCALTIMESTAMP Returns a value of type TIMESTAMP that represents today's date and time of day. Behavior Type Stable HP Vertica Analytic Database (7.0.x) Page 306 of 1539 SQL Reference Manual SQL Functions Syntax LOCALTIMESTAMP [ ( precision ) ] Parameters precision Causes the result to be rounded to the specified number of fractional digits in the seconds field. Notes This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp. Example SELECT LOCALTIMESTAMP; timestamp -------------------------2009-02-24 14:47:48.5951 (1 row) MICROSECOND Returns an INTEGER representing the microsecond portion of an input DATE, VARCHAR, TIMESTAMP, TIMESTAMPTZ, or INTERVAL value. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax MICROSECOND ( d ) Parameters d DATE, VARCHAR, TIMESTAMP, TIMESTAMPTZ, or INTERVAL input value. HP Vertica Analytic Database (7.0.x) Page 307 of 1539 SQL Reference Manual SQL Functions Example => SELECT MICROSECOND (TIMESTAMP 'Sep 22, 2011 12:34:01.123456'); MICROSECOND ------------123456 (1 row) MIDNIGHT_SECONDS Returns an INTEGER that represents the number of seconds between midnight and the input value. The input value can be of type VARCHAR, TIME, TIMESTAMP, or TIMESTAMPTZ. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax MIDNIGHT_SECONDS ( d ) Parameters d VARCHAR, TIME, TIMESTAMP, or TIMESTAMPTZ input value. 
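The returned count is simply hours*3600 + minutes*60 + seconds, with any fractional seconds dropped. As an informal check of the arithmetic behind the examples below (this small illustration is an editorial addition, not one of the original examples), 12:34:00 works out to 45240:
=> SELECT 12*3600 + 34*60 + 0 AS seconds_since_midnight;
 seconds_since_midnight
------------------------
                  45240
(1 row)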
Example => SELECT MIDNIGHT_SECONDS('12:34:00.987654'); MIDNIGHT_SECONDS -----------------45240 (1 row) => SELECT MIDNIGHT_SECONDS(TIME '12:34:00.987654'); MIDNIGHT_SECONDS -----------------45240 (1 row) => SELECT MIDNIGHT_SECONDS (TIMESTAMP 'sep 22, 2011 12:34'); MIDNIGHT_SECONDS -----------------45240 (1 row) HP Vertica Analytic Database (7.0.x) Page 308 of 1539 SQL Reference Manual SQL Functions MINUTE Returns an INTEGER that represents the minute value of the input value. The input value can be of type VARCHAR, DATE, TIMESTAMP, TIMESTAMPTZ, or INTERVAL. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax MINUTE ( d ) Parameters d VARCHAR, DATE, TIMESTAMP, TIMESTAMPTZ, or INTERVAL input value. Example => SELECT MINUTE('12:34:03.456789'); MINUTE -------34 (1 row) =>SELECT MINUTE (TIMESTAMP 'sep 22, 2011 12:34'); MINUTE -------34 (1 row) => SELECT MINUTE(INTERVAL '35 12:34:03.456789'); MINUTE -------34 (1 row) MONTH Returns an INTEGER that represents the month portion of the input value. The input value can be of type VARCHAR, DATE, TIMESTAMP, TIMESTAMPTZ, or INTERVAL. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. HP Vertica Analytic Database (7.0.x) Page 309 of 1539 SQL Reference Manual SQL Functions Syntax MONTH( d ) Parameters d Incoming VARCHAR, DATE, TIMESTAMP, TIMESTAMPTZ, or INTERVAL value. Examples => SELECT MONTH('6-9'); MONTH ------9 (1 row) => SELECT MONTH (TIMESTAMP 'sep 22, 2011 12:34'); MONTH ------9 (1 row) => SELECT MONTH(INTERVAL '2-35' year to month); MONTH ------11 (1 row) MONTHS_BETWEEN Returns the number of months between date1 and date2 as a FLOAT8, where the input arguments can be of TIMESTAMP, DATE, or TIMESTAMPTZ type. Behavior Type Immutable for TIMESTAMP and DATE, Stable for TIMESTAMPTZ Syntax MONTHS_BETWEEN ( date1 , date2 ); HP Vertica Analytic Database (7.0.x) Page 310 of 1539 SQL Reference Manual SQL Functions Parameters date1, date2 If date1 is later than date2, the result is positive. If date1 is earlier than date2, then the result is negative. If date1 and date2 are either the same days of the month or both are the last days of their respective month, then the result is always an integer. Otherwise MONTHS_BETWEEN returns a FLOAT8 result based on a 31-day month, which considers the difference between date1 and date2. 
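To see where the fractional results in the examples below come from, note that the whole-month difference is taken first and any leftover days are then divided by 31. For 1995-01-01 to 1995-02-02 that is one whole month plus one extra day, or 1 + 1/31. The following arithmetic check is an editorial illustration, not one of the original examples:
=> SELECT 1 + 1/31.0 AS months;   -- approximately 1.03225806451613, matching the MONTHS_BETWEEN result below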
Examples The following result is an integral number of months because the dates fall on the same day of the month; it is negative because date1 is earlier than date2: => SELECT MONTHS_BETWEEN('2009-03-07 16:00'::TIMESTAMP, '2009-04-07 15:00'::TIMESTAMP); MONTHS_BETWEEN ---------------- -1 (1 row) The next example also returns an integral number of months because both days fall on the last day of their respective months: => SELECT MONTHS_BETWEEN('29Feb2000', '30Sep2000') "Months"; MONTHS -------- -7 (1 row) In the next example, and in the example that immediately follows it, MONTHS_BETWEEN() returns the number of months between date1 and date2 as a fraction because the days do not fall on the same day or on the last day of their respective months: => SELECT MONTHS_BETWEEN(TO_DATE('02-02-1995','MM-DD-YYYY'), TO_DATE('01-01-1995','MM-DD-YYYY')) "Months"; Months ------------------ 1.03225806451613 (1 row) => SELECT MONTHS_BETWEEN(TO_DATE ('2003/01/01', 'yyyy/mm/dd'), TO_DATE ('2003/03/14', 'yyyy/mm/dd')) "Months"; Months ------------------- -2.41935483870968 (1 row) The following two examples use the same date1 and date2 strings, but they are cast to different data types (TIMESTAMP and TIMESTAMPTZ). The result is the same for both statements: SELECT MONTHS_BETWEEN('2008-04-01'::timestamp, '2008-02-29'::timestamp); months_between ------------------ 1.09677419354839 (1 row) SELECT MONTHS_BETWEEN('2008-04-01'::timestamptz, '2008-02-29'::timestamptz); months_between ------------------ 1.09677419354839 (1 row) The following two examples show alternate inputs: SELECT MONTHS_BETWEEN('2008-04-01'::date, '2008-02-29'::timestamp); months_between ------------------ 1.09677419354839 (1 row) SELECT MONTHS_BETWEEN('2008-02-29'::timestamptz, '2008-04-01'::date); months_between ------------------- -1.09677419354839 (1 row) NEW_TIME Converts a TIMESTAMP value between time zones. Intervals are not permitted. Behavior Type Immutable Syntax NEW_TIME( 'timestamp' , 'timezone1' , 'timezone2') Returns TIMESTAMP Parameters timestamp The TIMESTAMP value to convert (a TIMESTAMPTZ, DATE, or character string that can be converted to a TIMESTAMP is also accepted). The value is interpreted as a timestamp in timezone1, and the function returns the equivalent timestamp in timezone2. timezone1 VARCHAR string of the form required by the TIMESTAMP AT TIMEZONE 'zone' clause. timezone1 indicates the time zone from which you want to convert timestamp. It must be a valid timezone, as listed in the field for timezone2 below. timezone2 VARCHAR string of the form required by the TIMESTAMP AT TIMEZONE 'zone' clause. timezone2 indicates the time zone into which you want to convert timestamp.
Notes The timezone arguments are character strings of the form required by the TIMESTAMP AT TIMEZONE 'zone' clause; for example: AST, ADT Atlantic Standard Time or Daylight Time BST, BDT Bering Standard Time or Daylight Time CST, CDT Central Standard Time or Daylight Time EST, EDT Eastern Standard Time or Daylight Time GMT Greenwich Mean Time HST Alaska-Hawaii Standard Time MST, MDT Mountain Standard Time or Daylight Time NST Newfoundland Standard Time PST, PDT Pacific Standard Time or Daylight Time Examples The following command converts the specified time from Eastern Standard Time to Pacific Standard Time: => SELECT NEW_TIME('05-24-12 13:48:00', 'EST', 'PST'); NEW_TIME --------------------2012-05-24 10:48:00 (1 row) This command converts the time on January 1 from Eastern Standard Time to Pacific Standard Time. Notice how the time rolls back to the previous year: HP Vertica Analytic Database (7.0.x) Page 313 of 1539 SQL Reference Manual SQL Functions => SELECT NEW_TIME('01-01-12 01:00:00', 'EST', 'PST'); NEW_TIME --------------------2011-12-31 22:00:00 (1 row) Query the current system time: => SELECT NOW(); now ------------------------------2012-05-24 08:28:10.155887-04 (1 row) => SELECT NEW_TIME('NOW', 'EDT', 'CDT'); NEW_TIME ---------------------------2012-05-24 07:28:10.155887 (1 row) The following example returns the year 45 before the Common Era in Greenwich Mean Time and converts it to Newfoundland Standard Time: => SELECT NEW_TIME('April 1, 45 BC', 'GMT', 'NST'); NEW_TIME -----------------------0045-03-31 20:30:00 BC (1 row) => SELECT NEW_TIME('April 1 2011', 'EDT', 'PDT'); NEW_TIME --------------------2011-03-31 21:00:00 (1 row) => SELECT NEW_TIME('May 24, 2012 10:00', 'Pacific/Kiritamati', 'EDT'); NEW_TIME --------------------2011-05-23 16:00:00 (1 row) NEXT_DAY Returns the date of the first instance of a particular day of the week that follows the specified date. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax NEXT_DAY( 'date', 'DOW') HP Vertica Analytic Database (7.0.x) Page 314 of 1539 SQL Reference Manual SQL Functions Parameters date VARCHAR, TIMESTAMP, TIMESTAMPTZ, or DATE. Only standard English day-names and day-name abbreviations are accepted. DOW Day of week, type CHAR/VARCHAR string or character constant. DOW is not case sensitive and trailing spaces are ignored. Examples The following example returns the date of the next Friday following the specified date. All are variations on the same query, and all return the same result: => SELECT NEXT_DAY('28-MAR-2011','FRIDAY') "NEXT DAY" FROM DUAL; NEXT DAY -----------2011-04-01 (1 row) => SELECT NEXT_DAY('March 28 2011','FRI') "NEXT DAY" FROM DUAL; NEXT DAY -----------2011-04-01 (1 row) => SELECT NEXT_DAY('3-29-11','FRI') "NEXT DAY" FROM DUAL; NEXT DAY -----------2011-04-01 (1 row) NOW [Date/Time] Returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current transaction. NOW is equivalent to CURRENT_TIMESTAMP except that it does not accept a precision parameter. Behavior Type Stable Syntax NOW() Notes This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" HP Vertica Analytic Database (7.0.x) Page 315 of 1539 SQL Reference Manual SQL Functions time, so that multiple modifications within the same transaction bear the same timestamp. 
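Because the value is fixed when the transaction starts, repeated calls within one transaction return the identical timestamp. A minimal illustration (an editorial sketch; the exact values returned depend on your system clock):
=> BEGIN;
=> SELECT NOW();   -- returns the transaction start time
=> SELECT NOW();   -- returns the same value; NOW() does not advance within the transaction
=> COMMIT;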
Example SELECT NOW(); NOW ------------------------------2010-04-01 15:31:12.144584-04 (1 row) See Also l CURRENT_TIMESTAMP OVERLAPS Returns true when two time periods overlap, false when they do not overlap. Behavior Type Stable when TIMESTAMP and TIMESTAMPTZ are both used, or when TIMESTAMPTZ is used with INTERVAL, Immutable otherwise. Syntax ( start, end ) OVERLAPS ( start, end ) ( start, interval ) OVERLAPS ( start, interval ) Parameters start DATE, TIME, or TIME STAMP value that specifies the beginning of a time period. end DATE, TIME, or TIME STAMP value that specifies the end of a time period. interval Value that specifies the length of the time period. Examples The first command returns true for an overlap in date range of 2007-02-16 through 2007-12-21 with 2007-10-30 through 2008-10-30. SELECT (DATE '2007-02-16', DATE '2007-12-21') OVERLAPS (DATE '2007-10-30', DATE '2008-10-30'); OVERLAPS ---------t HP Vertica Analytic Database (7.0.x) Page 316 of 1539 SQL Reference Manual SQL Functions (1 row) The next command returns false for an overlap in date range of 2007-02-16 through 2007-12-21 with 2008-10-30 through 2008-10-30. SELECT (DATE '2007-02-16', DATE '2007-12-21') OVERLAPS (DATE '2008-10-30', DATE '2008-10-30'); OVERLAPS ---------f (1 row) The next command returns false for an overlap in date range of 2007-02-16, 22 hours ago with 200710-30, 22 hours ago. SELECT (DATE '2007-02-16', INTERVAL '1 12:59:10') OVERLAPS (DATE '2007-10-30', INTERVAL '1 12:59:10'); overlaps ---------f (1 row) QUARTER Returns an INTEGER representing calendar quarter into which the input value falls. The input value can be of type VARCHAR, DATE, TIMESTAMP or TIMESTAMPTZ. Syntax QUARTER( d ) Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Parameters d DATE, VARCHAR, TIMESTAMP, or TIMESTAMPTZ input value. Example => SELECT QUARTER (TIMESTAMP 'sep 22, 2011 12:34'); QUARTER --------3 HP Vertica Analytic Database (7.0.x) Page 317 of 1539 SQL Reference Manual SQL Functions (1 row) ROUND [Date/Time] Rounds a TIMESTAMP, TIMESTAMPTZ, or DATE. The return value is of type TIMESTAMP. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax ROUND([TIMESTAMP | DATE] , format ) Parameters TIMESTAMP | DATE TIMESTAMP or DATE input value. HP Vertica Analytic Database (7.0.x) Page 318 of 1539 SQL Reference Manual SQL Functions format A string constant that selects the precision to which truncate the input value. Valid values are: Precision Valid values Century CC, SCC Year SYYY, YYYY, YEAR, YYY, YY,Y ISO Year IYYY, IYY, IY, I Quarter Q Month MONTH, MON, MM, RM Same day of the week as the first day of the year WW Same day of the week as the first day of the ISO year IW Same day of the week as the first day of the month W Day DDD, DD, J Starting day of the week DAY, DY, D Hour HH, HH12, HH24 Minute MI Second SS Example => SELECT ROUND(TIMESTAMP 'sep 22, 2011 12:34:00', 'dy'); ROUND --------------------2011-09-18 00:00:00 (1 row) SECOND Returns an INTEGER representing the second portion of the input value. The input value can be of type VARCHAR, TIMESTAMP, TIMESTAMPTZ, or INTERVAL. HP Vertica Analytic Database (7.0.x) Page 319 of 1539 SQL Reference Manual SQL Functions Syntax SECOND( d ) Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Parameters d VARCHAR, TIMESTAMP, TIMESTAMPTZ, or INTERVAL input value. 
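Because the return value is an INTEGER, any fractional seconds in the input are dropped; use EXTRACT(SECOND FROM ...) when you need the fractional part. A brief comparison, added here as an editorial illustration:
=> SELECT SECOND('17:12:28.5');                    -- returns 28 (fraction dropped)
=> SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');  -- returns 28.5 (fraction preserved)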
Examples => SELECT SECOND ('23:34:03.456789'); SECOND -------3 (1 row) => SELECT SECOND (TIMESTAMP 'sep 22, 2011 12:34'); SECOND -------0 (1 row) => SELECT SECOND (INTERVAL '35 12:34:03.456789'); SECOND -------3 (1 row) STATEMENT_TIMESTAMP Is similar to TRANSACTION_TIMESTAMP. It returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current statement. Behavior Type Stable Syntax STATEMENT_TIMESTAMP() HP Vertica Analytic Database (7.0.x) Page 320 of 1539 SQL Reference Manual SQL Functions Notes This function returns the start time of the current statement; the value does not change during the statement. The intent is to allow a single statement to have a consistent notion of the "current" time, so that multiple modifications within the same statement bear the same timestamp. Example SELECT STATEMENT_TIMESTAMP(); STATEMENT_TIMESTAMP ------------------------------2010-04-01 15:40:42.223736-04 (1 row) See Also l CLOCK_TIMESTAMP l TRANSACTION_TIMESTAMP SYSDATE Returns the current system date and time as a TIMESTAMP value. Behavior Type Stable Syntax SYSDATE(); Notes l SYSDATE is a stable function (called once per statement) that requires no arguments. Parentheses are optional. l This function uses the date and time supplied by the operating system on the server to which you are connected, which must be the same across all servers. l In implementation, SYSDATE converts STATEMENT_TIMESTAMP from TIMESTAMPTZ to TIMESTAMP. l This function is identical to GETDATE(). HP Vertica Analytic Database (7.0.x) Page 321 of 1539 SQL Reference Manual SQL Functions Example => SELECT SYSDATE(); sysdate ---------------------------2011-03-07 13:22:28.295802 (1 row) See Also l Date/Time Expressions TIME_SLICE Aggregates data by different fixed-time intervals and returns a rounded-up input TIMESTAMP value to a value that corresponds with the start or end of the time slice interval. Given an input TIMESTAMP value, such as '2000-10-28 00:00:01', the start time of a 3-second time slice interval is '2000-10-28 00:00:00', and the end time of the same time slice is '2000-10-28 00:00:03'. Behavior Type Immutable Syntax TIME_SLICE(expression, slice_length, [ time_unit = 'SECOND' ], [ start_or_end = 'START' ] ) Parameters expression Can be either a column of type TIMESTAMP or a (string) constant that can be parsed into a TIMESTAMP value, such as '2004-10-19 10:23:54'. HP Vertica evaluates expression on each row. slice_length Length of the slice specified in integers. Input must be a positive integer. time_unit Time unit of the slice with a default of SECOND. Domain of possible values: { HOUR, MINUTE, SECOND, MILLISECOND, MICROSECOND }. HP Vertica Analytic Database (7.0.x) Page 322 of 1539 SQL Reference Manual SQL Functions start_or_end Indicates whether the returned value corresponds to the start or end time of the time slice interval. The default is START. Domain of possible values: { START, END }. Notes l The returned value's data type is TIMESTAMP. l The corresponding SQL data type for TIMESTAMP is TIMESTAMP WITHOUT TIME ZONE. HP Vertica supports TIMESTAMP for TIME_SLICE instead of DATE and TIME data types. l TIME_SLICE exhibits the following behavior around nulls: n The system returns an error when any one of slice_length, time_unit, or start_or_end parameters is null. n When slice_length, time_unit, and start_or_end contain legal values, and expression is null, the system returns a NULL value, instead of an error. 
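For example, a NULL expression with otherwise valid arguments yields NULL rather than an error (an editorial sketch of the behavior described above):
=> SELECT TIME_SLICE(NULL::TIMESTAMP, 3);
 time_slice
------------

(1 row)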
Usage The following command returns the (default) start time of a 3-second time slice: SELECT TIME_SLICE('2009-09-19 00:00:01', 3); time_slice --------------------- 2009-09-19 00:00:00 (1 row) The following command returns the end time of a 3-second time slice: SELECT TIME_SLICE('2009-09-19 00:00:01', 3, 'SECOND', 'END'); time_slice --------------------- 2009-09-19 00:00:03 (1 row) This command uses a 3-millisecond time slice: SELECT TIME_SLICE('2009-09-19 00:00:01', 3, 'ms'); time_slice ------------------------- 2009-09-19 00:00:00.999 (1 row) This command uses a 3-microsecond time slice: SELECT TIME_SLICE('2009-09-19 00:00:01', 3, 'us'); time_slice ---------------------------- 2009-09-19 00:00:00.999999 (1 row) The next example uses a 3-second interval with an input value of '00:00:01'. To focus specifically on seconds, the example omits the date, though all values are implied as being part of the timestamp with a given input of '00:00:01': l '00:00:00' is the start of the 3-second time slice l '00:00:03' is the end of the 3-second time slice. l '00:00:03' is also the start of the second 3-second time slice. In time slice boundaries, the end value of a time slice does not belong to that time slice; it starts the next one. When the time slice interval is not a factor of 60 seconds, such as a given slice length of 9 in the following example, the slice does not always start or end on 00 seconds: SELECT TIME_SLICE('2009-02-14 20:13:01', 9); time_slice --------------------- 2009-02-14 20:12:54 (1 row) This is expected behavior, as the following properties are true for all time slices: l Equal in length l Consecutive (no gaps between them) l Non-overlapping To force the above example ('2009-02-14 20:13:01') to start at '2009-02-14 20:13:00', adjust the output timestamp values so that the remainder of 54 counts up to 60: SELECT TIME_SLICE('2009-02-14 20:13:01', 9 )+'6 seconds'::INTERVAL AS time; time --------------------- 2009-02-14 20:13:00 (1 row) Alternatively, you could use a different slice length that divides evenly into 60, such as 5: SELECT TIME_SLICE('2009-02-14 20:13:01', 5); time_slice --------------------- 2009-02-14 20:13:00 (1 row) A TIMESTAMPTZ value is implicitly cast to TIMESTAMP. For example, the following two statements have the same effect: SELECT TIME_SLICE('2009-09-23 11:12:01'::timestamptz, 3); TIME_SLICE --------------------- 2009-09-23 11:12:00 (1 row) SELECT TIME_SLICE('2009-09-23 11:12:01'::timestamptz::timestamp, 3); TIME_SLICE --------------------- 2009-09-23 11:12:00 (1 row) Examples You can use the SQL analytic functions FIRST_VALUE and LAST_VALUE to find the first/last price within each time slice group (set of rows belonging to the same time slice). This structure could be useful if you want to sample input data by choosing one row from each time slice group.
SELECT date_key, transaction_time, sales_dollar_amount,TIME_SLICE(DATE '2000-01-01' + dat e_key + transaction_time, 3), FIRST_VALUE(sales_dollar_amount) OVER (PARTITION BY TIME_SLICE(DATE '2000-01-01' + date_key + transaction_time, 3) ORDER BY DATE '2000-01-01' + date_key + transaction_time) AS first_value FROM store.store_sales_fact LIMIT 20; date_key | transaction_time | sales_dollar_amount | time_slice | first_value ----------+------------------+---------------------+---------------------+------------1 | 00:41:16 | 164 | 2000-01-02 00:41:15 | 164 1 | 00:41:33 | 310 | 2000-01-02 00:41:33 | 310 1 | 15:32:51 | 271 | 2000-01-02 15:32:51 | 271 1 | 15:33:15 | 419 | 2000-01-02 15:33:15 | 419 HP Vertica Analytic Database (7.0.x) Page 325 of 1539 SQL Reference Manual SQL Functions 1 1 1 2 3 3 3 3 3 3 4 4 4 4 4 5 (20 rows) | | | | | | | | | | | | | | | | 15:33:44 16:36:29 16:36:44 03:11:28 03:55:15 11:58:05 11:58:24 11:58:52 19:01:21 22:15:05 13:36:57 13:37:24 13:37:54 13:38:04 13:38:31 10:21:24 | | | | | | | | | | | | | | | | 193 466 250 39 375 369 174 449 201 156 -125 -251 353 426 209 488 | | | | | | | | | | | | | | | | 2000-01-02 2000-01-02 2000-01-02 2000-01-03 2000-01-04 2000-01-04 2000-01-04 2000-01-04 2000-01-04 2000-01-04 2000-01-05 2000-01-05 2000-01-05 2000-01-05 2000-01-05 2000-01-06 15:33:42 16:36:27 16:36:42 03:11:27 03:55:15 11:58:03 11:58:24 11:58:51 19:01:21 22:15:03 13:36:57 13:37:24 13:37:54 13:38:03 13:38:30 10:21:24 | | | | | | | | | | | | | | | | 193 466 250 39 375 369 174 449 201 156 -125 -251 353 426 209 488 TIME_SLICE rounds the transaction time to the 3-second slice length. The following example uses the analytic (window) OVER() clause to return the last trading price (the last row ordered by TickTime) in each 3-second time slice partition: SELECT DISTINCT TIME_SLICE(TickTime, 3), LAST_VALUE(price)OVER (PARTITION BY TIME_SLICE(T ickTime, 3) ORDER BY TickTime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING); Note: If you omit the windowing clause from an analytic clause, LAST_VALUE defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. Results can seem non-intuitive, because instead of returning the value from the bottom of the current partition, the function returns the bottom of the window, which continues to change along with the current input row that is being processed. For more information, see Using Time Series Analytics and Using SQL Analytics in the Programmer's Guide. In the next example, FIRST_VALUE is evaluated once for each input record and the data is sorted by ascending values. Use SELECT DISTINCT to remove the duplicates and return only one output record per TIME_SLICE: SELECT DISTINCT TIME_SLICE(TickTime, 3), FIRST_VALUE(price)OVER (PARTITION BY TIME_SLICE( TickTime, 3) ORDER BY TickTime ASC) FROM tick_store; TIME_SLICE | ?column? ---------------------+---------2009-09-21 00:00:06 | 20.00 2009-09-21 00:00:09 | 30.00 2009-09-21 00:00:00 | 10.00 (3 rows) The information output by the above query can also return MIN, MAX, and AVG of the trading prices within each time slice. 
SELECT DISTINCT TIME_SLICE(TickTime, 3),
   FIRST_VALUE(Price) OVER (PARTITION BY TIME_SLICE(TickTime, 3) ORDER BY TickTime ASC),
   MIN(price) OVER (PARTITION BY TIME_SLICE(TickTime, 3)),
   MAX(price) OVER (PARTITION BY TIME_SLICE(TickTime, 3)),
   AVG(price) OVER (PARTITION BY TIME_SLICE(TickTime, 3))
FROM tick_store;

See Also
• Aggregate Functions
• FIRST_VALUE [Analytic]
• LAST_VALUE [Analytic]
• TIMESERIES Clause
• TS_FIRST_VALUE
• TS_LAST_VALUE

TIMEOFDAY

Returns a text string representing the time of day.

Behavior Type
Volatile

Syntax
TIMEOFDAY()

Notes
TIMEOFDAY() returns the wall-clock time and advances during transactions.

Example
SELECT TIMEOFDAY();
              TIMEOFDAY
-------------------------------------
 Thu Apr 01 15:42:04.483766 2010 EDT
(1 row)

TIMESTAMPADD

Adds a specified number of intervals to a TIMESTAMP or TIMESTAMPTZ. The return value depends on the input, as follows:

• If starttimestamp is TIMESTAMP, the return value is of type TIMESTAMP.
• If starttimestamp is TIMESTAMPTZ, the return value is of type TIMESTAMPTZ.

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Syntax
TIMESTAMPADD ( datepart , interval , starttimestamp );

Parameters
datepart        (VARCHAR) Specifies the type of date or time interval that the interval argument counts. Can be an unquoted identifier, a quoted string, or an expression in parentheses, which evaluates to the datepart as a character string. The following table lists the valid datepart arguments.

                datepart*      Abbreviation
                YEAR           yy, yyyy
                QUARTER        qq, q
                MONTH          mm, m
                DAY            dd, d, dy, dayofyear, y
                WEEK           wk, ww
                HOUR           hh
                MINUTE         mi, n
                SECOND         ss, s
                MILLISECOND    ms
                MICROSECOND    mcs, us

                * Each of these dateparts can be prefixed with SQL_TSI_ (for example, SQL_TSI_YEAR, SQL_TSI_DAY, and so forth).

interval        The number of datepart units to add to starttimestamp.

starttimestamp  Start TIMESTAMP or TIMESTAMPTZ for the calculation.

Notes
• TIMESTAMPADD() is an immutable function with a default type of TIMESTAMP. If TIMESTAMPTZ is specified, the function is stable.
• HP Vertica accepts statements written in any of the following forms:
  TIMESTAMPADD(month, 2, t);
  TIMESTAMPADD('month', 2, t);
  If you use an expression, the expression must be enclosed in parentheses:
  TIMESTAMPADD((expression), 2, t);

Example
=> SELECT TIMESTAMPADD (SQL_TSI_MONTH, 2, ('jan 1, 2006'));
      TIMESTAMPADD
------------------------
 2006-03-01 00:00:00-05
(1 row)

See Also
• Date/Time Expressions

TIMESTAMPDIFF

Returns the difference between two TIMESTAMP or TIMESTAMPTZ values, based on the specified start and end arguments.

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Syntax
TIMESTAMPDIFF ( datepart , starttimestamp , endtimestamp );

Parameters
datepart        (VARCHAR) Returns the number of specified datepart boundaries between the specified starttimestamp and endtimestamp. Can be an unquoted identifier, a quoted string, or an expression in parentheses, which evaluates to the datepart as a character string. The following table lists the valid datepart arguments.
datepart Abbreviation year yy, yyyy quarter qq, q month mm, m day dd, d, dy, dayofyear, y week wk, ww hour hh minute mi, n second ss, s millisecond ms microsecond mcs, us starttimestamp Start TIMESTAMP for the calculation. endtimestamp End TIMESTAMP for the calculation. Notes l TIMESTAMPDIFF() is an immutable function with a default type of TIMESTAMP. If TIMESTAMPTZ is specified, the function is stable. l HP Vertica accepts statements written in any of the following forms: TIMESTAMPDIFF(year, s, e); TIMESTAMPDIFF('year', s, e); If you use an expression, the expression must be enclosed in parentheses: TIMESTAMPDIFF((expression), s, e); l Starting arguments are not included in the count, but end arguments are included. HP Vertica Analytic Database (7.0.x) Page 331 of 1539 SQL Reference Manual SQL Functions Example => SELECT TIMESTAMPDIFF ('YEAR',('jan 1, 2006 12:34:00'), ('jan 1, 2008 12:34:00')); TIMESTAMPDIFF --------------2 (1 row) See Also l Date/Time Expressions TIMESTAMP_ROUND Rounds a TIMESTAMP to a specified format. The return value is of type TIMESTAMP. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. Syntax TIMESTAMP_ROUND ( timestamp, format ) Parameters timestamp TIMESTAMP or TIMESTAMPTZ input value. HP Vertica Analytic Database (7.0.x) Page 332 of 1539 SQL Reference Manual SQL Functions format String constant that selects the precision to which truncate the input value. Valid values for format are: Precision Valid values Century CC, SCC Year SYYY, YYYY, YEAR, YYY, YY,Y ISO Year IYYY, IYY, IY, I Quarter Q Month MONTH, MON, MM, RM Same day of the week as the first day of the year WW Same day of the week as the first day of the ISO year IW Same day of the week as the first day of the month W Day DDD, DD, J Starting day of the week DAY, DY, D Hour HH, HH12, HH24 Minute MI Second SS Examples b=> SELECT TIMESTAMP_ROUND('sep 22, 2011 12:34:00', 'dy'); TIMESTAMP_ROUND --------------------2011-09-18 00:00:00 (1 row) TIMESTAMP_TRUNC Truncates a TIMESTAMP. The return value is of type TIMESTAMP. Behavior Type Immutable, except for TIMESTAMPTZ arguments where it is Stable. HP Vertica Analytic Database (7.0.x) Page 333 of 1539 SQL Reference Manual SQL Functions Syntax TIMESTAMP_TRUNC ( timestamp, format ) Parameters timestamp TIMESTAMP or TIMESTAMPTZ input value. format String constant that selects the precision to which truncate the input value. Valid values for format are: Precision Valid values Century CC, SCC Year SYYY, YYYY, YEAR, YYY, YY,Y ISO Year IYYY, IYY, IY, I Quarter Q Month MONTH, MON, MM, RM Same day of the week as the first day of the year WW Same day of the week as the first day of the ISO year IW Same day of the week as the first day of the month W Day DDD, DD, J Starting day of the week DAY, DY, D Hour HH, HH12, HH24 Minute MI Second SS Examples => SELECT TIMESTAMP_TRUNC('sep 22, 2011 12:34:00'); TIMESTAMP_TRUNC --------------------2011-09-22 00:00:00 (1 row) HP Vertica Analytic Database (7.0.x) Page 334 of 1539 SQL Reference Manual SQL Functions => SELECT TIMESTAMP_TRUNC('sep 22, 2011 12:34:00', 'dy'); TIMESTAMP_TRUNC --------------------2011-09-18 00:00:00 (1 row) TRANSACTION_TIMESTAMP Returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current transaction. TRANSACTION_TIMESTAMP is equivalent to CURRENT_TIMESTAMP except that it does not accept a precision parameter. 
Behavior Type
Stable

Syntax
TRANSACTION_TIMESTAMP()

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Example
SELECT TRANSACTION_TIMESTAMP();
     TRANSACTION_TIMESTAMP
-------------------------------
 2010-04-01 15:31:12.144584-04
(1 row)

See Also
• CLOCK_TIMESTAMP
• STATEMENT_TIMESTAMP

TRUNC [Date/Time]

Truncates a TIMESTAMP, TIMESTAMPTZ, or DATE. The return value is of type TIMESTAMP.

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Syntax
TRUNC ( [TIMESTAMP | DATE] , format )

Parameters
TIMESTAMP | DATE   TIMESTAMP or DATE input value.
format             A string constant that selects the precision to which to truncate the input value. Valid values for format are:

                   Precision                                               Valid values
                   Century                                                 CC, SCC
                   Year                                                    SYYY, YYYY, YEAR, YYY, YY, Y
                   ISO Year                                                IYYY, IYY, IY, I
                   Quarter                                                 Q
                   Month                                                   MONTH, MON, MM, RM
                   Same day of the week as the first day of the year      WW
                   Same day of the week as the first day of the ISO year  IW
                   Same day of the week as the first day of the month     W
                   Day                                                     DDD, DD, J
                   Starting day of the week                                DAY, DY, D
                   Hour                                                    HH, HH12, HH24
                   Minute                                                  MI
                   Second                                                  SS

Examples
=> SELECT TRUNC(TIMESTAMP 'sep 22, 2011 12:34:00', 'dy');
        TRUNC
---------------------
 2011-09-18 00:00:00
(1 row)

WEEK

Returns an INTEGER representing the week of the year into which the input value falls. A week starts on Sunday. January 1 is always in the first week of the year.

Syntax
WEEK ( d )

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Parameters
d   VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value.

Example
=> SELECT WEEK (TIMESTAMP 'sep 22, 2011 12:34');
 WEEK
------
   39
(1 row)

WEEK_ISO

Returns an INTEGER from 1–53 that represents the week of the year into which the input value falls. The return value is based on the ISO 8601 standard. The ISO week consists of 7 days starting on Monday and ending on Sunday. The first week of the year is the week that contains January 4.

Syntax
WEEK_ISO ( d )

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Parameters
d   VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value.

Examples
The following examples illustrate the different results returned by WEEK_ISO. The first shows that December 28, 2011 falls within week 52 of the ISO calendar:

=> SELECT WEEK_ISO (TIMESTAMP 'Dec 28, 2011 10:00:00');
 WEEK_ISO
----------
       52
(1 row)

The second example shows WEEK_ISO results for January 1, 2012. Because this date falls on a Sunday, it belongs to week 52 of the previous ISO year:

=> SELECT WEEK_ISO (TIMESTAMP 'Jan 1, 2012 10:00:00');
 WEEK_ISO
----------
       52
(1 row)

The third example shows WEEK_ISO results for January 2, 2012, which falls on a Monday. This date is in the first week of the ISO year, the week that contains January 4. The function returns week 1.
=> SELECT WEEK_ISO (TIMESTAMP 'Jan 2, 2012 10:00:00');
 WEEK_ISO
----------
        1
(1 row)

The last example shows how to combine the DAYOFWEEK_ISO, WEEK_ISO, and YEAR_ISO functions to find the ISO day of the week, week, and year:

=> SELECT DAYOFWEEK_ISO('Jan 1, 2000'), WEEK_ISO('Jan 1, 2000'), YEAR_ISO('Jan 1, 2000');
 DAYOFWEEK_ISO | WEEK_ISO | YEAR_ISO
---------------+----------+----------
             6 |       52 |     1999
(1 row)

See Also
• YEAR_ISO
• DAYOFWEEK_ISO
• http://en.wikipedia.org/wiki/ISO_8601

YEAR

Returns an INTEGER representing the year portion of the input value.

Syntax
YEAR( d )

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Parameters
d   VARCHAR, TIMESTAMP, TIMESTAMPTZ, or INTERVAL input value.

Examples
=> SELECT YEAR ('6-9');
 YEAR
------
    6
(1 row)

=> SELECT YEAR (TIMESTAMP 'sep 22, 2011 12:34');
 YEAR
------
 2011
(1 row)

=> SELECT YEAR (INTERVAL '2-35' year to month);
 YEAR
------
    4
(1 row)

YEAR_ISO

Returns an INTEGER representing the year portion of the input value. The return value is based on the ISO 8601 standard. The first week of the ISO year is the week that contains January 4.

Syntax
YEAR_ISO( d )

Behavior Type
Immutable, except for TIMESTAMPTZ arguments where it is Stable.

Parameters
d   VARCHAR, DATE, TIMESTAMP, or TIMESTAMPTZ input value.

Examples
=> SELECT YEAR_ISO (TIMESTAMP 'sep 22, 2011 12:34');
 YEAR_ISO
----------
     2011
(1 row)

The following example shows how to combine the DAYOFWEEK_ISO, WEEK_ISO, and YEAR_ISO functions to find the ISO day of the week, week, and year:

=> SELECT DAYOFWEEK_ISO('Jan 1, 2000'), WEEK_ISO('Jan 1, 2000'), YEAR_ISO('Jan 1, 2000');
 DAYOFWEEK_ISO | WEEK_ISO | YEAR_ISO
---------------+----------+----------
             6 |       52 |     1999
(1 row)

See Also
• WEEK_ISO
• DAYOFWEEK_ISO
• http://en.wikipedia.org/wiki/ISO_8601

Formatting Functions

Formatting functions provide a powerful tool set for converting various data types (DATE/TIME, INTEGER, FLOATING POINT) to formatted strings and for converting from formatted strings to specific data types. These functions all use a common calling convention:

• The first argument is the value to be formatted.
• The second argument is a template that defines the output or input format.

Note: The TO_TIMESTAMP function can take a single DOUBLE PRECISION argument.

TO_BITSTRING

Returns a VARCHAR that represents the given VARBINARY value in bitstring format.

Behavior Type
Immutable

Syntax
TO_BITSTRING ( expression )

Parameters
expression   (VARBINARY) The binary value to convert to bitstring format.

Notes
VARCHAR TO_BITSTRING(VARBINARY) converts data from binary type to character type (where the character representation is the bitstring format). This function is the inverse of BITSTRING_TO_BINARY:

TO_BITSTRING(BITSTRING_TO_BINARY(x)) = x
BITSTRING_TO_BINARY(TO_BITSTRING(x)) = x

Examples
SELECT TO_BITSTRING('ab'::BINARY(2));
   to_bitstring
------------------
 0110000101100010
(1 row)

SELECT TO_BITSTRING(HEX_TO_BINARY('0x10'));
 to_bitstring
--------------
 00010000
(1 row)

SELECT TO_BITSTRING(HEX_TO_BINARY('0xF0'));
 to_bitstring
--------------
 11110000
(1 row)

See Also
• BITCOUNT
• BITSTRING_TO_BINARY

TO_CHAR

Converts various date/time and numeric values into text strings.
Behavior Type
Stable

Syntax
TO_CHAR ( expression [, pattern ] )

Parameters
expression   (TIMESTAMP, TIMESTAMPTZ, TIME, TIMETZ, INTERVAL, INTEGER, DOUBLE PRECISION) specifies the value to convert.
pattern      [Optional] (CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting and/or Template Patterns for Numeric Formatting.

Notes
• TO_CHAR(any) casts any type, except BINARY/VARBINARY, to VARCHAR. The following example returns an error if you attempt to cast TO_CHAR to a binary data type:
  => SELECT TO_CHAR('abc'::VARBINARY);
  ERROR: cannot cast type varbinary to varchar
• TO_CHAR accepts TIME and TIMETZ data types as inputs if you explicitly cast TIME to TIMESTAMP and TIMETZ to TIMESTAMPTZ.
  => SELECT TO_CHAR(TIME '14:34:06.4','HH12:MI am');
  => SELECT TO_CHAR(TIMETZ '14:34:06.4+6','HH12:MI am');
  You can extract the timezone hour from TIMETZ:
  SELECT EXTRACT(timezone_hour FROM TIMETZ '10:30+13:30');
   date_part
  -----------
          13
  (1 row)
• Ordinary text is allowed in TO_CHAR templates and is output literally. You can put a substring in double quotes to force it to be interpreted as literal text even if it contains pattern key words. For example, in '"Hello Year "YYYY', the YYYY is replaced by the year data, but the single Y in Year is not.
• The TO_CHAR function's day-of-the-week numbering (see the 'D' template pattern) is different from that of the EXTRACT function.
• Given an INTERVAL type, TO_CHAR formats HH and HH12 as hours in a single day, while HH24 can output hours exceeding a single day, for example, >24.
• To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"'
• TO_CHAR does not support the use of V combined with a decimal point. For example: 99.9V99 is not allowed.

Examples

Expression                                                     Result
SELECT TO_CHAR(CURRENT_TIMESTAMP, 'Day, DD HH12:MI:SS');       'Tuesday  , 06  05:39:18'
SELECT TO_CHAR(CURRENT_TIMESTAMP, 'FMDay, FMDD HH12:MI:SS');   'Tuesday, 6  05:39:18'
SELECT TO_CHAR(TIMETZ '14:34:06.4+6','HH12:MI am');            04:34 am
SELECT TO_CHAR(-0.1, '99.99');                                 '  -.10'
SELECT TO_CHAR(-0.1, 'FM9.99');                                '-.1'
SELECT TO_CHAR(0.1, '0.9');                                    ' 0.1'
SELECT TO_CHAR(12, '9990999.9');                               '    0012.0'
SELECT TO_CHAR(12, 'FM9990999.9');                             '0012.'
SELECT TO_CHAR(485, '999');                                    ' 485'
SELECT TO_CHAR(-485, '999');                                   '-485'
SELECT TO_CHAR(485, '9 9 9');                                  ' 4 8 5'
SELECT TO_CHAR(1485, '9,999');                                 ' 1,485'
SELECT TO_CHAR(1485, '9G999');                                 ' 1 485'
SELECT TO_CHAR(148.5, '999.999');                              ' 148.500'
SELECT TO_CHAR(148.5, 'FM999.999');                            '148.5'
SELECT TO_CHAR(148.5, 'FM999.990');                            '148.500'
SELECT TO_CHAR(148.5, '999D999');                              ' 148,500'
SELECT TO_CHAR(3148.5, '9G999D999');                           ' 3 148,500'
SELECT TO_CHAR(-485, '999S');                                  '485-'
SELECT TO_CHAR(-485, '999MI');                                 '485-'
SELECT TO_CHAR(485, '999MI');                                  '485 '
SELECT TO_CHAR(485, 'FM999MI');                                '485'
SELECT TO_CHAR(485, 'PL999');                                  '+485'
SELECT TO_CHAR(485, 'SG999');                                  '+485'
SELECT TO_CHAR(-485, 'SG999');                                 '-485'
SELECT TO_CHAR(-485, '9SG99');                                 '4-85'
SELECT TO_CHAR(-485, '999PR');                                 '<485>'
SELECT TO_CHAR(485, 'L999');                                   'DM 485'
SELECT TO_CHAR(485, 'RN');                                     '        CDLXXXV'
SELECT TO_CHAR(485, 'FMRN');                                   'CDLXXXV'
SELECT TO_CHAR(5.2, 'FMRN');                                   'V'
SELECT TO_CHAR(482, '999th');                                  ' 482nd'
SELECT TO_CHAR(485, '"Good number:"999');                      'Good number: 485'
SELECT TO_CHAR(485.8, '"Pre:"999" Post:" .999');               'Pre: 485 Post: .800'
SELECT TO_CHAR(12, '99V999');                                  ' 12000'
SELECT TO_CHAR(12.4, '99V999');                                ' 12400'
SELECT TO_CHAR(12.45, '99V9');                                 ' 125'
SELECT TO_CHAR(-1234.567);                                     -1234.567
SELECT TO_CHAR('1999-12-25'::DATE);                            1999-12-25
SELECT TO_CHAR('1999-12-25 11:31'::TIMESTAMP);                 1999-12-25 11:31:00
SELECT TO_CHAR('1999-12-25 11:31 EST'::TIMESTAMPTZ);           1999-12-25 11:31:00-05
SELECT TO_CHAR('3 days 1000.333 secs'::INTERVAL);              3 days 00:16:40.333

TO_DATE

Converts a string value to a DATE type.

Behavior Type
Stable

Syntax
TO_DATE ( expression , pattern )

Parameters
expression   (CHAR or VARCHAR) specifies the value to convert.
pattern      (CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting and/or Template Patterns for Numeric Formatting.

Input Value Considerations
The TO_DATE function requires a CHAR or VARCHAR expression. For other input types, use TO_CHAR to perform an explicit cast to a CHAR or VARCHAR before using this function.

Notes
• To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"'
• TO_TIMESTAMP, TO_TIMESTAMP_TZ, and TO_DATE skip multiple blank spaces in the input string if the FX option is not used. FX must be specified as the first item in the template. For example:
  - TO_TIMESTAMP('2000    JUN', 'YYYY MON') is correct.
  - TO_TIMESTAMP('2000    JUN', 'FXYYYY MON') returns an error, because TO_TIMESTAMP expects one space only.
• The YYYY conversion from string to TIMESTAMP or DATE has a restriction if you use a year with more than four digits. You must use a non-digit character or template after YYYY, otherwise the year is always interpreted as four digits. For example (with the year 20000): TO_DATE('200001131', 'YYYYMMDD') is interpreted as a four-digit year. Instead, use a non-digit separator after the year, such as TO_DATE('20000-1131', 'YYYYMMDD') or TO_DATE('20000Nov31', 'YYYYMonDD').
• In conversions from string to TIMESTAMP or DATE, the CC field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y, the year is computed as (CC-1)*100 + YY.
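For example, a sketch of the CC rule combined with a two-digit YY field; the result shown is what the (CC-1)*100 + YY computation implies for century 21 and year 13, with the unspecified month and day defaulting to January 1:

=> SELECT TO_DATE('21 13', 'CC YY');
  TO_DATE
------------
 2013-01-01
(1 row)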
Examples SELECT TO_DATE('13 Feb 2000', 'DD Mon YYYY'); to_date -----------2000-02-13 (1 row) See Also l Template Pattern Modifiers for Date/Time Formatting TO_HEX Returns a VARCHAR or VARBINARY representing the hexadecimal equivalent of a number. Behavior Type Immutable Syntax TO_HEX ( number ) HP Vertica Analytic Database (7.0.x) Page 346 of 1539 SQL Reference Manual SQL Functions Parameters number (INTEGER) is the number to convert to hexadecimal Notes VARCHAR TO_HEX(INTEGER) and VARCHAR TO_HEX(VARBINARY) are similar. The function converts data from binary type to character type (where the character representation is in hexadecimal format). This function is the inverse of HEX_TO_BINARY. TO_HEX(HEX_TO_BINARY(x)) = x); HEX_TO_BINARY(TO_HEX(x)) = x); Examples SELECT TO_HEX(123456789); TO_HEX --------75bcd15 (1 row) For VARBINARY inputs, the returned value is not preceded by "0x". For example: SELECT TO_HEX('ab'::binary(2)); TO_HEX -------6162 (1 row) TO_TIMESTAMP Converts a string value or a UNIX/POSIX epoch value to a TIMESTAMP type. Behavior Type Stable Syntax TO_TIMESTAMP ( expression, pattern )TO_TIMESTAMP ( unix-epoch ) HP Vertica Analytic Database (7.0.x) Page 347 of 1539 SQL Reference Manual SQL Functions Parameters expression (CHAR or VARCHAR) is the string to convert pattern (CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting and/or Template Patterns for Numeric Formatting. unix-epoch (DOUBLE PRECISION) specifies some number of seconds elapsed since midnight UTC of January 1, 1970, not counting leap seconds. INTEGER values are implicitly cast to DOUBLE PRECISION. Notes l For more information about UNIX/POSIX time, see Wikipedia. l Millisecond (MS) and microsecond (US) values in a conversion from string to TIMESTAMP are used as part of the seconds after the decimal point. For example TO_TIMESTAMP('12:3', 'SS:MS') is not 3 milliseconds, but 300, because the conversion counts it as 12 + 0.3 seconds. This means for the format SS:MS, the input values 12:3, 12:30, and 12:300 specify the same number of milliseconds. To get three milliseconds, use 12:003, which the conversion counts as 12 + 0.003 = 12.003 seconds. Here is a more complex example: TO_TIMESTAMP('15:12:02.020.001230', 'HH:MI:SS.MS.US') is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds. l To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"' l TZ/tz are not supported patterns for the TO_TIMESTAMP function; for example, the following command returns an error: SELECT TO_TIMESTAMP('01-01-01 01:01:01+03:00','DD-MM-YY HH24:MI:SSTZ'); ERROR: "TZ"/"tz" not supported n n TO_TIMESTAMP, TO_TIMESTAMP_TZ, and TO_DATE skip multiple blank spaces in the input string if the FX option is not used. FX must be specified as the first item in the template. For example: o For example TO_TIMESTAMP('2000 JUN', 'YYYY MON') is correct. o TO_TIMESTAMP('2000 JUN', 'FXYYYY MON') returns an error, because TO_ TIMESTAMP expects one space only. The YYYY conversion from string to TIMESTAMP or DATE has a restriction if you use a year HP Vertica Analytic Database (7.0.x) Page 348 of 1539 SQL Reference Manual SQL Functions with more than four digits. You must use a non-digit character or template after YYYY, otherwise the year is always interpreted as four digits. 
For example (with the year 20000): TO_DATE('200001131', 'YYYYMMDD') is interpreted as a four-digit year Instead, use a non-digit separator after the year, such as TO_DATE('20000-1131', 'YYYYMMDD') or TO_DATE('20000Nov31', 'YYYYMonDD'). n In conversions from string to TIMESTAMP or DATE, the CC field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y then the year is computed as (CC–1) *100+YY. Examples => SELECT TO_TIMESTAMP('13 Feb 2009', 'DD Mon YYY'); TO_TIMESTAMP --------------------1200-02-13 00:00:00 (1 row) => SELECT TO_TIMESTAMP(200120400); TO_TIMESTAMP --------------------1976-05-05 01:00:00 (1 row) See Also l Template Pattern Modifiers for Date/Time Formatting TO_TIMESTAMP_TZ Converts a string value or a UNIX/POSIX epoch value to a TIMESTAMP WITH TIME ZONE type. Behavior Type Immutable if single argument form, Stable otherwise. Syntax TO_TIMESTAMP_TZ ( expression, pattern )TO_TIMESTAMP ( unix-epoch ) Parameters expression (CHAR or VARCHAR) is the string to convert HP Vertica Analytic Database (7.0.x) Page 349 of 1539 SQL Reference Manual SQL Functions pattern (CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting and/or Template Patterns for Numeric Formatting. unix-epoch (DOUBLE PRECISION) specifies some number of seconds elapsed since midnight UTC of January 1, 1970, not counting leap seconds. INTEGER values are implicitly cast to DOUBLE PRECISION. Notes l For more information about UNIX/POSIX time, see Wikipedia. l Millisecond (MS) and microsecond (US) values in a conversion from string to TIMESTAMP are used as part of the seconds after the decimal point. For example TO_TIMESTAMP('12:3', 'SS:MS') is not 3 milliseconds, but 300, because the conversion counts it as 12 + 0.3 seconds. This means for the format SS:MS, the input values 12:3, 12:30, and 12:300 specify the same number of milliseconds. To get three milliseconds, use 12:003, which the conversion counts as 12 + 0.003 = 12.003 seconds. Here is a more complex example: TO_TIMESTAMP('15:12:02.020.001230', 'HH:MI:SS.MS.US') is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds. l To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"' n n TO_TIMESTAMP, TO_TIMESTAMP_TZ, and TO_DATE skip multiple blank spaces in the input string if the FX option is not used. FX must be specified as the first item in the template. For example: o For example TO_TIMESTAMP('2000 JUN', 'YYYY MON') is correct. o TO_TIMESTAMP('2000 JUN', 'FXYYYY MON') returns an error, because TO_ TIMESTAMP expects one space only. The YYYY conversion from string to TIMESTAMP or DATE has a restriction if you use a year with more than four digits. You must use a non-digit character or template after YYYY, otherwise the year is always interpreted as four digits. For example (with the year 20000): TO_DATE('200001131', 'YYYYMMDD') is interpreted as a four-digit year Instead, use a non-digit separator after the year, such as TO_DATE('20000-1131', 'YYYYMMDD') or TO_DATE('20000Nov31', 'YYYYMonDD'). n In conversions from string to TIMESTAMP or DATE, the CC field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y then the year is computed as (CC–1) *100+YY. 
HP Vertica Analytic Database (7.0.x) Page 350 of 1539 SQL Reference Manual SQL Functions Examples => SELECT TO_TIMESTAMP_TZ('13 Feb 2009', 'DD Mon YYY'); TO_TIMESTAMP_TZ -----------------------1200-02-13 00:00:00-05 (1 row) => SELECT TO_TIMESTAMP_TZ(200120400); TO_TIMESTAMP_TZ -----------------------1976-05-05 01:00:00-04 (1 row) See Also l Template Pattern Modifiers for Date/Time Formatting TO_NUMBER Converts a string value to DOUBLE PRECISION. Behavior Type Stable Syntax TO_NUMBER ( expression, [ pattern ] ) Parameters expression (CHAR or VARCHAR) specifies the string to convert. pattern (CHAR or VARCHAR) Optional parameter specifies an output pattern string using the Template Patterns for Date/Time Formatting and/or Template Patterns for Numeric Formatting. If omitted, function returns a floating point. Notes To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"' HP Vertica Analytic Database (7.0.x) Page 351 of 1539 SQL Reference Manual SQL Functions Examples SELECT TO_CHAR(2009, 'rn'), TO_NUMBER('mmix', 'rn'); TO_CHAR | TO_NUMBER -----------------+----------mmix | 2009 (1 row) It the pattern parameter is omitted, the function returns a floating point. SELECT TO_NUMBER('-123.456e-01'); TO_NUMBER -----------12.3456 HP Vertica Analytic Database (7.0.x) Page 352 of 1539 SQL Reference Manual SQL Functions Template Patterns for Date/Time Formatting In an output template string (for TO_CHAR), there are certain patterns that are recognized and replaced with appropriately-formatted data from the value to be formatted. Any text that is not a template pattern is copied verbatim. Similarly, in an input template string (for anything other than TO_CHAR), template patterns identify the parts of the input data string to be looked at and the values to be found there. Note: HP Vertica uses the ISO 8601:2004 style for date/time fields in HP Vertica *.log files. For example, 2008-09-16 14:40:59.123 TM Moveout:0x2aaaac002180 [Txn] Certain modifiers can be applied to any template pattern to alter its behavior as described in Template Pattern Modifiers for Date/Time Formatting. Pattern Description HH Hour of day (00-23) HH12 Hour of day (01-12) HH24 Hour of day (00-23) MI Minute (00-59) SS Second (00-59) MS Millisecond (000-999) US Microsecond (000000-999999) SSSS Seconds past midnight (0-86399) AM or A.M. or PM or P.M. Meridian indicator (uppercase) am or a.m. or pm or p.m. Meridian indicator (lowercase) Y,YYY Year (4 and more digits) with comma YYYY Year (4 and more digits) YYY Last 3 digits of year YY Last 2 digits of year Y Last digit of year IYYY ISO year (4 and more digits) IYY Last 3 digits of ISO year IY Last 2 digits of ISO year HP Vertica Analytic Database (7.0.x) Page 353 of 1539 SQL Reference Manual SQL Functions Pattern Description I Last digits of ISO year BC or B.C. or AD or A.D. Era indicator (uppercase) bc or b.c. or ad or a.d. 
Era indicator (lowercase) MONTH Full uppercase month name (blank-padded to 9 chars) Month Full mixed-case month name (blank-padded to 9 chars) month Full lowercase month name (blank-padded to 9 chars) MON Abbreviated uppercase month name (3 chars) Mon Abbreviated mixed-case month name (3 chars) mon Abbreviated lowercase month name (3 chars) MM Month number (01-12) DAY Full uppercase day name (blank-padded to 9 chars) Day Full mixed-case day name (blank-padded to 9 chars) day full lowercase day name (blank-padded to 9 chars) DY Abbreviated uppercase day name (3 chars) Dy Abbreviated mixed-case day name (3 chars) dy Abbreviated lowercase day name (3 chars) DDD Day of year (001-366) DD Day of month (01-31) for TIMESTAMP Note: For INTERVAL, DD is day of year (001-366) because day of month is undefined. D Day of week (1-7; Sunday is 1) W Week of month (1-5) (The first week starts on the first day of the month.) WW Week number of year (1-53) (The first week starts on the first day of the year.) IW ISO week number of year (The first Thursday of the new year is in week 1.) CC Century (2 digits) HP Vertica Analytic Database (7.0.x) Page 354 of 1539 SQL Reference Manual SQL Functions Pattern Description J Julian Day (days since January 1, 4712 BC) Q Quarter RM Month in Roman numerals (I-XII; I=January) (uppercase) rm Month in Roman numerals (i-xii; i=January) (lowercase) TZ Time-zone name (uppercase) tz Time-zone name (lowercase) Template Pattern Modifiers for Date/Time Formatting Certain modifiers can be applied to any template pattern to alter its behavior. For example, FMMonth is the Month pattern with the FM modifier. Modifier Description AM Time is before 12:00 AT Ignored JULIAN, JD, J Next field is Julian Day FM prefix Fill mode (suppress padding blanks and zeros) For example: FMMonth Note: The FM modifier suppresses leading zeros and trailing blanks that would otherwise be added to make the output of a pattern fixed width. FX prefix Fixed format global option For example: FX Month DD Day ON Ignored PM Time is on or after 12:00 T Next field is time TH suffix Uppercase ordinal number suffix For example: DDTH th suffix Lowercase ordinal number suffix For example: DDth TM prefix Translation mode (print localized day and month names based on lc_messages). For example: TMMonth HP Vertica Analytic Database (7.0.x) Page 355 of 1539 SQL Reference Manual SQL Functions Template Patterns for Numeric Formatting Pattern Description 9 Value with the specified number of digits 0 Value with leading zeros . (period) Decimal point , (comma) Group (thousand) separator PR Negative value in angle brackets S Sign anchored to number (uses locale) L Currency symbol (uses locale) D Decimal point (uses locale) G Group separator (uses locale) MI Minus sign in specified position (if number < 0) PL Plus sign in specified position (if number > 0) SG Plus/minus sign in specified position RN Roman numeral (input between 1 and 3999) TH or th Ordinal number suffix V Shift specified number of digits (see notes) EEEE Scientific notation (not implemented yet) Usage l A sign formatted using SG, PL, or MI is not anchored to the number; for example: n TO_CHAR(-12, 'S9999') produces ' -12' n TO_CHAR(-12, 'MI9999') produces '- 12' l 9 results in a value with the same number of digits as there are 9s. If a digit is not available it outputs a space. l TH does not convert values less than zero and does not convert fractional numbers. l V effectively multiplies the input values by 10^n, where n is the number of digits following V. 
TO_CHAR does not support the use of V combined with a decimal point. For example: 99.9V99 is not allowed. HP Vertica Analytic Database (7.0.x) Page 356 of 1539 SQL Reference Manual SQL Functions Geospatial Package SQL Functions The HP Vertica Geospatial package contains a suite of geospatial SQL functions you can install to report on and analyze geographic location data. To Install the Geospatial package: Run the install.sh script that appears in the /opt/vertica/packages/geospatial directory. Note: If you choose to install the Geospatial package in a directory other than the default, be sure to set the GEOSPATIAL_HOME environment variable to reflect the correct directory. Contents of the Geospatial Package When you installed HP Vertica, the RPM saved the Geospatial package files here: /opt/vertica/packages/geospatial This directory contains these files: install.sh Installs the Geospatial package. readme.txt Contains instructions for installing the package. This directory also contains these directories: /src Contains this file: l geospatial.sql—This file contains all the functions that are installed with the package. The file describes the calculations used for each function, and provides examples. This file also contains links to helpful sites that provide more information about standards and calculations. /examples Contains this file: l regions_demo.sql—This file is a demo, intended to illustrate a simple use case: determine the New England state in which a given point lies. Using Geospatial Package SQL Functions For high-level descriptions of all of the functions included in the package, see Geospatial SQL Functions. For more detailed information about each function and for links to other useful information, see /opt/vertica/packages/geospatial/src/geospatial.sql. HP Vertica Analytic Database (7.0.x) Page 357 of 1539 SQL Reference Manual SQL Functions Using Built-In HP Vertica Functions for Geospatial Analysis Four mathematical functions, automatically installed with HP Vertica, perform geospatial operations: l DEGREES l DISTANCE l DISTANCEV l RADIANS These functions are not part of the Vertica Geospatial Package; they are installed with HP Vertica. Geospatial SQL Functions With the Geospatial Package, HP Vertica provides SQL functions that let you find geographic constants to use in your calculations and analysis. These functions appear in the file /opt/vertica/packages/geospatial/src/geospatial.sql. You can use these functions as they are supplied; you can also edit the geospatial.sql file to change the calculations according to your needs. If you do modify the geospatial functions, be sure to save a copy of your changes in a private location so that your changes are not lost if you upgrade your HP Vertica installation. Note that an upgrade does not overwrite any functions already loaded in your database; the upgrade only overwrites only the .sql file containing the function definitions. These functions measure distances in kilometers and angles in fractional degrees, unless stated otherwise. Of the several possible definitions of latitude, the geodetic latitude is most commonly used; and this is what the HP Vertica Geospatial Package uses. Latitude goes from +90 degrees at the North Pole to –90 at the South Pole. Longitude 0 is near Greenwich, England. It increases going east to +180 degrees, and decreases going west to –180 degrees. True bearings are measured clockwise from north. For more information, see: http://en.wikipedia.org/wiki/Latitude. 
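After the package is installed, its functions can be called from any SQL session. For example, one simple way to confirm that the functions were created is to call one of the package's constant functions, such as WGS84_a (described later in this section):

=> SELECT WGS84_a();
   wgs84_a
-------------
 6378.137000
(1 row)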
WGS-84 SQL Functions The following functions return constants determined by the World Geodetic System (WGS) standard, WGS-84. l WGS84_a() l WGS84_b() l WGS84_e2() HP Vertica Analytic Database (7.0.x) Page 358 of 1539 SQL Reference Manual SQL Functions l WGS84_f() l WGS84_if() Earth Radius, Radius of Curvature, and Bearing SQL Functions These functions return the earth's radius, radius of curvature, and bearing values. l RADIUS_r(lat) l WGS84_r1() l RADIUS_SI() l RADIUS_M(lat) l RADIUS_N (lat) l RADIUS_Ra (lat) l RADIUS_Rv (lat) l RADIUS_Rc (lat,bearing) l BEARING (lat1,lon1,lat2,lon2) l RADIUS_LON (lat) ECEF Conversion SQL Functions The following functions convert values to Earth-Centered, Earth-Fixed (ECEF) values. The ECEF system represents positions on x, y, and z axes in meters. (0,0,0) is the center of the earth; x is toward latitude 0, longitude 0; y is toward latitude 0, longitude 90 degrees; and z is toward the North Pole. The height above mean sea level (h) is also in meters. l ECEF_x (lat,lon,h) l ECEF_y (lat,lon,h) l ECEF_z (lat,lon,h) l ECEF_chord (lat1,lon1,h1,lat2,lon2,h2) l CHORD_TO_ARC (chord) HP Vertica Analytic Database (7.0.x) Page 359 of 1539 SQL Reference Manual SQL Functions Bounding Box SQL Functions These functions determine whether points are within a bounding box, a rectangular area whose edges are latitude and longitude lines. Bounding box methods allow you to narrow your focus, and they work best on HP Vertica projections that are sorted by latitude, or by region (such as swtate) and then by latitude. These methods also work on projections sorted by longitude. l BB_WITHIN (lat,lon,llat,llon,ulat,rlon) l LAT_WITHIN (lat,lat0,d) l LON_WITHIN (lon,lat0,lon0,d) l LL_WITHIN (lat,lon,lat0,lon0,d) l DWITHIN (lat,lon,lat0,lon0,d) l LLD_WITHIN (lat,lon,lat0,lon0,d) l ISLEFT (x0,y0,x1,y1,x2,y2) l RAYCROSSING (x0,y0,x1,y1,x2,y2) Miles/Kilometer Conversion SQL Functions These functions convert miles to kilometers and kilometers to miles: l MILES2KM (miles) l KM2MILES (km) BB_WITHIN Determines whether a point (lat, lon) falls within a bounding box defined by its lower-left and upperright corners. The return value has the type BOOLEAN. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax BB_WITHIN (lat,lon,llat,llon,ulat,rlon) HP Vertica Analytic Database (7.0.x) Page 360 of 1539 SQL Reference Manual SQL Functions Parameters lat A value of type DOUBLE PRECISION indicating the latitude of a given point. lon A value of type DOUBLE PRECISION indicating the longitude of a given point. llat A value of type DOUBLE PRECISION indicating the latitude used to define the lower-left corner of the bounding box. llon A value of type DOUBLE PRECISION indicating the longitude used to define the lowerleft corner of the bounding box. ulat A value of type DOUBLE PRECISION indicating the latitude used to define the upper-right corner of the bounding box. rlon A value of type DOUBLE PRECISION indicating the longitude used in defining the upperright corner of the bounding box. Example The following example determines that the point (14,30) is not contained in the bounding box defined by (23.0,45) and (13,37): => SELECT BB_WITHIN(14,30,23.0,45,13,37); BB_WITHIN ----------f (1 row) The following example determines that the point (14,30) is contained in the bounding box defined by (13.0,45) and (23,37). 
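These conversion functions can also be combined in a single statement. For example, the following sketch estimates the surface distance in meters between two points at sea level by converting the straight-line chord returned by ECEF_chord into an arc length; the coordinates are hypothetical and the output is not shown (see the per-function examples later in this section):

=> SELECT CHORD_TO_ARC(ECEF_chord(42.36, -71.06, 0, 40.71, -74.01, 0)) AS arc_length_m;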
=> SELECT BB_WITHIN(14,30,13.0,45,23,37); BB_WITHIN ----------t (1 row) BEARING Returns the approximate bearing from a starting point to an ending point, in degrees. It assumes a flat earth and is useful only for short distances. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. HP Vertica Analytic Database (7.0.x) Page 361 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax BEARING (lat1,lon1,lat2,lon2) Parameters lat1 A value of type DOUBLE PRECISION indicating latitude of the starting point. lon1 A value of type DOUBLE PRECISION indicating longitude of the starting point. lat2 A value of type DOUBLE PRECISION indicating latitude of the ending point. lon2 A value of type DOUBLE PRECISION indicating longitude of the ending point. Example The following examples calculate the bearing, in degrees, from point (45,13) to (33,3) and from point (33,3) to (45,13): => SELECT BEARING(45,13,33,3); BEARING -------------------140.194428907735 (1 row) => SELECT BEARING(33,3,45,13); BEARING -----------------39.8055710922652 (1 row) CHORD_TO_ARC Converts a chord (the straight line between two points) in meters to a geodesic arc length, also in meters. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 362 of 1539 SQL Reference Manual SQL Functions Syntax CHORD_TO_ARC (chord) Parameters chord A value of type DOUBLE PRECISION indicating chord length (in meters) Example The following examples convert the length of a chord to the length of its geodesic arc: => SELECT CHORD_TO_ARC(120); CHORD_TO_ARC -----------------120.000000001774 (1 row) => SELECT CHORD_TO_ARC(12000); CHORD_TO_ARC -----------------12000.0017738474 (1 row) => SELECT CHORD_TO_ARC(1200000); CHORD_TO_ARC -----------------1201780.96402514 (1 row) DWITHIN Determines whether a point (lat,lon) is within a circle of radius d kilometers centered at a given point (lat0,lon0). The return value has the type BOOLEAN. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax DWITHIN (lat,lon,lat0,lon0,d) HP Vertica Analytic Database (7.0.x) Page 363 of 1539 SQL Reference Manual SQL Functions Parameters lat A value of type DOUBLE PRECISION indicating a given latitude. lon A value of type DOUBLE PRECISION indicating a given longitude. lat0 A value of type DOUBLE PRECISION indicating the latitude of the center point of a circle. lon0 A value of type DOUBLE PRECISION indicating the longitude of the center point of a circle. d A value of type DOUBLE PRECISION indicating the radius of the circle (in kilometers). Example The following examples determine that the point (13.6,43.5) is within 3880–3890 kilometers of the radius of a circle centered at (48.5,45.5): => SELECT DWITHIN(13.6,43.5,48.5,45.5,3880); DWITHIN --------f (1 row) => SELECT DWITHIN(13.6,43.5,48.5,45.5,3890); DWITHIN --------t (1 row) ECEF_CHORD Calculates the distance in meters between two ECEF coordinates. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. 
See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax ECEF_CHORD (lat1,lon1,h1,lat2,lon2,h2) HP Vertica Analytic Database (7.0.x) Page 364 of 1539 SQL Reference Manual SQL Functions Parameters lat A value of type DOUBLE PRECISION indicating the latitude of one end point of the line. lon1 A value of type DOUBLE PRECISION indicating the longitude of one end point of the line. h1 A value of type DOUBLE PRECISION indicating the height above sea level (in meters) of one end point of the line. lat2 A value of type DOUBLE PRECISION indicating the latitude of one end point of the line. lon2 A value of type DOUBLE PRECISION indicating the longitude of one end point of the line. h2 A value of type DOUBLE PRECISION indicating the height of one end point of the line. Example The following example calculates the distance in meters between the ECEF coordinates (12,10.0,14) and (12,-10,17): => SELECT ECEF_chord (-12,10.0,14,12,-10,17); ECEF_chord -----------------3411479.93992789 (1 row) ECEF_x Converts a given latitude, longitude, and height into the ECEF x coordinate in meters. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax ECEF_x (lat,lon,h) Parameters lat A value of type DOUBLE PRECISION indicating latitude. HP Vertica Analytic Database (7.0.x) Page 365 of 1539 SQL Reference Manual SQL Functions lon A value of type DOUBLE PRECISION indicating longitude. h A value of type DOUBLE PRECISION indicating height. Example The following example calculates the ECEF x coordinate in meters for the point (-12,13.2,0): => SELECT ECEF_x(-12,13.2,0); ECEF_x -----------------6074803.56179976 (1 row) ECEF_y Converts a given latitude, longitude, and height into the ECEF y coordinate in meters. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax ECEF_y (lat,lon,h) Parameters lat A value of type DOUBLE PRECISION indicating latitude. lon A value of type DOUBLE PRECISION indicating longitude. h A value of type DOUBLE PRECISION indicating height. Example The following example calculates the ECEF y coordinate in meters for the point (12.0,-14.2,12): => SELECT ECEF_y(12.0,-14.2,12); ECEF_y ------------------- HP Vertica Analytic Database (7.0.x) Page 366 of 1539 SQL Reference Manual SQL Functions -1530638.12327962 (1 row) ECEF_z Converts a given latitude, longitude, and height into the ECEF z coordinate in meters. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax ECEF_Z (lat,lon,h) Parameters lat A value of type DOUBLE PRECISION indicating latitude. lon A value of type DOUBLE PRECISION indicating longitude. h A value of type DOUBLE PRECISION indicating height. Example The following example calculates the ECEF z coordinate in meters for the point (12.0,-14.2,12): => SELECT ECEF_z(12.0,-14.2,12); ECEF_z -----------------1317405.02616989 (1 row) ISLEFT Determines whether a given point is anywhere to the left of a directed line that goes though two specified points. 
The return value has the type FLOAT and has the following possible values: l > 0: The point is to the left of the line. l = 0: The point is on the line. HP Vertica Analytic Database (7.0.x) Page 367 of 1539 SQL Reference Manual SQL Functions l < 0: The point is to the right of the line. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax ISLEFT (x0,y0,x1,y1,x2,y2) Parameters x0 A value of type DOUBLE PRECISION indicating the latitude of the first point through which the directed line passes. y0 A value of type DOUBLE PRECISION indicating the longitude of the the first point through which the directed line passes. x1 A value of type DOUBLE PRECISION indicating the latitude of the second point through which the directed line passes. y1 A value of type DOUBLE PRECISION indicating the longitude of the the second point through which the directed line passes. x2 A value of type DOUBLE PRECISION indicating the latitude of the point whose position you are evaluating. y2 A value of type DOUBLE PRECISION indicating the longitude of a whose position you are evaluating. Example The following example determines that (0,0) is to the left of the line that passes through (1,1) and (2,3): => SELECT ISLEFT(1,1,2,3,0,0); ISLEFT -------1 (1 row) KM2MILES Converts a value from kilometers to miles. The return value is of type DOUBLE PRECISION. HP Vertica Analytic Database (7.0.x) Page 368 of 1539 SQL Reference Manual SQL Functions This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax KM2MILES (km) Parameters km A value of type DOUBLE PRECISION indicating the number of kilometers you want to convert. Example The following example converts 1.0 kilometers to miles: => SELECT KM2MILES(1.0); KM2MILES ------------------0.621371192237334 (1 row) LAT_WITHIN Determines whether a certain latitude (lat) is within d kilometers of another latitude point (lat0), independent of longitude. The return value has the type BOOLEAN. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax LAT_WITHIN (lat,lat0,d) HP Vertica Analytic Database (7.0.x) Page 369 of 1539 SQL Reference Manual SQL Functions Parameters lat A value of type DOUBLE PRECISION indicating a given latitude. lat0 A value of type DOUBLE PRECISION indicating the latitude of the point to which you are comparing the first latitude. d A value of type DOUBLE PRECISION indicating the number of kilometers that determines the range you are evaluating. Example The following examples determine that latitude 12 is between 220 and 230 kilometers of latitude 14.0: => SELECT LAT_WITHIN(12,14.0,220); LAT_WITHIN -----------f (1 row) => SELECT LAT_WITHIN(12,14.0,230); LAT_WITHIN -----------t (1 row) LL_WITHIN Determines whether a point (lat, lon) is within a bounding box whose sides are 2d kilometers long, centered at a given point (lat0, lon0). The return value has the type BOOLEAN. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. 
Behavior Type Immutable Syntax LL_WITHIN (lat,lon,lat0,lon0,d); Parameters lat A value of type DOUBLE PRECISION indicating a given latitude. HP Vertica Analytic Database (7.0.x) Page 370 of 1539 SQL Reference Manual SQL Functions lon A value of type DOUBLE PRECISION indicating a given longitude. lat0 A value of type DOUBLE PRECISION indicating the latitude of the center point of the bounding box. lon0 A value of type DOUBLE PRECISION indicating the longitude of the center point of the bounding box. d A value of type DOUBLE PRECISION indicating the length of half the side of the box. Example The following examples determine that the point (16,15) is within a bounding box centered at (12,13) whose sides are between 880 and 890 kilometers long: => SELECT LL_WITHIN(16,15,12,13.0,440); LL_WITHIN ----------f (1 row) => SELECT LL_WITHIN(16,15,12,13.0,445); LL_WITHIN ----------t (1 row) LLD_WITHIN Determines whether a point (lat,lon) is within a circle of radius d kilometers centered at a given point (lat0,lon0). LLD_WITHIN is a faster, but less accurate version of DWITHIN. The return value has the type BOOLEAN. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax LLD_WITHIN (lat,lon,lat0,lon0,d) Parameters lat A value of type DOUBLE PRECISION indicating a given latitude. HP Vertica Analytic Database (7.0.x) Page 371 of 1539 SQL Reference Manual SQL Functions lon A value of type DOUBLE PRECISION indicating a given longitude. lat0 A value of type DOUBLE PRECISION indicating the latitude of the center point of a circle. lon0 A value of type DOUBLE PRECISION indicating the longitude of the center point of a circle. d A value of type DOUBLE PRECISION indicating the radius of the circle (in kilometers). Example The following examples determine that the point (13.6,43.5) is within a circle centered at (48.5,45.5) whose radius is between 3800 and 3900 kilometers long: => SELECT LLD_WITHIN(13.6,43.5,48.5,45.5,3800); LLD_WITHIN -----------f (1 row) => SELECT LLD_WITHIN(13.6,43.5,48.5,45.5,3900); LLD_WITHIN -----------t (1 row) LON_WITHIN Determines whether a longitude (lon) is within d kilometers of a given point (lat0, lon0). The return value has the type BOOLEAN. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax LON_WITHIN (lon,lat0,lon0,d) Parameters lon A value of type DOUBLE PRECISION indicating a given longitude. lat0 A value of type DOUBLE PRECISION indicating the latitude of the point to which you want to compare the lon value. HP Vertica Analytic Database (7.0.x) Page 372 of 1539 SQL Reference Manual SQL Functions lon0 A value of type DOUBLE PRECISION indicating the longitude of the point to which you want to compare the lon value. d A value of type DOUBLE PRECISION indicating the distance, in kilometers, that defines your range. Example The following examples determine that the longitude 15 is between 1600 and 1700 kilometers from the point (16,0): => SELECT LON_WITHIN(15,16,0,1600); LON_WITHIN -----------f (1 row) => SELECT LON_WITHIN(15,16,0,1700); LON_WITHIN -----------t (1 row) MILES2KM Converts a value from miles to kilometers. The return value is of type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. 
See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax MILES2KM (miles) Parameters miles A value of type DOUBLE PRECISION indicating the number of miles you want to convert. Example The following example converts 1.0 miles to kilometers: HP Vertica Analytic Database (7.0.x) Page 373 of 1539 SQL Reference Manual SQL Functions => SELECT MILES2KM(1.0); MILES2KM ---------1.609344 (1 row) RADIUS_LON Returns the radius of the circle of longitude in kilometers at a given latitude. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RADIUS_LON (lat) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the radius. Example The following example calculates the circle of longitude in kilometers at a latitude of 45: => SELECT RADIUS_LON(45); RADIUS_LON --------------------4517.59087884893 (1 row) RADIUS_M Returns the earth's radius of curvature in kilometers along the meridian at the given latitude. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. HP Vertica Analytic Database (7.0.x) Page 374 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax RADIUS_M (lat) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the radius of curvature. Example The following example calculates the earth's radius of curvature in kilometers along the meridian at latitude –90 (the South Pole): => SELECT RADIUS_M(-90); RADIUS_M ----------------6399.5936257585 (1 row) RADIUS_N Returns the earth's radius of curvature in kilometer normal to the meridian at a given latitude. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RADIUS_N (lat) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the radius of curvature. HP Vertica Analytic Database (7.0.x) Page 375 of 1539 SQL Reference Manual SQL Functions Example The following example calculates the earth's radius of curvature in kilometers normal to the meridian at latitude –90 (the South Pole): => SELECT RADIUS_N(-90); RADIUS_N -----------------6399.59362575849 (1 row) RADIUS_R Returns the WGS-84 radius of the earth (to the center of mass) in kilometers at a given latitude. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RADIUS_R (lat) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the earth's radius. Example The following example calculates the WGS-84 radius of the earth in kilometers at latitude –90 (the South Pole): => SELECT RADIUS_R(-90); RADIUS_R -----------------6356.75231424518 (1 row) HP Vertica Analytic Database (7.0.x) Page 376 of 1539 SQL Reference Manual SQL Functions RADIUS_Ra Returns the earth's average radius of curvature in kilometers at a given latitude. 
This function is the geometric mean of RADIUS_M and RADIUS_N. (RADIUS_Rv is a faster approximation of this function.) The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RADIUS_Ra (lat) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the radius of curvature. Example The following example calculates the earth's average radius of curvature in kilometers at latitude – 90 (the South Pole): => SELECT RADIUS_Ra(-90); RADIUS_Ra -----------------6399.59362575849 (1 row) RADIUS_Rc Returns the earth's radius of curvature in kilometers at a given bearing measured clockwise from north. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 377 of 1539 SQL Reference Manual SQL Functions Syntax RADIUS_Rc (lat, bearing) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the radius of curvature. bearing A value of type DOUBLE PRECISION indicating a given bearing. Example The following example measures the earth's radius of curvature in kilometers at latitude 45, with a bearing of 45 measured clockwise from north: => SELECT RADIUS_Rc(45,45); RADIUS_Rc -----------------6378.09200754445 (1 row) RADIUS_Rv Returns the earth's average radius of curvature in kilometers at a given latitude. This value is the geometric mean of RADIUS_M and RADIUS_N. This function is a fast approximation of RADIUS_ Ra. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RADIUS_Rv (lat) Parameters lat A value of type DOUBLE PRECISION indicating latitude at which you want to measure the radius of curvature. HP Vertica Analytic Database (7.0.x) Page 378 of 1539 SQL Reference Manual SQL Functions Example The following example calculates the earth's average radius of curvature in kilometers at latitude – 90 (the South Pole): => SELECT RADIUS_Rv(-90); RADIUS_Rv -----------------6399.59362575849 (1 row) RADIUS_SI Returns the International System of Units (SI) radius based on the nautical mile. (A nautical mile is a unit of length about one minute of arc of latitude measured along any meridian, or about one minute of arc of longitude measured at the equator.) The return value has the type NUMERIC. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RADIUS_SI () Example The following example calculates the SI radius based on the nautical mile: => SELECT RADIUS_SI(); RADIUS_SI --------------------6366.70701949370750 (1 row) RAYCROSSING Determines whether a ray traveling to the right from point (x2,y2), in the direction of increasing x, intersects a directed line segment that starts at point (x0,y0) and ends at point (x1,y1). This function returns: HP Vertica Analytic Database (7.0.x) Page 379 of 1539 SQL Reference Manual SQL Functions l 0 if the ray does not intersect the directed line segment. 
l 1 if the ray intersects the line and y1 is above y0. l –1 if the ray intersects the line and y1 is below or equal to y0. The return value has the type DOUBLE PRECISION. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax RAYCROSSING (x0,y0,x1,y1,x2,y2) Parameters x0 A value of type DOUBLE PRECISION indicating the latitude of the starting point of the line segment. y0 A value of type DOUBLE PRECISION indicating the longitude of the starting point of the line segment x1 A value of type DOUBLE PRECISION indicating the latitude of the ending point of the line segment. y1 A value of type DOUBLE PRECISION indicating the longitude of the the ending point of the line segment. x2 A value of type DOUBLE PRECISION indicating the latitude of the point from which the ray starts. y2 A value of type DOUBLE PRECISION indicating the longitude of the point from which the ray starts. Example The following example checks if a line traveling to the right from the point (0,0) intersects the line from (1,1) to (2,3): => SELECT RAYCROSSING(1,1,2,3,0,0); RAYCROSSING ------------- HP Vertica Analytic Database (7.0.x) Page 380 of 1539 SQL Reference Manual SQL Functions 0 (1 row) The following example checks if a line traveling to the right from the point (0,2) intersects the line from (1,1) to (2,3): => SELECT RAYCROSSING(1,1,2,3,0,2); RAYCROSSING ------------1 (1 row) The following example checks if a line traveling to the right from the point (0,2) intersects the line from (1,3) to (2,1): => SELECT RAYCROSSING(1,3,2,1,0,2); RAYCROSSING -------------1 (1 row) WGS84_a Returns the length, in kilometers, of the earth's semi-major axis. The return value is of type NUMERIC. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax WGS84_a () Example => SELECT WGS84_a(); wgs84_a ------------6378.137000 (1 row) HP Vertica Analytic Database (7.0.x) Page 381 of 1539 SQL Reference Manual SQL Functions WGS84_b Returns the WGS-84 semi-minor axis length value in kilometers. The return value is of type NUMERIC. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax WGS84_b () Example => SELECT WGS84_b(); WGS84_b --------------------6356.75231424517950 (1 row) WGS84_e2 Returns the WGS-84 eccentricity squared value. The return value is of type NUMERIC. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax WGS84_e2 () Example => SELECT WGS84_e2(); WGS84_e2 ----------------------.00669437999014131700 HP Vertica Analytic Database (7.0.x) Page 382 of 1539 SQL Reference Manual SQL Functions (1 row) WGS84_f Returns the WGS-84 flattening value. The return value is of type NUMERIC. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax WGS84_f () Example => SELECT WGS84_f(); WGS84_f ----------------------.00335281066474748072 (1 row) WGS84_if Returns the WGS-84 inverse flattening value. The return value is of type NUMERIC. 
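By definition, the inverse flattening is the reciprocal of the flattening returned by WGS84_f. As an illustrative cross-check (a sketch, not one of the manual's original examples), dividing 1 by WGS84_f should reproduce the WGS84_if value to within rounding:

=> SELECT 1 / WGS84_f() AS computed_if, WGS84_if() AS reported_if;

Both columns should show approximately 298.257223563.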
This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax WGS84_if () Example => SELECT WGS84_if(); WGS84_if HP Vertica Analytic Database (7.0.x) Page 383 of 1539 SQL Reference Manual SQL Functions --------------298.257223563 (1 row) WGS84_r1 Returns the International Union of Geodesy and Geophysics (IUGG) mean radius of the earth, in kilometers. The return value is of type NUMERIC. This function is available only if you install the HP Vertica Geospatial Package. See Geospatial Package SQL Functions for information on installing the package. Behavior Type Immutable Syntax WGS84_r1 () Example => SELECT WGS84_r1(); WGS84_r1 --------------------6371.00877141505983 (1 row) HP Vertica Analytic Database (7.0.x) Page 384 of 1539 SQL Reference Manual SQL Functions IP Conversion Functions IP functions perform conversion, calculation, and manipulation operations on IP, network, and subnet addresses. INET_ATON Returns an integer that represents the value of the address in host byte order, given the dotted-quad representation of a network address as a string. Behavior Type Immutable Syntax INET_ATON ( expression ) Parameters expression (VARCHAR) is the string to convert. Notes The following syntax converts an IPv4 address represented as the string A to an integer I. INET_ ATON trims any spaces from the right of A, calls the Linux function inet_pton, and converts the result from network byte order to host byte order using ntohl. INET_ATON(VARCHAR A) -> INT8 I If A is NULL, too long, or inet_pton returns an error, the result is NULL. Examples The generated number is always in host byte order. In the following example, the number is calculated as 209×256^3 + 207×256^2 + 224×256 + 40. > SELECT INET_ATON('209.207.224.40'); inet_aton -----------3520061480 (1 row) > SELECT INET_ATON('1.2.3.4'); HP Vertica Analytic Database (7.0.x) Page 385 of 1539 SQL Reference Manual SQL Functions inet_aton ----------16909060 (1 row) > SELECT TO_HEX(INET_ATON('1.2.3.4')); to_hex --------1020304 (1 row) See Also l INET_NTOA INET_NTOA Returns the dotted-quad representation of the address as a VARCHAR, given a network address as an integer in network byte order. Behavior Type Immutable Syntax INET_NTOA ( expression ) Parameters expression (INTEGER) is the network address to convert. Notes The following syntax converts an IPv4 address represented as integer I to a string A. INET_NTOA converts I from host byte order to network byte order using htonl, and calls the Linux function inet_ntop. INET_NTOA(INT8 I) -> VARCHAR A If I is NULL, greater than 2^32 or negative, the result is NULL. HP Vertica Analytic Database (7.0.x) Page 386 of 1539 SQL Reference Manual SQL Functions Examples > SELECT INET_NTOA(16909060); inet_ntoa ----------1.2.3.4 (1 row) > SELECT INET_NTOA(03021962); inet_ntoa ------------0.46.28.138 (1 row) See Also l INET_ATON V6_ATON Converts an IPv6 address represented as a character string to a binary string. Behavior Type Immutable Syntax V6_ATON ( expression ) Parameters expression (VARCHAR) is the string to convert. Notes The following syntax converts an IPv6 address represented as the character string A to a binary string B. V6_ATON trims any spaces from the right of A and calls the Linux function inet_pton. V6_ATON(VARCHAR A) -> VARBINARY(16) B If A has no colons it is prepended with '::ffff:'. If A is NULL, too long, or if inet_pton returns an error, the result is NULL. 
HP Vertica Analytic Database (7.0.x) Page 387 of 1539 SQL Reference Manual SQL Functions Examples SELECT V6_ATON('2001:DB8::8:800:200C:417A'); v6_aton -----------------------------------------------------\001\015\270\000\000\000\000\000\010\010\000 \014Az (1 row) SELECT V6_ATON('1.2.3.4'); v6_aton -----------------------------------------------------------------\000\000\000\000\000\000\000\000\000\000\377\377\001\002\003\004 (1 row) SELECT TO_HEX(V6_ATON('2001:DB8::8:800:200C:417A')); to_hex ---------------------------------20010db80000000000080800200c417a (1 row) SELECT V6_ATON('::1.2.3.4'); v6_aton -----------------------------------------------------------------\000\000\000\000\000\000\000\000\000\000\000\000\001\002\003\004 (1 row) See Also l V6_NTOA V6_NTOA Converts an IPv6 address represented as varbinary to a character string. Behavior Type Immutable Syntax V6_NTOA ( expression ) Parameters expression (VARBINARY) is the binary string to convert. Notes The following syntax converts an IPv6 address represented as VARBINARY B to a string A. HP Vertica Analytic Database (7.0.x) Page 388 of 1539 SQL Reference Manual SQL Functions V6_NTOA right-pads B to 16 bytes with zeros, if necessary, and calls the Linux function inet_ntop. V6_NTOA(VARBINARY B) -> VARCHAR A If B is NULL or longer than 16 bytes, the result is NULL. HP Vertica automatically converts the form '::ffff:1.2.3.4' to '1.2.3.4'. Examples > SELECT V6_NTOA(' \001\015\270\000\000\000\000\000\010\010\000 \014Az'); v6_ntoa --------------------------2001:db8::8:800:200c:417a (1 row) > SELECT V6_NTOA(V6_ATON('1.2.3.4')); v6_ntoa --------1.2.3.4 (1 row) > SELECT V6_NTOA(V6_ATON('::1.2.3.4')); v6_ntoa ----------::1.2.3.4 (1 row) See Also l V6_ATON V6_SUBNETA Calculates a subnet address in CIDR (Classless Inter-Domain Routing) format from a binary or alphanumeric IPv6 address. Behavior Type Immutable Syntax V6_SUBNETA ( expression1, expression2 ) Parameters expression1 (VARBINARY or VARCHAR) is the string to calculate. expression2 (INTEGER) is the size of the subnet. HP Vertica Analytic Database (7.0.x) Page 389 of 1539 SQL Reference Manual SQL Functions Notes The following syntax calculates a subnet address in CIDR format from a binary or varchar IPv6 address. V6_SUBNETA masks a binary IPv6 address B so that the N leftmost bits form a subnet address, while the remaining rightmost bits are cleared. It then converts to an alphanumeric IPv6 address, appending a slash and N. V6_SUBNETA(BINARY B, INT8 N) -> VARCHAR C The following syntax calculates a subnet address in CIDR format from an alphanumeric IPv6 address. V6_SUBNETA(VARCHAR A, INT8 N) -> V6_SUBNETA(V6_ATON(A), N) -> VARCHAR C Examples > SELECT V6_SUBNETA(V6_ATON('2001:db8::8:800:200c:417a'), 28); v6_subneta --------------2001:db0::/28 (1 row) See Also l V6_SUBNETN V6_SUBNETN Calculates a subnet address in CIDR (Classless Inter-Domain Routing) format from a varbinary or alphanumeric IPv6 address. Behavior Type Immutable Syntax V6_SUBNETN ( expression1, expression2 ) HP Vertica Analytic Database (7.0.x) Page 390 of 1539 SQL Reference Manual SQL Functions Parameters expression1 (VARBINARY or VARCHAR) is the string to calculate. Notes: l V6_SUBNETN( , ) returns VARBINARY. OR l expression2 V6_SUBNETN( , ) returns VARBINARY, after using V6_ATON to convert the string to . (INTEGER) is the size of the subnet. Notes The following syntax masks a BINARY IPv6 address B so that the N left-most bits of S form a subnet address, while the remaining right-most bits are cleared. 
V6_SUBNETN right-pads B to 16 bytes with zeros, if necessary, and masks B, preserving its N-bit subnet prefix.

V6_SUBNETN(VARBINARY B, INT8 N) -> VARBINARY(16) S

If B is NULL or longer than 16 bytes, or if N is not between 0 and 128 inclusive, the result is NULL. S = [B]/N in Classless Inter-Domain Routing (CIDR) notation.

The following syntax masks an alphanumeric IPv6 address A so that the N leftmost bits form a subnet address, while the remaining rightmost bits are cleared.

V6_SUBNETN(VARCHAR A, INT8 N) -> V6_SUBNETN(V6_ATON(A), N) -> VARBINARY(16) S

Example

This example returns VARBINARY, after using V6_ATON to convert the VARCHAR string to VARBINARY:

> SELECT V6_SUBNETN(V6_ATON('2001:db8::8:800:200c:417a'), 28);
                          v6_subnetn
---------------------------------------------------------------
 \001\015\260\000\000\000\000\000\000\000\000\000\000\000\000

See Also

- V6_ATON
- V6_SUBNETA

V6_TYPE

Characterizes a binary or alphanumeric IPv6 address B as an integer type.

Behavior Type

Immutable

Syntax

V6_TYPE ( expression )

Parameters

expression   (VARBINARY or VARCHAR) is the address to characterize.

Notes

V6_TYPE(VARBINARY B) returns INT8 T.
V6_TYPE(VARCHAR A) -> V6_TYPE(V6_ATON(A)) -> INT8 T

The IPv6 types are defined in the Network Working Group's IP Version 6 Addressing Architecture memo:

GLOBAL      = 0   Global unicast addresses
LINKLOCAL   = 1   Link-Local unicast (and Private-Use) addresses
LOOPBACK    = 2   Loopback
UNSPECIFIED = 3   Unspecified
MULTICAST   = 4   Multicast

IPv4-mapped and IPv4-compatible IPv6 addresses are also interpreted, as specified in IPv4 Global Unicast Address Assignments.

- For IPv4, Private-Use is grouped with Link-Local.
- If B is VARBINARY, it is right-padded to 16 bytes with zeros, if necessary.
- If B is NULL or longer than 16 bytes, the result is NULL.

Details

IPv4 (either kind):
0.0.0.0/8        UNSPECIFIED
10.0.0.0/8       LINKLOCAL
127.0.0.0/8      LOOPBACK
169.254.0.0/16   LINKLOCAL
172.16.0.0/12    LINKLOCAL
192.168.0.0/16   LINKLOCAL
224.0.0.0/4      MULTICAST
others           GLOBAL

IPv6:
::0/128          UNSPECIFIED
::1/128          LOOPBACK
fe80::/10        LINKLOCAL
ff00::/8         MULTICAST
others           GLOBAL

Examples

> SELECT V6_TYPE(V6_ATON('192.168.2.10'));
 v6_type
---------
       1
(1 row)

> SELECT V6_TYPE(V6_ATON('2001:db8::8:800:200c:417a'));
 v6_type
---------
       0
(1 row)

See Also

- INET_ATON
- IP Version 6 Addressing Architecture
- IPv4 Global Unicast Address Assignments

Mathematical Functions

Some of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same data type as its argument. The functions working with DOUBLE PRECISION data can vary in accuracy and behavior in boundary cases depending on the host system.

See Also

- Template Pattern Modifiers for Date/Time Formatting

ABS

Returns the absolute value of the argument. The return value has the same data type as the argument.

Behavior Type

Immutable

Syntax

ABS ( expression )

Parameters

expression   Is a value of type INTEGER or DOUBLE PRECISION

Examples

SELECT ABS(-28.7);
 abs
------
 28.7
(1 row)

ACOS

Returns a DOUBLE PRECISION value representing the trigonometric inverse cosine of the argument.
HP Vertica Analytic Database (7.0.x) Page 394 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax ACOS ( expression ) Parameters expression Is a value of type DOUBLE PRECISION Example SELECT ACOS (1); acos -----0 (1 row) ASIN Returns a DOUBLE PRECISION value representing the trigonometric inverse sine of the argument. Behavior Type Immutable Syntax ASIN ( expression ) Parameters expression Is a value of type DOUBLE PRECISION HP Vertica Analytic Database (7.0.x) Page 395 of 1539 SQL Reference Manual SQL Functions Example SELECT ASIN(1); asin ----------------1.5707963267949 (1 row) ATAN Returns a DOUBLE PRECISION value representing the trigonometric inverse tangent of the argument. Behavior Type Immutable Syntax ATAN ( expression ) Parameters expression Is a value of type DOUBLE PRECISION Example SELECT ATAN(1); atan ------------------0.785398163397448 (1 row) ATAN2 Returns a DOUBLE PRECISION value representing the trigonometric inverse tangent of the arithmetic dividend of the arguments. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 396 of 1539 SQL Reference Manual SQL Functions Syntax ATAN2 ( quotient, divisor ) Parameters quotient Is an expression of type DOUBLE PRECISION representing the quotient divisor Is an expression of type DOUBLE PRECISION representing the divisor Example SELECT ATAN2(2,1); ATAN2 -----------------1.10714871779409 (1 row) CBRT Returns the cube root of the argument. The return value has the type DOUBLE PRECISION. Behavior Type Immutable Syntax CBRT ( expression ) Parameters expression Value of type DOUBLE PRECISION Examples SELECT CBRT(27.0); cbrt -----3 (1 row) HP Vertica Analytic Database (7.0.x) Page 397 of 1539 SQL Reference Manual SQL Functions CEILING (CEIL) Rounds the returned value up to the next whole number. Any expression that contains even a slight decimal is rounded up. Behavior Type Immutable Syntax CEILING ( expression )CEIL ( expression ) Parameters expression Is a value of type INTEGER or DOUBLE PRECISION Notes CEILING is the opposite of FLOOR, which rounds the returned value down: => SELECT CEIL(48.01) AS ceiling, FLOOR(48.01) AS floor; ceiling | floor ---------+------49 | 48 (1 row) Examples => SELECT CEIL(-42.8); CEIL ------42 (1 row) SELECT CEIL(48.01); CEIL -----49 (1 row) COS Returns a DOUBLE PRECISION value representing the trigonometric cosine of the argument. HP Vertica Analytic Database (7.0.x) Page 398 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax COS ( expression ) Parameters expression Is a value of type DOUBLE PRECISION Example SELECT COS(-1); cos -----------------0.54030230586814 (1 row) COT Returns a DOUBLE PRECISION value representing the trigonometric cotangent of the argument. Behavior Type Immutable Syntax COT ( expression ) Parameters expression Is a value of type DOUBLE PRECISION Example SELECT COT(1); cot ------------------0.642092615934331 HP Vertica Analytic Database (7.0.x) Page 399 of 1539 SQL Reference Manual SQL Functions (1 row) DEGREES Converts an expression from RADIANS to fractional degrees, or from degrees, minutes, and seconds to fractional degrees. The return value has the type DOUBLE PRECISION. Behavior Type Immutable Syntax 1 DEGREES (radians) Syntax 2 DEGREES (degrees, minutes, seconds) Parameters radians A unit of angular measure, 2π radians is equal to a full rotation. degrees A unit of angular measure, equal to 1/360 of a full rotation. minutes A unit of angular measurement, representing 1/60 of a degree. 
seconds A unit of angular measurement, representing 1/60 of a minute. Examples SELECT DEGREES(0.5); DEGREES -----------------28.6478897565412 (1 row) SELECT DEGREES(1,2,3); DEGREES -----------------1.03416666666667 (1 row) HP Vertica Analytic Database (7.0.x) Page 400 of 1539 SQL Reference Manual SQL Functions DISTANCE Returns the distance (in kilometers) between two points. You specify the latitude and longitude of both the starting point and the ending point. You can also specify the radius of curvature for greater accuracy when using an ellipsoidal model. Behavior Type Immutable Syntax DISTANCE ( lat0, lon0, lat1, lon1, radius_of_curvature ) Parameters lat0 Specifies the latitude of the starting point. lon0 Specifies the longitude of the starting point. lat1 Specifies the latitude of the ending point lon1 Specifies the longitude of the ending point. radius_of_curvature Specifies the radius of the curvature of the earth at the midpoint between the starting and ending points. This parameter allows for greater accuracy when using an ellipsoidal earth model. If you do not specify this parameter, it defaults to the WGS-84 average r1 radius, about 6371.009 km. Example This example finds the distance in kilometers for 1 degree of longitude at latitude 45 degrees, assuming earth is spherical. SELECT DISTANCE(45,0, 45,1); DISTANCE ---------------------78.6262959272162 (1 row) DISTANCEV Returns the distance (in kilometers) between two points using the Vincenty formula. Because the Vincenty formula includes the parameters of the WGS-84 ellipsoid model, you need not specify a radius of curvature. You specify the latitude and longitude of both the starting point and the ending point. This function is more accurate, but will be slower, than the DISTANCE function. HP Vertica Analytic Database (7.0.x) Page 401 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax DISTANCEV (lat0, lon0, lat1, lon1); Parameters lat0 Specifies the latitude of the starting point. lon0 Specifies the longitude of the starting point. lat1 Specifies the latitude of the ending point. lon1 Specifies the longitude of the ending point. Example This example finds the distance in kilometers for 1 degree of longitude at latitude 45 degrees, assuming earth is ellipsoidal. SELECT DISTANCEV(45,0, 45,1); distanceV -----------------78.8463347095916 (1 row) EXP Returns the exponential function, e to the power of a number. The return value has the same data type as the argument. Behavior Type Immutable Syntax EXP ( exponent ) Parameters exponent Is an expression of type INTEGER or DOUBLE PRECISION HP Vertica Analytic Database (7.0.x) Page 402 of 1539 SQL Reference Manual SQL Functions Example SELECT EXP(1.0); exp -----------------2.71828182845905 (1 row) FLOOR Rounds the returned value down to the next whole number. For example, each of these functions evaluates to 5: floor(5.01) floor(5.5) floor(5.99) Behavior Type Immutable Syntax FLOOR ( expression ) Parameters expression Is an expression of type INTEGER or DOUBLE PRECISION. 
Notes FLOOR is the opposite of CEILING, which rounds the returned value up: => SELECT FLOOR(48.01) AS floor, CEIL(48.01) AS ceiling; floor | ceiling -------+--------48 | 49 (1 row) Examples => SELECT FLOOR((TIMESTAMP '2005-01-17 10:00' - TIMESTAMP '2005-01-01') / INTERVAL '7'); HP Vertica Analytic Database (7.0.x) Page 403 of 1539 SQL Reference Manual SQL Functions FLOOR ------2 (1 row) => SELECT FLOOR(-42.8); FLOOR -------43 (1 row) => SELECT FLOOR(42.8); FLOOR ------42 (1 row) Although the following example looks like an INTEGER, the number on the left is 2^49 as an INTEGER, but the number on the right is a FLOAT: => SELECT 1<<49, FLOOR(1 << 49); ?column? | floor -----------------+----------------562949953421312 | 562949953421312 (1 row) Compare the above example to: => SELECT 1<<50, FLOOR(1 << 50); ?column? | floor ------------------+---------------------1125899906842624 | 1.12589990684262e+15 (1 row) HASH Calculates a hash value over its arguments, producing a value in the range 0 <= x < 263 (two to the sixty-third power or 2^63). Behavior Type Immutable Syntax HASH ( expression [ ,... ] ) Parameters expression Is an expression of any data type. For the purpose of hash segmentation, each expression is a column reference . HP Vertica Analytic Database (7.0.x) Page 404 of 1539 SQL Reference Manual SQL Functions Notes l The HASH() function is used to provide projection segmentation over a set of nodes in a cluster and takes up to 32 arguments, usually column names, and selects a specific node for each row based on the values of the columns for that row. HASH (Col1, Col2). l If your data is fairly regular and you want more even distribution than you get with HASH, consider using MODULARHASH() for project segmentation. Examples SELECT HASH(product_price, product_cost) FROM product_dimension WHERE product_price = '11'; hash --------------------4157497907121511878 1799398249227328285 3250220637492749639 (3 rows) See Also l MODULARHASH LN Returns the natural logarithm of the argument. The return data type is the same as the argument. Behavior Type Immutable Syntax LN ( expression ) Parameters expression Is an expression of type INTEGER or DOUBLE PRECISION Example SELECT LN(2); HP Vertica Analytic Database (7.0.x) Page 405 of 1539 SQL Reference Manual SQL Functions ln ------------------0.693147180559945 (1 row) LOG Returns the logarithm to the specified base of the argument. The return data type is the same as the argument. Behavior Type Immutable Syntax LOG ( [ base, ] expression ) Parameters base Specifies the base (default is base 10) expression Is an expression of type INTEGER or DOUBLE PRECISION Examples SELECT LOG(2.0, 64); log ----6 (1 row) SELECT LOG(100); log ----2 (1 row) MOD Returns the remainder of a division operation. MOD is also called modulo. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 406 of 1539 SQL Reference Manual SQL Functions Syntax MOD( expression1, expression2 ) Parameters expression1 Specifies the dividend (INTEGER, NUMERIC, or FLOAT) expression2 Specifies the divisor (type same as dividend) Notes When computing mod(N,M), the following rules apply: l If either N or M is the null value, then the result is the null value. l If M is zero, then an exception condition is raised: data exception — division by zero. l Otherwise, the result is the unique exact numeric value R with scale 0 (zero) such that all of the following are true: n R has the same sign as N. n The absolute value of R is less than the absolute value of M. 
n N = M * K + R for some exact numeric value K with scale 0 (zero). Examples SELECT MOD(9,4); mod ----1 (1 row) SELECT MOD(10,3); mod ----1 (1 row) SELECT MOD(-10,3); mod -----1 (1 row) SELECT MOD(-10,-3); mod ----- HP Vertica Analytic Database (7.0.x) Page 407 of 1539 SQL Reference Manual SQL Functions -1 (1 row) SELECT MOD(10,-3); mod ----1 (1 row) MOD( , 0) gives an error: => SELECT MOD(6.2,0); ERROR: numeric division by zero MODULARHASH Calculates a hash value over its arguments for the purpose of projection segmentation. In all other uses, returns 0. If you can hash segment your data using a column with a regular pattern, such as a sequential unique identifier, MODULARHASH distributes the data more evenly than HASH, which distributes data using a normal statistical distribution. Behavior Type Immutable Syntax MODULARHASH ( expression [ ,... ] ) Parameters expression Is a column reference of any data type. Notes The MODULARHASH() function takes up to 32 arguments, usually column names, and selects a specific node for each row based on the values of the columns for that row. Example CREATE PROJECTION fact_ts_2 (f_price, f_cid, f_tid, f_cost, f_date) AS (SELECT price, cid, tid, cost, dwdate FROM fact) HP Vertica Analytic Database (7.0.x) Page 408 of 1539 SQL Reference Manual SQL Functions SEGMENTED BY MODULARHASH(dwdate) ALL NODES OFFSET 2; See Also l HASH PI Returns the constant pi (Π), the ratio of any circle's circumference to its diameter in Euclidean geometry The return type is DOUBLE PRECISION. Behavior Type Immutable Syntax PI() Examples SELECT PI(); pi -----------------3.14159265358979 (1 row) POWER (or POW) Returns a DOUBLE PRECISION value representing one number raised to the power of another number. You can use either POWER or POW as the function name. Behavior Type Immutable Syntax POWER ( expression1, expression2 ) HP Vertica Analytic Database (7.0.x) Page 409 of 1539 SQL Reference Manual SQL Functions Parameters expression1 Is an expression of type DOUBLE PRECISION that represents the base expression2 Is an expression of type DOUBLE PRECISION that represents the exponent Example SELECT POWER(9.0, 3.0); power ------729 (1 row) RADIANS Returns a DOUBLE PRECISION value representing an angle expressed in radians. You can express the input angle in DEGREES, and optionally include minutes and seconds. Behavior Type Immutable Syntax RADIANS (degrees [, minutes, seconds]) Parameters degrees A unit of angular measurement, representing 1/360 of a full rotation. minutes A unit of angular measurement, representing 1/60 of a degree. seconds A unit of angular measurement, representing 1/60 of a minute. Examples SELECT RADIANS(45); RADIANS ------------------0.785398163397448 (1 row) HP Vertica Analytic Database (7.0.x) Page 410 of 1539 SQL Reference Manual SQL Functions SELECT RADIANS (1,2,3); RADIANS ------------------0.018049613347708 (1 row) RANDOM Returns a uniformly-distributed random number x, where 0 <= x < 1. Behavior Type Volatile Syntax RANDOM() Parameters RANDOM has no arguments. Its result is a FLOAT8 data type (also called DOUBLE PRECISION). Notes Typical pseudo-random generators accept a seed, which is set to generate a reproducible pseudorandom sequence. HP Vertica, however, distributes SQL processing over a cluster of nodes, where each node generates its own independent random sequence. Results depending on RANDOM are not reproducible because the work might be divided differently across nodes. 
Therefore, HP Vertica automatically generates truly random seeds for each node each time a request is executed and does not provide a mechanism for forcing a specific seed. Examples In the following example, the result is a float, which is >= 0 and < 1.0: SELECT RANDOM(); random ------------------0.211625560652465 (1 row) RANDOMINT Returns an INT8 value, and accepts a positive integer (n). RANDOMINT(n) returns one of the n integers from 0 through n – 1. HP Vertica Analytic Database (7.0.x) Page 411 of 1539 SQL Reference Manual SQL Functions Behavior Type Volatile Syntax RANDOMINT ( n ) Example In the following example, the result is an INT8, which is >= 0 and < n, randomly chosen from the set {0,1,2,3,4}. SELECT RANDOMINT(5); RANDOMINT ---------3 (1 row) Following are other examples of using this function: dbt=> select randomint (-2); ERROR 4163: Non-positive value supplied to randomint: -2 dbt=> select randomint(0); dbt=> select randomint(1); 0 dbt=> select randomint(21); randomint ----------15 (1 row) dbt=> select randomint(233333333333321); randomint ---------------21875589868430 (1 row) ROUND Rounds a value to a specified number of decimal places, retaining the original scale and precision. Fractions greater than or equal to .5 are rounded up. Fractions less than .5 are rounded down (truncated). Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 412 of 1539 SQL Reference Manual SQL Functions Syntax ROUND ( expression [ , decimal-places ] ) Parameters expression Is an expression of type NUMERIC. decimal-places If positive, specifies the number of decimal places to display to the right of the decimal point; if negative, specifies the number of decimal places to display to the left of the decimal point. Notes NUMERIC ROUND() returns NUMERIC, retaining the original scale and precision: => SELECT ROUND(3.5); ROUND ------4.0 (1 row) The internal floating-point representation used to compute the ROUND function causes the fraction to be evaluated as 3.5, which is rounded up. Examples SELECT ROUND(2.0, 1.0 ) FROM dual; round ------2 (1 row) SELECT ROUND(12.345, 2.0 ); round ------12.35 (1 row) SELECT ROUND(3.444444444444444); ROUND ------------------3.000000000000000 (1 row) SELECT ROUND(3.14159, 3); ROUND --------- HP Vertica Analytic Database (7.0.x) Page 413 of 1539 SQL Reference Manual SQL Functions 3.14200 (1 row) SELECT ROUND(1234567, -3); round --------1235000 (1 row) SELECT ROUND(3.4999, -1); ROUND ------.0000 (1 row) SELECT employee_last_name, ROUND(annual_salary,4) FROM employee_dimension; employee_last_name | ROUND --------------------+-------Li | 1880 Rodriguez | 1704 Goldberg | 2282 Meyer | 1628 Pavlov | 3168 McNulty | 1516 Dobisz | 3006 Pavlov | 2142 Goldberg | 2268 Pavlov | 1918 Robinson | 2366 ... SIGN Returns a DOUBLE PRECISION value of -1, 0, or 1 representing the arithmetic sign of the argument. Behavior Type Immutable Syntax SIGN ( expression ) Parameters expression Is an expression of type DOUBLE PRECISION HP Vertica Analytic Database (7.0.x) Page 414 of 1539 SQL Reference Manual SQL Functions Examples SELECT SIGN(-8.4); sign ------1 (1 row) SIN Returns a DOUBLE PRECISION value representing the trigonometric sine of the argument. Behavior Type Immutable Syntax SIN ( expression ) Parameters expression Is an expression of type DOUBLE PRECISION Example SELECT SIN(30 * 2 * 3.14159 / 360); sin ------------------0.499999616987256 (1 row) SQRT Returns a DOUBLE PRECISION value representing the arithmetic square root of the argument. 
Behavior Type Immutable Syntax SQRT ( expression ) HP Vertica Analytic Database (7.0.x) Page 415 of 1539 SQL Reference Manual SQL Functions Parameters expression Is an expression of type DOUBLE PRECISION Examples SELECT SQRT(2); sqrt ----------------1.4142135623731 (1 row) TAN Returns a DOUBLE PRECISION value representing the trigonometric tangent of the argument. Behavior Type Immutable Syntax TAN ( expression ) Parameters expression Is an expression of type DOUBLE PRECISION Example SELECT TAN(30); tan -------------------6.40533119664628 (1 row) TRUNC Returns a value representing the argument fully truncated (toward zero) or truncated to a specific number of decimal places, retaining the original scale and precision. HP Vertica Analytic Database (7.0.x) Page 416 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax TRUNC ( expression [ , places ] Parameters expression Is an expression of type INTEGER or DOUBLE PRECISION that represents the number to truncate places Is an expression of type INTEGER that specifies the number of decimal places to return Notes NUMERIC TRUNC() returns NUMERIC, retaining the original scale and precision: => SELECT TRUNC(3.5); TRUNC ------3.0 (1 row) Examples => SELECT TRUNC(42.8); TRUNC ------42.0 (1 row) => SELECT TRUNC(42.4382, 2); TRUNC --------42.4300 (1 row) WIDTH_BUCKET Constructs equiwidth histograms, in which the histogram range is divided into intervals (buckets) of identical sizes. In addition, values below the low bucket return 0, and values above the high bucket return bucket_count +1. Returns an integer value. HP Vertica Analytic Database (7.0.x) Page 417 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax WIDTH_BUCKET ( expression, hist_min, hist_max, bucket_count ) Parameters expression The expression for which the histogram is created. This expression must evaluate to a numeric or datetime value or to a value that can be implicitly converted to a numeric or datetime value. If expression evaluates to null, then the expression returns null. hist_min An expression that resolves to the low boundary of bucket 1. Must also evaluate to numeric or datetime values and cannot evaluate to null. hist_max An expression that resolves to the high boundary of bucket bucket_count. Must also evaluate to a numeric or datetime value and cannot evaluate to null. bucket_count An expression that resolves to a constant, indicating the number of buckets. This expression always evaluates to a positive INTEGER. Notes l WIDTH_BUCKET divides a data set into buckets of equal width. For example, Age = 0–20, 20– 40, 40–60, 60–80. This is known as an equiwidth histogram. l When using WIDTH_BUCKET pay attention to the minimum and maximum boundary values. Each bucket contains values equal to or greater than the base value of that bucket, so that age ranges of 0–20, 20–40, and so on, are actually 0–19.99 and 20–39.999. l WIDTH_BUCKET accepts the following data types: (FLOAT and/or INT), (TIMESTAMP and/or DATE and/or TIMESTAMPTZ), or (INTERVAL and/or TIME). Examples The following example returns five possible values and has three buckets: 0 [Up to 100), 1 [100– 300), 2 [300–500), 3 [500–700), and 4 [700 and up): SELECT product_description, product_cost, WIDTH_BUCKET(product_cost, 100, 700, 3); The following example creates a nine-bucket histogram on the annual_income column for customers in Connecticut who are female doctors. 
The results return the bucket number to an “Income” column, divided into eleven buckets, including an underflow and an overflow. Note that if HP Vertica Analytic Database (7.0.x) Page 418 of 1539 SQL Reference Manual SQL Functions customers had an annual incomes greater than the maximum value, they would be assigned to an overflow bucket, 10: SELECT customer_name, annual_income, WIDTH_BUCKET (annual_income, 100000, 1000000, 9) AS "Income" FROM public.customer_dimension WHERE customer_state='CT' AND title='Dr.' AND customer_gender='Female' AND household_id < '1000' ORDER BY "Income"; In the following result set, the reason there is a bucket 0 is because buckets are numbered from 1 to bucket_count. Anything less than the given value of hist_min goes in bucket 0, and anything greater than the given value of hist_max goes in the bucket bucket_count+1. In this example, bucket 9 is empty, and there is no overflow. The value 12,283 is less than 100,000, so it goes into the underflow bucket. customer_name | annual_income | Income --------------------+---------------+-------Joanna A. Nguyen | 12283 | 0 Amy I. Nguyen | 109806 | 1 Juanita L. Taylor | 219002 | 2 Carla E. Brown | 240872 | 2 Kim U. Overstreet | 284011 | 2 Tiffany N. Reyes | 323213 | 3 Rebecca V. Martin | 324493 | 3 Betty . Roy | 476055 | 4 Midori B. Young | 462587 | 4 Martha T. Brown | 687810 | 6 Julie D. Miller | 616509 | 6 Julie Y. Nielson | 894910 | 8 Sarah B. Weaver | 896260 | 8 Jessica C. Nielson | 861066 | 8 (14 rows) See Also l NTILE [Analytic] HP Vertica Analytic Database (7.0.x) Page 419 of 1539 SQL Reference Manual SQL Functions NULL-handling Functions NULL-handling functions take arguments of any type, and their return type is based on their argument types. COALESCE Returns the value of the first non-null expression in the list. If all expressions evaluate to null, then the COALESCE function returns null. Behavior Type Immutable Syntax COALESCE ( expression1, expression2 ); COALESCE ( expression1, expression2, ... expression-n ); Parameters l COALESCE (expression1, expression2) is equivalent to the following CASE expression: CASE WHEN expression1 IS NOT NULL THEN expression1 ELSE expression2 END; l COALESCE (expression1, expression2, ... expression-n), for n >= 3, is equivalent to the following CASE expression: CASE WHEN expression1 IS NOT NULL THEN expression1ELSE COALESCE (expression2, . . . , expression-n) END; Notes COALESCE is an ANSI standard function (SQL-92). Example SELECT product_description, COALESCE(lowest_competitor_price, highest_competitor_price, HP Vertica Analytic Database (7.0.x) Page 420 of 1539 SQL Reference Manual SQL Functions average_competitor_price) AS price FROM product_dimension; product_description | price ------------------------------------+------Brand #54109 kidney beans | 264 Brand #53364 veal | 139 Brand #50720 ice cream sandwiches | 127 Brand #48820 coffee cake | 174 Brand #48151 halibut | 353 Brand #47165 canned olives | 250 Brand #39509 lamb | 306 Brand #36228 tuna | 245 Brand #34156 blueberry muffins | 183 Brand #31207 clams | 163 (10 rows) See Also l CASE Expressions l ISNULL IFNULL Returns the value of the first non-null expression in the list. IFNULL is an alias of NVL. Behavior Type Immutable Syntax IFNULL ( expression1 , expression2 ); Parameters l If expression1 is null, then IFNULL returns expression2. l If expression1 is not null, then IFNULL returns expression1. Notes l COALESCE is the more standard, more general function. l IFNULL is equivalent to ISNULL. 
HP Vertica Analytic Database (7.0.x) Page 421 of 1539 SQL Reference Manual SQL Functions l IFNULL is equivalent to COALESCE except that IFNULL is called with only two arguments. l ISNULL(a,b) is different from x IS NULL. l The arguments can have any data type supported by HP Vertica. l Implementation is equivalent to the CASE expression. For example: CASE WHEN expression1 IS NULL THEN expression2 ELSE expression1 END; l The following statement returns the value 140: SELECT IFNULL(NULL, 140) FROM employee_dimension; l The following statement returns the value 60: SELECT IFNULL(60, 90) FROM employee_dimension; Examples => SELECT IFNULL (SCORE, 0.0) FROM TESTING; IFNULL -------100.0 87.0 .0 .0 .0 (5 rows) See Also l CASE Expressions l COALESCE l NVL l ISNULL ISNULL Returns the value of the first non-null expression in the list. ISNULL is an alias of NVL. HP Vertica Analytic Database (7.0.x) Page 422 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax ISNULL ( expression1 , expression2 ); Parameters l If expression1 is null, then ISNULL returns expression2. l If expression1 is not null, then ISNULL returns expression1. Notes l COALESCE is the more standard, more general function. l ISNULL is equivalent to COALESCE except that ISNULL is called with only two arguments. l ISNULL(a,b) is different from x IS NULL. l The arguments can have any data type supported by HP Vertica. l Implementation is equivalent to the CASE expression. For example: CASE WHEN expression1 IS NULL THEN expression2 ELSE expression1 END; l The following statement returns the value 140: SELECT ISNULL(NULL, 140) FROM employee_dimension; l The following statement returns the value 60: SELECT ISNULL(60, 90) FROM employee_dimension; Examples SELECT product_description, product_price, ISNULL(product_cost, 0.0) AS cost FROM product_dimension; product_description | product_price | cost HP Vertica Analytic Database (7.0.x) Page 423 of 1539 SQL Reference Manual SQL Functions --------------------------------+---------------+-----Brand #59957 wheat bread | 405 | 207 Brand #59052 blueberry muffins | 211 | 140 Brand #59004 english muffins | 399 | 240 Brand #53222 wheat bread | 323 | 94 Brand #52951 croissants | 367 | 121 Brand #50658 croissants | 100 | 94 Brand #49398 white bread | 318 | 25 Brand #46099 wheat bread | 242 | 3 Brand #45283 wheat bread | 111 | 105 Brand #43503 jelly donuts | 259 | 19 (10 rows) See Also l CASE Expressions l COALESCE l NVL NULLIF Compares two expressions. If the expressions are not equal, the function returns the first expression (expression1). If the expressions are equal, the function returns null. Behavior Type Immutable Syntax NULLIF( expression1, expression2 ) Parameters expression1 Is a value of any data type. expression2 Must have the same data type as expr1 or a type that can be implicitly cast to match expression1. The result has the same type as expression1. Examples The following series of statements illustrates one simple use of the NULLIF function. 
Creates a single-column table t and insert some values: HP Vertica Analytic Database (7.0.x) Page 424 of 1539 SQL Reference Manual SQL Functions CREATE TABLE t (x TIMESTAMPTZ); INSERT INTO t VALUES('2009-09-04 09:14:00-04'); INSERT INTO t VALUES('2010-09-04 09:14:00-04'); Issue a select statement: SELECT x, NULLIF(x, '2009-09-04 09:14:00 EDT') FROM t; x | nullif ------------------------+-----------------------2009-09-04 09:14:00-04 | 2010-09-04 09:14:00-04 | 2010-09-04 09:14:00-04 SELECT NULLIF(1, 2); NULLIF -------1 (1 row) SELECT NULLIF(1, 1); NULLIF -------(1 row) SELECT NULLIF(20.45, 50.80); NULLIF -------20.45 (1 row) NULLIFZERO Evaluates to NULL if the value in the column is 0. Syntax NULLIFZERO(expression) Parameters expression (INTEGER, DOUBLE PRECISION, INTERVAL, or NUMERIC) Is the string to evaluate for 0 values. Example The TESTING table below shows the test scores for 5 students. Note that test scores are missing for S. Robinson and K. Johnson (NULL values appear in the Score column.) => SELECT * FROM TESTING; Name | Score -------------+------J. Doe | 100 HP Vertica Analytic Database (7.0.x) Page 425 of 1539 SQL Reference Manual SQL Functions R. Smith L. White S. Robinson K. Johnson (5 rows) | | | | 87 0 The SELECT statement below specifies that HP Vertica should return any 0 values in the Score column as Null. In the results, you can see that HP Vertica returns L. White's 0 score as Null. => SELECT Name, NULLIFZERO(Score) FROM TESTING; Name | NULLIFZERO -------------+-----------J. Doe | 100 R. Smith | 87 L. White | S. Robinson | K. Johnson | (5 rows) NVL Returns the value of the first non-null expression in the list. Behavior Type Immutable Syntax NVL ( expression1 , expression2 ); Parameters l If expression1 is null, then NVL returns expression2. l If expression1 is not null, then NVL returns expression1. Notes l COALESCE is the more standard, more general function. l NVL is equivalent to COALESCE except that NVL is called with only two arguments. l The arguments can have any data type supported by HP Vertica. l Implementation is equivalent to the CASE expression: HP Vertica Analytic Database (7.0.x) Page 426 of 1539 SQL Reference Manual SQL Functions CASE WHEN expression1 IS NULL THEN expression2 ELSE expression1 END; Examples expression1 is not null, so NVL returns expression1: SELECT NVL('fast', 'database'); nvl -----fast (1 row) expression1 is null, so NVL returns expression2: SELECT NVL(null, 'database'); nvl ---------database (1 row) expression2 is null, so NVL returns expression1: SELECT NVL('fast', null); nvl -----fast (1 row) In the following example, expression1 (title) contains nulls, so NVL returns expression2 and substitutes 'Withheld' for the unknown values: SELECT customer_name, NVL(title, 'Withheld') as title FROM customer_dimension ORDER BY title; customer_name | title ------------------------+------Alexander I. Lang | Dr. Steve S. Harris | Dr. Daniel R. King | Dr. Luigi I. Sanchez | Dr. Duncan U. Carcetti | Dr. Meghan K. Li | Dr. Laura B. Perkins | Dr. Samantha V. Robinson | Dr. Joseph P. Wilson | Mr. Kevin R. Miller | Mr. Lauren D. Nguyen | Mrs. Emily E. Goldberg | Mrs. Darlene K. Harris | Ms. HP Vertica Analytic Database (7.0.x) Page 427 of 1539 SQL Reference Manual SQL Functions Meghan J. Farmer Bettercare Ameristar Initech (17 rows) | | | | Ms. Withheld Withheld Withheld See Also l CASE Expressions l COALESCE l ISNULL l NVL2 NVL2 Takes three arguments. 
If the first argument is not NULL, it returns the second argument, otherwise it returns the third argument. The data types of the second and third arguments are implicitly cast to a common type if they don't agree, similar to COALESCE. Behavior Type Immutable Syntax NVL2 ( expression1 , expression2 , expression3 ); Parameters l If expression1 is not null, then NVL2 returns expression2. l If expression1 is null, then NVL2 returns expression3. Notes Arguments two and three can have any data type supported by HP Vertica. Implementation is equivalent to the CASE expression: CASE WHEN expression1 IS NOT NULL THEN expression2 ELSE expression3 END; Examples In this example, expression1 is not null, so NVL2 returns expression2: HP Vertica Analytic Database (7.0.x) Page 428 of 1539 SQL Reference Manual SQL Functions SELECT NVL2('very', 'fast', 'database'); nvl2 -----fast (1 row) In this example, expression1 is null, so NVL2 returns expression3: SELECT NVL2(null, 'fast', 'database'); nvl2 ---------database (1 row) In the following example, expression1 (title) contains nulls, so NVL2 returns expression3 ('Withheld') and also substitutes the non-null values with the expression 'Known': SELECT customer_name, NVL2(title, 'Known', 'Withheld') as title FROM customer_dimension ORDER BY title; customer_name | title ------------------------+------Alexander I. Lang | Known Steve S. Harris | Known Daniel R. King | Known Luigi I. Sanchez | Known Duncan U. Carcetti | Known Meghan K. Li | Known Laura B. Perkins | Known Samantha V. Robinson | Known Joseph P. Wilson | Known Kevin R. Miller | Known Lauren D. Nguyen | Known Emily E. Goldberg | Known Darlene K. Harris | Known Meghan J. Farmer | Known Bettercare | Withheld Ameristar | Withheld Initech | Withheld (17 rows) See Also l CASE Expressions l COALESCE l COALESCE ZEROIFNULL Evaluates to 0 if the column is NULL. HP Vertica Analytic Database (7.0.x) Page 429 of 1539 SQL Reference Manual SQL Functions Syntax ZEROIFNULL(expression) Parameters expression (INTEGER, DOUBLE PRECISION, INTERVAL, or NUMERIC) Is the string to evaluate for NULL values. Example The TESTING table below shows the test scores for 5 students. Note that L. White's score is 0, and that scores are missing for S. Robinson and K. Johnson. => SELECT * FROM TESTING; Name | Score -------------+------J. Doe | 100 R. Smith | 87 L. White | 0 S. Robinson | K. Johnson | (5 rows) The next SELECT statement specifies that HP Vertica should return any Null values in the Score column as 0s. In the results, you can see that HP Vertica returns a 0 score for S. Robinson and K. Johnson. => SELECT Name, ZEROIFNULL (Score) FROM TESTING; Name | ZEROIFNULL -------------+-----------J. Doe | 100 R. Smith | 87 L. White | 0 S. Robinson | 0 K. Johnson | 0 (5 rows) HP Vertica Analytic Database (7.0.x) Page 430 of 1539 SQL Reference Manual SQL Functions Pattern Matching Functions Used with the MATCH Clause, the HP Vertica pattern matching functions return additional data about the patterns found/output. For example, you can use these functions to return values representing the name of the event or pattern that matched the input row, the sequential number of the match, or a partition-wide unique identifier for the instance of the pattern that matched. Pattern matching is particularly useful for clickstream analysis where you might want to identify users' actions based on their Web browsing behavior (page clicks). 
A typical online clickstream funnel is: Company home page -> product home page -> search -> results -> purchase online Using the above clickstream funnel, you can search for a match on the user's sequence of web clicks and identify that the user: l Landed on the company home page. l Navigated to the product page. l Ran a search. l Clicked a link from the search results. l Made a purchase. For examples that use this clickstream model, see Event Series Pattern Matching in the Programmer's Guide. See Also MATCH Clause l l EVENT_NAME Returns a VARCHAR value representing the name of the event that matched the row. Syntax EVENT_NAME() Notes Pattern matching functions must be used in MATCH Clause syntax; for example, if you call EVENT_NAME() on its own, HP Vertica returns the following error message: HP Vertica Analytic Database (7.0.x) Page 431 of 1539 SQL Reference Manual SQL Functions => SELECT event_name(); ERROR: query with pattern matching function event_name must include a MATCH clause Example Note: This example uses the schema defined in Event Series Pattern Matching in the Programmer's Guide. For a more detailed example, see that topic. The following statement analyzes users' browsing history on website2.com and identifies patterns where the user landed on website2.com from another Web site (Entry) and browsed to any number of other pages (Onsite) before making a purchase (Purchase). The query also outputs the values for EVENT_NAME(), which is the name of the event that matched the row. SELECT uid, sid, ts, refurl, pageurl, action, event_name() FROM clickstream_log MATCH (PARTITION BY uid, sid ORDER BY ts DEFINE Entry AS RefURL NOT ILIKE '%website2.com%' AND PageURL ILIKE '%website2.com%', Onsite AS PageURL ILIKE '%website2.com%' AND Action='V', Purchase AS PageURL ILIKE '%website2.com%' AND Action = 'P' PATTERN P AS (Entry Onsite* Purchase) RESULTS ALL ROWS); uid | sid | ts | refurl | pageurl | action | event_name -----+-----+----------+----------------------+----------------------+--------+----------1 | 100 | 12:00:00 | website1.com | website2.com/home | V | Entry 1 | 100 | 12:01:00 | website2.com/home | website2.com/floby | V | Onsite 1 | 100 | 12:02:00 | website2.com/floby | website2.com/shamwow | V | Onsite 1 | 100 | 12:03:00 | website2.com/shamwow | website2.com/buy | P | Purchase 2 | 100 | 12:10:00 | website1.com | website2.com/home | V | Entry 2 | 100 | 12:11:00 | website2.com/home | website2.com/forks | V | Onsite 2 | 100 | 12:13:00 | website2.com/forks | website2.com/buy | P | Purchase (7 rows) See Also l MATCH Clause l MATCH_ID PATTERN_ID l l HP Vertica Analytic Database (7.0.x) Page 432 of 1539 SQL Reference Manual SQL Functions MATCH_ID Returns a successful pattern match as an INTEGER value. The returned value is the ordinal position of a match within a partition. Syntax MATCH_ID() Notes Pattern matching functions must be used in MATCH Clause syntax; for example, if you call MATCH_ID() on its own, HP Vertica returns the following error message: => SELECT match_id(); ERROR: query with pattern matching function match_id must include a MATCH clause Example Note: This example uses the schema defined in Event Series Pattern Matching in the Programmer's Guide. For a more detailed example, see that topic. 
The following statement analyzes users' browsing history on a site called website2.com and identifies patterns where the user reached website2.com from another Web site (Entry in the MATCH clause) and browsed to any number of other pages (Onsite) before making a purchase (Purchase). The query also outputs values for the MATCH_ID(), which represents a sequential number of the match. SELECT uid, sid, ts, refurl, pageurl, action, match_id() FROM clickstream_log MATCH (PARTITION BY uid, sid ORDER BY ts DEFINE Entry AS RefURL NOT ILIKE '%website2.com%' AND PageURL ILIKE '%website2.com%', Onsite AS PageURL ILIKE '%website2.com%' AND Action='V', Purchase AS PageURL ILIKE '%website2.com%' AND Action = 'P' PATTERN P AS (Entry Onsite* Purchase) RESULTS ALL ROWS); uid | sid | ts | refurl | pageurl | action | match_id -----+-----+----------+----------------------+----------------------+--------+---------- HP Vertica Analytic Database (7.0.x) Page 433 of 1539 SQL Reference Manual SQL Functions 2 | 100 2 | 100 2 | 100 1 | 100 1 | 100 1 | 100 1 | 100 (7 rows) | | | | | | | 12:10:00 12:11:00 12:13:00 12:00:00 12:01:00 12:02:00 12:03:00 | | | | | | | website1.com website2.com/home website2.com/forks website1.com website2.com/home website2.com/floby website2.com/shamwow | | | | | | | website2.com/home website2.com/forks website2.com/buy website2.com/home website2.com/floby website2.com/shamwow website2.com/buy | | | | | | | V V P V V V P | | | | | | | 1 2 3 1 2 3 4 See Also l MATCH Clause l EVENT_NAME PATTERN_ID l l PATTERN_ID Returns an integer value that is a partition-wide unique identifier for the instance of the pattern that matched. Syntax PATTERN_ID() Notes Pattern matching functions must be used in MATCH Clause syntax; for example, if call PATTERN_ID() on its own, HP Vertica returns the following error message: => SELECT pattern_id(); ERROR: query with pattern matching function pattern_id must include a MATCH clause Example Note: This example uses the schema defined in Event Series Pattern Matching in the Programmer's Guide. For a more detailed example, see that topic. The following statement analyzes users' browsing history on website2.com and identifies patterns where the user landed on website2.com from another Web site (Entry) and browsed to any number of other pages (Onsite) before making a purchase (Purchase). The query also outputs values for PATTERN_ID(), which represents the partition-wide identifier for the instance of the pattern that matched. 
HP Vertica Analytic Database (7.0.x) Page 434 of 1539 SQL Reference Manual SQL Functions SELECT uid, sid, ts, refurl, pageurl, action, pattern_id() FROM clickstream_log MATCH (PARTITION BY uid, sid ORDER BY ts DEFINE Entry AS RefURL NOT ILIKE '%website2.com%' AND PageURL ILIKE '%website2.com%', Onsite AS PageURL ILIKE '%website2.com%' AND Action='V', Purchase AS PageURL ILIKE '%website2.com%' AND Action = 'P' PATTERN P AS (Entry Onsite* Purchase) RESULTS ALL ROWS); uid | sid | ts | refurl | pageurl | action | pattern_id -----+-----+----------+----------------------+----------------------+--------+----------2 | 100 | 12:10:00 | website1.com | website2.com/home | V | 1 2 | 100 | 12:11:00 | website2.com/home | website2.com/forks | V | 1 2 | 100 | 12:13:00 | website2.com/forks | website2.com/buy | P | 1 1 | 100 | 12:00:00 | website1.com | website2.com/home | V | 1 1 | 100 | 12:01:00 | website2.com/home | website2.com/floby | V | 1 1 | 100 | 12:02:00 | website2.com/floby | website2.com/shamwow | V | 1 1 | 100 | 12:03:00 | website2.com/shamwow | website2.com/buy | P | 1 (7 rows) See Also l MATCH Clause l EVENT_NAME MATCH_ID l l HP Vertica Analytic Database (7.0.x) Page 435 of 1539 SQL Reference Manual SQL Functions Regular Expression Functions A regular expression lets you perform pattern matching on strings of characters. The regular expression syntax allows you to very precisely define the pattern used to match strings, giving you much greater control than the wildcard matching used in the LIKE predicate. HP Vertica's regular expression functions let you perform tasks such as determining if a string value matches a pattern, extracting a portion of a string that matches a pattern, or counting the number of times a string matches a pattern. HP Vertica uses the Perl Compatible Regular Expression library (PCRE) to evaluate regular expressions. As its name implies, PCRE's regular expression syntax is compatible with the syntax used by the Perl 5 programming language. You can read PCRE's documentation on its regular expression syntax. However, you might find the Perl Regular Expressions Documentation to be a better introduction, especially if you are unfamiliar with regular expressions. Note: The regular expression functions only operate on valid UTF-8 strings. If you attempt to use a regular expression function on a string that is not valid UTF-8, then the query fails with an error. To prevent an error from occurring, you can use the ISUTF8 function as a clause in the statement to ensure the strings you want to pass to the regular expression functions are actually valid UTF-8 strings, or you can use the 'b' argument to treat the strings as binary octets rather than UTF-8 encoded strings. ISUTF8 Tests whether a string is a valid UTF-8 string. Returns true if the string conforms to UTF-8 standards, and false otherwise. This function is useful to test strings for UTF-8 compliance before passing them to one of the regular expression functions, such as REGEXP_LIKE, which expect UTF-8 characters by default. ISUTF8 checks for invalid UTF8 byte sequences, according to UTF-8 rules: l invalid bytes l an unexpected continuation byte l a start byte not followed by enough continuation bytes l an Overload Encoding The presence of an invalid UTF8 byte sequence results in a return value of false. Syntax ISUTF8( string ); Parameters string The string to test for UTF-8 compliance. 
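The introduction to the regular expression functions above suggests using ISUTF8 as a clause to screen out strings that are not valid UTF-8 before they reach a regular expression function. As a minimal sketch of that approach (the table name text_table and column name body are invented for illustration, not part of any sample schema), such a guard might look like this:

SELECT body
FROM text_table
WHERE ISUTF8(body) AND REGEXP_LIKE(body, '^[A-Z]');

The intent is that rows whose body value fails the ISUTF8 test are filtered out, so only valid UTF-8 strings are evaluated by REGEXP_LIKE.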
HP Vertica Analytic Database (7.0.x) Page 436 of 1539 SQL Reference Manual SQL Functions Examples => SELECT ISUTF8(E'\xC2\xBF'); -- UTF-8 INVERTED QUESTION MARK ISUTF8 -------t (1 row) => SELECT ISUTF8(E'\xC2\xC0'); -- UNDEFINED UTF-8 CHARACTER ISUTF8 -------f (1 row) REGEXP_COUNT Returns the number times a regular expression matches a string. Syntax REGEXP_COUNT( string, pattern [, position [, regexp_modifier ] ] ) Parameters string The string to be searched for matches. pattern The regular expression to search for within the string. The syntax of the regular expression is compatible with the Perl 5 regular expression syntax. See the Perl Regular Expressions Documentation for details. position The number of characters from the start of the string where the function should start searching for matches. The default value, 1, means to start searching for a match at the first (leftmost) character. Setting this parameter to a value greater than 1 starts searching for a match to the pattern that many characters into the string. HP Vertica Analytic Database (7.0.x) Page 437 of 1539 SQL Reference Manual SQL Functions regexp_modifier A string containing one or more single-character flags that change how the regular expression is matched against the string: b Treat strings as binary octets rather than UTF-8 characters. c Forces the match to be case sensitive (the default). i Forces the match to be case insensitive. m Treats the string being matched as multiple lines. With this modifier, the start of line (^) and end of line ($) regular expression operators match line breaks (\n) within the string. Ordinarily, these operators only match the start and end of the string. n Allows the single character regular expression operator (.) to match a newline (\n). Normally, the . operator will match any character except a newline. x Allows you to document your regular expressions. It causes all unescaped space characters and comments in the regular expression to be ignored. Comments start with a hash character (#) and end with a newline. All spaces in the regular expression that you want to be matched in strings must be escaped with a backslash (\) character. Notes This function operates on UTF-8 strings using the default locale, even if the locale has been set to something else. If you are porting a regular expression query from an Oracle database, remember that Oracle considers a zero-length string to be equivalent to NULL, while HP Vertica does not. Examples Count the number of occurrences of the substring "an" in the string "A man, a plan, a canal, Panama." => SELECT REGEXP_COUNT('a man, a plan, a canal: Panama', 'an'); REGEXP_COUNT -------------4 (1 row) Find the number of occurrences of the substring "an" in the string "a man, a plan, a canal: Panama" starting with the fifth character. => SELECT REGEXP_COUNT('a man, a plan, a canal: Panama', 'an',5); REGEXP_COUNT HP Vertica Analytic Database (7.0.x) Page 438 of 1539 SQL Reference Manual SQL Functions -------------3 (1 row) Find the number of occurrences of a substring containing a lower-case character followed by "an." In the first example, the query does not have a modifier. In the second example, the "i" query modifier is used to force the regular expression to ignore case. 
=> SELECT REGEXP_COUNT('a man, a plan, a canal: Panama', '[a-z]an'); REGEXP_COUNT -------------3 (1 row) => SELECT REGEXP_COUNT('a man, a plan, a canal: Panama', '[a-z]an', 1, 'i'); REGEXP_COUNT -------------4 REGEXP_INSTR Returns the starting or ending position in a string where a regular expression matches. This function returns 0 if no match for the regular expression is found in the string. Syntax REGEXP_INSTR( string, pattern [, position [, occurrence ... [, return_position [, regexp_modif ier ] ... [, captured_subexp ] ] ] ] ) Parameters string The string to search for the pattern. pattern The regular expression to search for within the string. The syntax of the regular expression is compatible with the Perl 5 regular expression syntax. See the Perl Regular Expressions Documentation for details. position The number of characters from the start of the string where the function should start searching for matches. The default value, 1, means to start searching for a match at the first (leftmost) character. Setting this parameter to a value greater than 1 starts searching for a match to the pattern that many characters into the string. HP Vertica Analytic Database (7.0.x) Page 439 of 1539 SQL Reference Manual SQL Functions occurrence Controls which occurrence of a match between the string and the pattern is returned. With the default value (1), the function returns the position of the first substring that matches the pattern. You can use this parameter to find the position of additional matches between the string and the pattern. For example, set this parameter to 3 to find the position of the third substring that matched the pattern. return_position Sets the position within the string that is returned. When set to the default value (0), this function returns the position in the string of the first character of the substring that matched the pattern. If you set this value to 1, the function returns the position of the first character after the end of the matching substring. regexp_modifier A string containing one or more single-character flags that change how the regular expression is matched against the string: captured_subexp b Treat strings as binary octets rather than UTF-8 characters. c Forces the match to be case sensitive (the default). i Forces the match to be case insensitive. m Treats the string being matched as multiple lines. With this modifier, the start of line (^) and end of line ($) regular expression operators match line breaks (\n) within the string. Ordinarily, these operators only match the start and end of the string. n Allows the single character regular expression operator (.) to match a newline (\n). Normally, the . operator will match any character except a newline. x Allows you to document your regular expressions. It causes all unescaped space characters and comments in the regular expression to be ignored. Comments start with a hash character (#) and end with a newline. All spaces in the regular expression that you want to be matched in strings must be escaped with a backslash (\) character. The captured subexpression whose position should be returned. If omitted or set to 0, the function returns the position of the first character in the entire string that matched the regular expression. If set to 1 through 9, the function returns the subexpression captured by the corresponding set of parentheses in the regular expression. For example, setting this value to 3 returns the substring captured by the third set of parentheses in the regular expression. 
Note: The subexpressions are numbered left to right, based on the appearance of opening parenthesis, so nested regular expressions . For example, in the regular expression \s*(\w+\s+(\w+)), subexpression 1 is the one that captures everything but any leading whitespaces. HP Vertica Analytic Database (7.0.x) Page 440 of 1539 SQL Reference Manual SQL Functions Notes This function operates on UTF-8 strings using the default locale, even if the locale has been set to something else. If you are porting a regular expression query from an Oracle database, remember that Oracle considers a zero-length string to be equivalent to NULL, while HP Vertica does not. Examples Find the first occurrence of a sequence of letters starting with the letter e and ending with the letter y in the phrase "easy come, easy go." => SELECT REGEXP_INSTR('easy come, easy go','e\w*y'); REGEXP_INSTR -------------1 (1 row) Find the first occurrence of a sequence of letters starting with the letter e and ending with the letter y starting at the second character in the string "easy come, easy go." => SELECT REGEXP_INSTR('easy come, easy go','e\w*y',2); REGEXP_INSTR -------------12 (1 row) Find the second sequence of letters starting with the letter e and ending with the letter y in the string "easy come, easy go" starting at the first character. => SELECT REGEXP_INSTR('easy come, easy go','e\w*y',1,2); REGEXP_INSTR -------------12 (1 row) Find the position of the first character after the first whitespace in the string "easy come, easy go." => SELECT REGEXP_INSTR('easy come, easy go','\s',1,1,1); REGEXP_INSTR -------------6 (1 row) Find the position of the start of the third word in a string by capturing each word as a subexpression, and returning the third subexpression's start position. => SELECT REGEXP_INSTR('one two three','(\w+)\s+(\w+)\s+(\w+)', 1,1,0,'',3); REGEXP_INSTR -------------9 (1 row) HP Vertica Analytic Database (7.0.x) Page 441 of 1539 SQL Reference Manual SQL Functions REGEXP_LIKE Returns true if the string matches the regular expression. This function is similar to the LIKEpredicate, except that it uses regular expressions rather than simple wildcard character matching. Syntax REGEXP_LIKE( string, pattern [, modifiers ] ) Parameters string The string to match against the regular expression. pattern A string containing the regular expression to match against the string. The syntax of the regular expression is compatible with the Perl 5 regular expression syntax. See the Perl Regular Expressions Documentation for details. modifiers A string containing one or more single-character flags that change how the regular expression is matched against the string: b Treat strings as binary octets rather than UTF-8 characters. c Forces the match to be case sensitive (the default). i Forces the match to be case insensitive. m Treats the string being matched as multiple lines. With this modifier, the start of line (^) and end of line ($) regular expression operators match line breaks (\n) within the string. Ordinarily, these operators only match the start and end of the string. n Allows the single character regular expression operator (.) to match a newline (\n). Normally, the . operator will match any character except a newline. x Allows you to document your regular expressions. It causes all unescaped space characters and comments in the regular expression to be ignored. Comments start with a hash character (#) and end with a newline. 
All spaces in the regular expression that you want to be matched in strings must be escaped with a backslash (\) character. Notes This function operates on UTF-8 strings using the default locale, even if the locale has been set to something else. If you are porting a regular expression query from an Oracle database, remember that Oracle considers a zero-length string to be equivalent to NULL, while HP Vertica does not. HP Vertica Analytic Database (7.0.x) Page 442 of 1539 SQL Reference Manual SQL Functions Examples This example creates a table containing several strings to demonstrate regular expressions. => CREATE TABLE t (v VARCHAR); CREATE TABLE => CREATE PROJECTION t1 AS SELECT * FROM t; CREATE PROJECTION => COPY t FROM stdin; Enter data to be copied followed by a newline. End with a backslash and a period on a line by itself. >> aaa >> Aaa >> abc >> abc1 >> 123 >> \. => SELECT * FROM t; v ------aaa Aaa abc abc1 123 (5 rows) Select all records in the table that contain the letter "a." => SELECT v FROM t WHERE REGEXP_LIKE(v,'a'); v -----Aaa aaa abc abc1 (4 rows) Select all of the rows in the table that start with the letter "a." => SELECT v FROM t WHERE REGEXP_LIKE(v,'^a'); v -----aaa abc abc1 (3 rows) Select all rows that contain the substring "aa." => SELECT v FROM t WHERE REGEXP_LIKE(v,'aa'); v HP Vertica Analytic Database (7.0.x) Page 443 of 1539 SQL Reference Manual SQL Functions ----Aaa aaa (2 rows) Select all rows that contain a digit. => SELECT v FROM t WHERE REGEXP_LIKE(v,'\d'); v -----123 abc1 (2 rows) Select all rows that contain the substring "aaa." => SELECT v FROM t WHERE REGEXP_LIKE(v,'aaa'); v ----aaa (1 row) Select all rows that contain the substring "aaa" using case insensitive matching. => SELECT v FROM t WHERE REGEXP_LIKE(v,'aaa', 'i'); v ----Aaa aaa (2 rows) Select rows that contain the substring "a b c." => SELECT v FROM t WHERE REGEXP_LIKE(v,'a b c'); v --(0 rows) Select rows that contain the substring "a b c" ignoring space within the regular expression. => SELECT v FROM t WHERE REGEXP_LIKE(v,'a b c','x'); v -----abc abc1 (2 rows) Add multi-line rows to demonstrate using the "m" modifier. => COPY t FROM stdin RECORD TERMINATOR '!'; HP Vertica Analytic Database (7.0.x) Page 444 of 1539 SQL Reference Manual SQL Functions Enter data to be copied followed by a newline. End with a backslash and a period on a line by itself. >> Record 1 line 1 >> Record 1 line 2 >> Record 1 line 3! >> Record 2 line 1 >> Record 2 line 2 >> Record 2 line 3! >> \. Select rows that start with the substring "Record" and end with the substring "line 2." => SELECT v from t WHERE REGEXP_LIKE(v,'^Record.*line 2$'); v --(0 rows) Select rows that start with the substring "Record" and end with the substring "line 2," treating multiple lines as separate strings. => SELECT v from t WHERE REGEXP_LIKE(v,'^Record.*line 2$','m'); v -------------------------------------------------Record 2 Record 2 Record 2 Record 1 Record 1 Record 1 (2 rows) line line line line line line 1 2 3 1 2 3 REGEXP_REPLACE Replace all occurrences of a substring that match a regular expression with another substring. It is similar to the REPLACE function, except it uses a regular expression to select the substring to be replaced. Syntax REGEXP_REPLACE( string, target [, replacement [, position [, occurrence ... [, regexp_modifier s ] ] ] ] ) Parameters string The string whose to be searched and replaced. target The regular expression to search for within the string. 
The syntax of the regular expression is compatible with the Perl 5 regular expression syntax. See the Perl Regular Expressions Documentation for details.

replacement       The string to replace matched substrings. If not supplied, the matched substrings are deleted. This string can contain backreferences to substrings captured by the regular expression. The first captured substring is inserted into the replacement string using \1, the second using \2, and so on.

position          The number of characters from the start of the string where the function should start searching for matches. The default value, 1, means to start searching for a match at the first (leftmost) character. Setting this parameter to a value greater than 1 starts searching for a match to the pattern that many characters into the string.

occurrence        Controls which occurrence of a match between the string and the pattern is replaced. With the default value (0), the function replaces all matching substrings with the replacement string. For any value above zero, the function replaces just that single occurrence. For example, set this parameter to 3 to replace only the third substring that matches the pattern.

regexp_modifier   A string containing one or more single-character flags that change how the regular expression is matched against the string:

   b   Treat strings as binary octets rather than UTF-8 characters.
   c   Forces the match to be case sensitive (the default).
   i   Forces the match to be case insensitive.
   m   Treats the string being matched as multiple lines. With this modifier, the start of line (^) and end of line ($) regular expression operators match line breaks (\n) within the string. Ordinarily, these operators only match the start and end of the string.
   n   Allows the single character regular expression operator (.) to match a newline (\n). Normally, the . operator will match any character except a newline.
   x   Allows you to document your regular expressions. It causes all unescaped space characters and comments in the regular expression to be ignored. Comments start with a hash character (#) and end with a newline. All spaces in the regular expression that you want to be matched in strings must be escaped with a backslash (\) character.

Notes

This function operates on UTF-8 strings using the default locale, even if the locale has been set to something else.

If you are porting a regular expression query from an Oracle database, remember that Oracle considers a zero-length string to be equivalent to NULL, while HP Vertica does not.

Another key difference between Oracle and HP Vertica is that HP Vertica can handle an unlimited number of captured subexpressions, while Oracle is limited to nine. In HP Vertica, you can use \10 in the replacement pattern to access the substring captured by the tenth set of parentheses in the regular expression. In Oracle, \10 is treated as the substring captured by the first set of parentheses followed by a zero. To force this Oracle behavior in HP Vertica, use the \g backreference with the number of the captured subexpression enclosed in curly braces. For example, \g{1}0 is the substring captured by the first set of parentheses followed by a zero.

You can also name your captured subexpressions to make your regular expressions less ambiguous. See the PCRE documentation for details.
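For instance, the following sketch illustrates the \g{} form described above (the input string and pattern are invented for illustration, not taken from the manual's sample schema). Each captured "o" is replaced by the captured character followed by a literal zero, something that cannot be written unambiguously as \10:

=> SELECT REGEXP_REPLACE('book', '(o)', '\g{1}0');
 REGEXP_REPLACE
----------------
 bo0o0k
(1 row)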
Examples

Find groups of "word characters" (letters, numbers, and underscores) ending with "thy" in the string "healthy, wealthy, and wise" and replace them with nothing.

=> SELECT REGEXP_REPLACE('healthy, wealthy, and wise','\w+thy');
 REGEXP_REPLACE
----------------
 , , and wise
(1 row)

Find groups of word characters ending with "thy" and replace them with the string "something."

=> SELECT REGEXP_REPLACE('healthy, wealthy, and wise','\w+thy', 'something');
         REGEXP_REPLACE
--------------------------------
 something, something, and wise
(1 row)

Find groups of word characters ending with "thy" and replace them with the string "something," starting at the third character in the string.

=> SELECT REGEXP_REPLACE('healthy, wealthy, and wise','\w+thy', 'something', 3);
          REGEXP_REPLACE
----------------------------------
 hesomething, something, and wise
(1 row)

Replace the second group of word characters ending with "thy" with the string "something."

=> SELECT REGEXP_REPLACE('healthy, wealthy, and wise','\w+thy', 'something', 1, 2);
        REGEXP_REPLACE
------------------------------
 healthy, something, and wise
(1 row)

Find groups of word characters ending with "thy," capturing the letters before the "thy," and replace them with the captured letters plus the letters "ish."

=> SELECT REGEXP_REPLACE('healthy, wealthy, and wise','(\w+)thy', '\1ish');
       REGEXP_REPLACE
----------------------------
 healish, wealish, and wise
(1 row)

Create a table to demonstrate replacing strings in a query.

=> CREATE TABLE customers (name varchar(50), phone varchar(11));
CREATE TABLE
=> CREATE PROJECTION customers1 AS SELECT * FROM customers;
CREATE PROJECTION
=> COPY customers FROM stdin;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> Able, Adam|17815551234
>> Baker,Bob|18005551111
>> Chu,Cindy|16175559876
>> Dodd,Dinara|15083452121
>> \.

Query the customers, using REGEXP_REPLACE to format the phone numbers.

=> SELECT name, REGEXP_REPLACE(phone, '(\d)(\d{3})(\d{3})(\d{4})', '\1-(\2) \3-\4') AS phone FROM customers;
    name     |      phone
-------------+------------------
 Able, Adam  | 1-(781) 555-1234
 Baker,Bob   | 1-(800) 555-1111
 Chu,Cindy   | 1-(617) 555-9876
 Dodd,Dinara | 1-(508) 345-2121
(4 rows)

REGEXP_SUBSTR

Returns the substring that matches a regular expression within a string. If no matches are found, this function returns NULL. This is different from an empty string, which the function can return if the regular expression matches a zero-length string.

Syntax

REGEXP_SUBSTR( string, pattern [, position [, occurrence ... [, regexp_modifier ] [, captured_subexp ] ] ] )

Parameters

string    The string to search for the pattern.

pattern   The regular expression to find the substring to be extracted. The syntax of the regular expression is compatible with the Perl 5 regular expression syntax. See the Perl Regular Expressions Documentation for details.
By setting this value to a number greater than 1, this function will return subsequent matching substrings. For example, setting this parameter to 3 will return the third substring that matches the regular expression within the string. regexp_modifier A string containing one or more single-character flags that change how the regular expression is matched against the string: captured_subexp b Treat strings as binary octets rather than UTF-8 characters. c Forces the match to be case sensitive (the default). i Forces the match to be case insensitive. m Treats the string being matched as multiple lines. With this modifier, the start of line (^) and end of line ($) regular expression operators match line breaks (\n) within the string. Ordinarily, these operators only match the start and end of the string. n Allows the single character regular expression operator (.) to match a newline (\n). Normally, the . operator will match any character except a newline. x Allows you to document your regular expressions. It causes all unescaped space characters and comments in the regular expression to be ignored. Comments start with a hash character (#) and end with a newline. All spaces in the regular expression that you want to be matched in strings must be escaped with a backslash (\) character. The captured subexpression whose contents should be returned. If omitted or set to 0, the function returns the entire string that matched the regular expression. If set to 1 through 9, the function returns the subexpression captured by the corresponding set of parentheses in the regular expression. For example, setting this value to 3 returns the substring captured by the third set of parentheses in the regular expression. Note: The subexpressions are numbered left to right, based on the appearance of opening parenthesis, so nested regular expressions . For example, in the regular expression \s*(\w+\s+(\w+)), subexpression 1 is the one that captures everything but any leading whitespaces. HP Vertica Analytic Database (7.0.x) Page 449 of 1539 SQL Reference Manual SQL Functions Notes This function operates on UTF-8 strings using the default locale, even if the locale has been set to something else. If you are porting a regular expression query from an Oracle database, remember that Oracle considers a zero-length string to be equivalent to NULL, while HP Vertica does not. Examples Select the first substring of letters that end with "thy." => SELECT REGEXP_SUBSTR('healthy, wealthy, and wise','\w+thy'); REGEXP_SUBSTR --------------healthy (1 row) Select the first substring of letters that ends with "thy" starting at the second character in the string. => SELECT REGEXP_SUBSTR('healthy, wealthy, and wise','\w+thy',2); REGEXP_SUBSTR --------------ealthy (1 row) Select the second substring of letters that ends with "thy." => SELECT REGEXP_SUBSTR('healthy, wealthy, and wise','\w+thy',1,2); REGEXP_SUBSTR --------------wealthy (1 row) Return the contents of the third captured subexpression, which captures the third word in the string. => SELECT REGEXP_SUBSTR('one two three', '(\w+)\s+(\w+)\s+(\w+)', 1, 1, '', 3); REGEXP_SUBSTR --------------three (1 row) HP Vertica Analytic Database (7.0.x) Page 450 of 1539 SQL Reference Manual SQL Functions Sequence Functions The sequence functions provide simple, multiuser-safe methods for obtaining successive sequence values from sequence objects. NEXTVAL Returns the next value in a sequence. 
Calling NEXTVAL after creating a sequence initializes the sequence with its default value. Thereafter, calling NEXTVAL increments the sequence value for ascending sequences, or decrements it for descending sequences. NEXTVAL is used in INSERT, COPY, and SELECT statements to create unique values.

Behavior Type

Volatile

Syntax

[[db-name.]schema.]sequence_name.NEXTVAL
NEXTVAL('[[db-name.]schema.]sequence_name')

Parameters

[[db-name.]schema.]   [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from databases and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, you can specify a table and column (mytable.column1); a schema, table, and column (myschema.mytable.column1); or, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1).

sequence_name         Identifies the sequence for which to determine the next value.

Permissions

l SELECT privilege on sequence
l USAGE privilege on sequence schema

Examples

The following example creates an ascending sequence called my_seq, starting at 101:

CREATE SEQUENCE my_seq START 101;

The following command generates the first number in the sequence:

SELECT NEXTVAL('my_seq');
 nextval
---------
     101
(1 row)

The following command generates the next number in the sequence:

SELECT NEXTVAL('my_seq');
 nextval
---------
     102
(1 row)

The following command illustrates how NEXTVAL is evaluated on a per-row basis; in this example, both calls to NEXTVAL yield the same result:

SELECT NEXTVAL('my_seq'), NEXTVAL('my_seq');
 nextval | nextval
---------+---------
     103 |     103
(1 row)

The following example illustrates how NEXTVAL is always evaluated first (and here, increments the my_seq sequence from its previous value), even when CURRVAL precedes NEXTVAL:

SELECT CURRVAL('my_seq'), NEXTVAL('my_seq');
 currval | nextval
---------+---------
     104 |     104
(1 row)

The following example shows how to use NEXTVAL in a table SELECT statement. Notice that the nextval column is incremented by 1 for each row:

SELECT NEXTVAL('my_seq'), product_description FROM product_dimension LIMIT 10;
 nextval |     product_description
---------+------------------------------
     105 | Brand #2 bagels
     106 | Brand #1 butter
     107 | Brand #6 chicken noodle soup
     108 | Brand #5 golf clubs
     109 | Brand #4 brandy
     110 | Brand #3 lamb
     111 | Brand #11 vanilla ice cream
     112 | Brand #10 ground beef
     113 | Brand #9 camera case
     114 | Brand #8 halibut
(10 rows)

See Also

l ALTER SEQUENCE
l CREATE SEQUENCE
l CURRVAL
l DROP SEQUENCE
l Using Named Sequences
l Sequence Privileges

CURRVAL

For a sequence generator, returns the LAST value across all nodes returned by a previous invocation of NEXTVAL in the same session. If there were no calls to NEXTVAL after the sequence was created, an error is returned.

Behavior Type

Volatile

Syntax

[[db-name.]schema.]sequence_name.CURRVAL
CURRVAL('[[db-name.]schema.]sequence_name')

Parameters

[[db-name.]schema.]
[Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). sequence_name Identifies the sequence for which to return the current value. Permissions l SELECT privilege on sequence l USAGE privilege on sequence schema Examples The following example creates an ascending sequence called sequential, starting at 101: CREATE SEQUENCE seq2 START 101; You cannot call CURRVAL until after you have initiated the sequence with NEXTVAL or the system returns an error: SELECT CURRVAL('seq2'); ERROR: Sequence seq2 has not been accessed in the session Use the NEXTVAL function to generate the first number for this sequence: SELECT NEXTVAL('seq2'); nextval --------101 (1 row) Now you can use CURRVAL to return the current number from this sequence: HP Vertica Analytic Database (7.0.x) Page 454 of 1539 SQL Reference Manual SQL Functions SELECT CURRVAL('seq2'); currval --------101 (1 row) The following command shows how to use CURRVAL in a SELECT statement: CREATE TABLE customer3 ( lname VARCHAR(25), fname VARCHAR(25), membership_card INTEGER, ID INTEGER ); INSERT INTO customer3 VALUES ('Brown' ,'Sabra', 072753, CURRVAL('my_seq')); SELECT CURRVAL('seq2'), lname FROM customer3; CURRVAL | lname ---------+------101 | Brown (1 row) The following example illustrates how the NEXTVAL is always evaluated first (and here, increments the my_seq sequence from its previous value), even when CURRVAL precedes NEXTVAL: SELECT CURRVAL('my_seq'), NEXTVAL('my_seq'); currval | nextval ---------+--------102 | 102 (1 row) See Also l ALTER SEQUENCE l CREATE SEQUENCE l DROP SEQUENCE NEXTVAL l l l LAST_INSERT_ID Returns the last value of a column whose value is automatically incremented through the AUTO_ INCREMENT or IDENTITY Column-Constraint. If multiple sessions concurrently load the same HP Vertica Analytic Database (7.0.x) Page 455 of 1539 SQL Reference Manual SQL Functions table, the returned value is the last value generated for an AUTO_INCREMENT column by an insert in that session. Behavior Type Volatile Syntax LAST_INSERT_ID() Privileges l Table owner l USAGE privileges on schema Notes l This function works only with AUTO_INCREMENT and IDENTITY columns. See columnconstraints for the CREATE TABLE statement. l LAST_INSERT_ID does not work with sequence generators created through the CREATE SEQUENCE statement. Examples Create a sample table called customer4. => CREATE TABLE customer4( ID IDENTITY(2,2), lname VARCHAR(25), fname VARCHAR(25), membership_card INTEGER ); => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Gupta', 'Saleem', 475987); Notice that the IDENTITY column has a seed of 2, which specifies the value for the first row loaded into the table, and an increment of 2, which specifies the value that is added to the IDENTITY value of the previous row. 
Query the table you just created: => SELECT * FROM customer4; ID | lname | fname | membership_card ----+-------+--------+----------------2 | Gupta | Saleem | 475987 (1 row) HP Vertica Analytic Database (7.0.x) Page 456 of 1539 SQL Reference Manual SQL Functions Insert some additional values: => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Lee', 'Chen', 598742); Call the LAST_INSERT_ID function: => SELECT LAST_INSERT_ID(); LAST_INSERT_ID ---------------4 (1 row) Query the table again: => SELECT * FROM customer4; ID | lname | fname | membership_card ----+-------+--------+----------------2 | Gupta | Saleem | 475987 4 | Lee | Chen | 598742 (2 rows) Add another row: => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Davis', 'Bill', 469543); Call the LAST_INSERT_ID function: => SELECT LAST_INSERT_ID(); LAST_INSERT_ID ---------------6 (1 row) Query the table again: => SELECT * FROM customer4; ID | lname | fname ----+-------+--------+----------------2 | Gupta | Saleem | 475987 4 | Lee | Chen | 598742 6 | Davis | Bill | 469543 (3 rows) | membership_card See Also l ALTER SEQUENCE l CREATE SEQUENCE HP Vertica Analytic Database (7.0.x) Page 457 of 1539 SQL Reference Manual SQL Functions l DROP SEQUENCE l SEQUENCES l Using Named Sequences l Sequence Privileges HP Vertica Analytic Database (7.0.x) Page 458 of 1539 SQL Reference Manual SQL Functions String Functions String functions perform conversion, extraction, or manipulation operations on strings, or return information about strings. This section describes functions and operators for examining and manipulating string values. Strings in this context include values of the types CHAR, VARCHAR, BINARY, and VARBINARY. Unless otherwise noted, all of the functions listed in this section work on all four data types. As opposed to some other SQL implementations, HP Vertica keeps CHAR strings unpadded internally, padding them only on final output. So converting a CHAR(3) 'ab' to VARCHAR(5) results in a VARCHAR of length 2, not one with length 3 including a trailing space. Some of the functions described here also work on data of non-string types by converting that data to a string representation first. Some functions work only on character strings, while others work only on binary strings. Many work for both. BINARY and VARBINARY functions ignore multibyte UTF-8 character boundaries. Non-binary character string functions handle normalized multibyte UTF-8 characters, as specified by the Unicode Consortium. Unless otherwise specified, those character string functions for which it matters can optionally specify whether VARCHAR arguments should be interpreted as octet (byte) sequences, or as (locale-aware) sequences of UTF-8 characters. This is accomplished by adding "USING OCTETS" or "USING CHARACTERS" (default) as a parameter to the function. Some character string functions are stable because in general UTF-8 case-conversion, searching and sorting can be locale dependent. Thus, LOWER is stable, while LOWERB is immutable. The USING OCTETS clause converts these functions into their "B" forms, so they become immutable. If the locale is set to collation=binary, which is the default, all string functions—except CHAR_ LENGTH/CHARACTER_LENGTH, LENGTH, SUBSTR, and OVERLAY—are converted to their "B" forms and so are immutable. BINARY implicitly converts to VARBINARY, so functions that take VARBINARY arguments work with BINARY. ASCII Converts the first octet of a VARCHAR to an INTEGER. 
Behavior Type Immutable Syntax ASCII ( expression ) HP Vertica Analytic Database (7.0.x) Page 459 of 1539 SQL Reference Manual SQL Functions Parameters expression (VARCHAR) is the string to convert. Notes l ASCII is the opposite of the CHR function. l ASCII operates on UTF-8 characters, not only on single-byte ASCII characters. It continues to get the same results for the ASCII subset of UTF-8. Examples Expression Result SELECT ASCII('A'); 65 SELECT ASCII('ab'); 97 SELECT ASCII(null); SELECT ASCII(''); BIT_LENGTH Returns the length of the string expression in bits (bytes * 8) as an INTEGER. Behavior Type Immutable Syntax BIT_LENGTH ( expression ) Parameters expression (CHAR or VARCHAR or BINARY or VARBINARY) is the string to convert. Notes BIT_LENGTH applies to the contents of VARCHAR and VARBINARY fields. HP Vertica Analytic Database (7.0.x) Page 460 of 1539 SQL Reference Manual SQL Functions Examples Expression Result SELECT BIT_LENGTH('abc'::varbinary); 24 SELECT BIT_LENGTH('abc'::binary); 8 SELECT BIT_LENGTH(''::varbinary); 0 SELECT BIT_LENGTH(''::binary); 8 SELECT BIT_LENGTH(null::varbinary); SELECT BIT_LENGTH(null::binary); SELECT BIT_LENGTH(VARCHAR 'abc'); 24 SELECT BIT_LENGTH(CHAR 'abc'); 24 SELECT BIT_LENGTH(CHAR(6) 'abc'); 48 SELECT BIT_LENGTH(VARCHAR(6) 'abc'); 24 SELECT BIT_LENGTH(BINARY(6) 'abc'); 48 SELECT BIT_LENGTH(BINARY 'abc'); 24 SELECT BIT_LENGTH(VARBINARY 'abc'); 24 SELECT BIT_LENGTH(VARBINARY(6) 'abc'); 24 See Also l CHARACTER_LENGTH l LENGTH l OCTET_LENGTH BITCOUNT Returns the number of one-bits (sometimes referred to as set-bits) in the given VARBINARY value. This is also referred to as the population count. Behavior Type Immutable Syntax BITCOUNT ( expression ) HP Vertica Analytic Database (7.0.x) Page 461 of 1539 SQL Reference Manual SQL Functions Parameters expression (BINARY or VARBINARY) is the string to return. Examples SELECT BITCOUNT(HEX_TO_BINARY('0x10')); bitcount ---------1 (1 row) SELECT BITCOUNT(HEX_TO_BINARY('0xF0')); bitcount ---------4 (1 row) SELECT BITCOUNT(HEX_TO_BINARY('0xAB')) bitcount ---------5 (1 row) BITSTRING_TO_BINARY Translates the given VARCHAR bitstring representation into a VARBINARY value. Behavior Type Immutable Syntax BITSTRING_TO_BINARY ( expression ) Parameters expression (VARCHAR) is the string to return. Notes VARBINARY BITSTRING_TO_BINARY(VARCHAR) converts data from character type (in bitstring format) to binary type. This function is the inverse of TO_BITSTRING. BITSTRING_TO_BINARY(TO_BITSTRING(x)) = x HP Vertica Analytic Database (7.0.x) Page 462 of 1539 SQL Reference Manual SQL Functions TO_BITSTRING(BITSTRING_TO_BINARY(x)) = x Examples If there are an odd number of characters in the hex value, the first character is treated as the low nibble of the first (furthest to the left) byte. SELECT BITSTRING_TO_BINARY('0110000101100010'); bitstring_to_binary --------------------ab (1 row) If an invalid bitstring is supplied, the system returns an error: SELECT BITSTRING_TO_BINARY('010102010'); ERROR: invalid bitstring "010102010" BTRIM Removes the longest string consisting only of specified characters from the start and end of a string. Behavior Type Immutable Syntax BTRIM ( expression [ , characters-to-remove ] ) Parameters expression (CHAR or VARCHAR) is the string to modify characters-to-remove (CHAR or VARCHAR) specifies the characters to remove. The default is the space character. 
Example

SELECT BTRIM('xyxtrimyyx', 'xy');
 btrim
-------
 trim
(1 row)

See Also

l LTRIM
l RTRIM
l TRIM

CHARACTER_LENGTH

The CHARACTER_LENGTH() function:

l Returns the string length in UTF-8 characters for CHAR and VARCHAR columns
l Returns the string length in bytes (octets) for BINARY and VARBINARY columns
l Strips the padding from CHAR expressions but not from VARCHAR expressions
l Is identical to LENGTH() for CHAR and VARCHAR. For binary types, CHARACTER_LENGTH() is identical to OCTET_LENGTH().

Behavior Type

Immutable if USING OCTETS, stable otherwise.

Syntax

[ CHAR_LENGTH | CHARACTER_LENGTH ] ( expression [ USING { CHARACTERS | OCTETS } ] )

Parameters

expression                   (CHAR or VARCHAR) is the string to measure.

USING CHARACTERS | OCTETS    Determines whether the character length is expressed in characters (the default) or octets.

Examples

SELECT CHAR_LENGTH('1234  '::CHAR(10) USING OCTETS);
 char_length
-------------
           4
(1 row)

SELECT CHAR_LENGTH('1234  '::VARCHAR(10));
 char_length
-------------
           6
(1 row)

SELECT CHAR_LENGTH(NULL::CHAR(10)) IS NULL;
 ?column?
----------
 t
(1 row)

See Also

l BIT_LENGTH

CHR

Converts the first octet of an INTEGER to a VARCHAR.

Behavior Type

Immutable

Syntax

CHR ( expression )

Parameters

expression    (INTEGER) is the string to convert and is masked to a single octet.

Notes

l CHR is the opposite of the ASCII function.
l CHR operates on UTF-8 characters, not only on single-byte ASCII characters. It continues to get the same results for the ASCII subset of UTF-8.

Examples

Expression            Result
SELECT CHR(65);       A
SELECT CHR(65+32);    a
SELECT CHR(null);

CONCAT

Concatenates two or more VARBINARY strings. The return value is of type VARBINARY.

Syntax

CONCAT ('a','b')

Behavior Type

Immutable

Parameters

a    First VARBINARY string.
b    Second VARBINARY string.

Example

=> SELECT CONCAT ('A','B');
 CONCAT
--------
 AB
(1 row)

DECODE

Compares expression to each search value one by one. If expression is equal to a search, the function returns the corresponding result. If no match is found, the function returns default. If default is omitted, the function returns NULL.

Behavior Type

Immutable

Syntax

DECODE ( expression, search, result [ , search, result ]...[, default ] );

Parameters

expression    The value to compare.

search        The value compared against expression.

result        The value returned, if expression is equal to search.

default       Optional. If no matches are found, DECODE returns default. If default is omitted, DECODE returns NULL.

Notes

DECODE is similar to the IF-THEN-ELSE and CASE expressions:

CASE expression
  WHEN search THEN result
  [WHEN search THEN result]
  [ELSE default]
END;

The arguments can have any data type supported by HP Vertica. The result types of the individual results are promoted to the least common type that can be used to represent all of them. This leads to a character string type, an exact numeric type, an approximate numeric type, or a DATETIME type; all the various result arguments must be of the same type grouping.

Example

The following example converts numeric values in the weight column from the product_dimension table to descriptive values in the output.
SELECT product_description, DECODE(weight,
    2, 'Light',
   50, 'Medium',
   71, 'Heavy',
   99, 'Call for help',
       'N/A')
FROM product_dimension
WHERE category_description = 'Food'
AND department_description = 'Canned Goods'
AND sku_number BETWEEN 'SKU-#49750' AND 'SKU-#49999'
LIMIT 15;

        product_description        |     case
-----------------------------------+---------------
 Brand #499 canned corn            | N/A
 Brand #49900 fruit cocktail       | Medium
 Brand #49837 canned tomatoes      | Heavy
 Brand #49782 canned peaches       | N/A
 Brand #49805 chicken noodle soup  | N/A
 Brand #49944 canned chicken broth | N/A
 Brand #49819 canned chili         | N/A
 Brand #49848 baked beans          | N/A
 Brand #49989 minestrone soup      | N/A
 Brand #49778 canned peaches       | N/A
 Brand #49770 canned peaches       | N/A
 Brand #4977 fruit cocktail        | N/A
 Brand #49933 canned olives        | N/A
 Brand #49750 canned olives        | Call for help
 Brand #49777 canned tomatoes      | N/A
(15 rows)

GREATEST

Returns the largest value in a list of expressions.

Behavior Type

Stable

Syntax

GREATEST ( expression1, expression2, ... expression-n );

Parameters

expression1, expression2, and expression-n are the expressions to be evaluated.

Notes

l Works for all data types, and implicitly casts similar types. See Examples.
l A NULL value in any one of the expressions returns NULL.
l Depends on the collation setting of the locale.

Examples

This example returns 9 as the greatest in the list of expressions:

SELECT GREATEST(7, 5, 9);
 greatest
----------
        9
(1 row)

Note that putting quotes around the integer expressions returns the same result as the first example:

SELECT GREATEST('7', '5', '9');
 greatest
----------
 9
(1 row)

The next example returns FLOAT 1.5 as the greatest because the integer is implicitly cast to float:

SELECT GREATEST(1, 1.5);
 greatest
----------
      1.5
(1 row)

The following example returns 'vertica' as the greatest:

SELECT GREATEST('vertica', 'analytic', 'database');
 greatest
----------
 vertica
(1 row)

Notice this next command returns NULL:

SELECT GREATEST('vertica', 'analytic', 'database', null);
 greatest
----------

(1 row)

And one more:

SELECT GREATEST('sit', 'site', 'sight');
 greatest
----------
 site
(1 row)

See Also

l LEAST

GREATESTB

Returns its greatest argument, using binary ordering, not UTF-8 character ordering.

Behavior Type

Immutable
Examples The following command selects straße as the greatest in the series of inputs: SELECT GREATESTB('straße', 'strasse'); GREATESTB ----------straße (1 row) This example returns 9 as the greatest in the list of expressions: SELECT GREATESTB(7, 5, 9); GREATESTB ----------9 (1 row) Note that putting quotes around the integer expressions returns the same result as the first example: GREATESTB ----------9 (1 row) The next example returns FLOAT 1.5 as the greatest because the integer is implicitly cast to float: SELECT GREATESTB(1, 1.5); GREATESTB ----------1.5 HP Vertica Analytic Database (7.0.x) Page 470 of 1539 SQL Reference Manual SQL Functions (1 row) The following example returns vertica as the greatest: SELECT GREATESTB('vertica', 'analytic', 'database'); GREATESTB ----------vertica (1 row) Notice this next command returns NULL: SELECT GREATESTB('vertica', 'analytic', 'database', null); GREATESTB ----------(1 row) And one more: SELECT GREATESTB('sit', 'site', 'sight'); GREATESTB ----------site (1 row) See Also l LEASTB HEX_TO_BINARY Translates the given VARCHAR hexadecimal representation into a VARBINARY value. Behavior Type Immutable Syntax HEX_TO_BINARY ( [ 0x ] expression ) Parameters expression (BINARY or VARBINARY) String to translate. 0x Optional prefix. HP Vertica Analytic Database (7.0.x) Page 471 of 1539 SQL Reference Manual SQL Functions Notes VARBINARY HEX_TO_BINARY(VARCHAR) converts data from character type in hexadecimal format to binary type. This function is the inverse of TO_HEX. HEX_TO_BINARY(TO_HEX(x)) = x) TO_HEX(HEX_TO_BINARY(x)) = x) If there are an odd number of characters in the hexadecimal value, the first character is treated as the low nibble of the first (furthest to the left) byte. Examples If the given string begins with "0x" the prefix is ignored. For example: => SELECT HEX_TO_BINARY('0x6162') AS hex1, HEX_TO_BINARY('6162') AS hex2; hex1 | hex2 ------+-----ab | ab (1 row) If an invalid hex value is given, HP Vertica returns an “invalid binary representation" error; for example: => SELECT HEX_TO_BINARY('0xffgf'); ERROR: invalid hex string "0xffgf" See Also l TO_HEX HEX_TO_INTEGER Translates the given VARCHAR hexadecimal representation into an INTEGER value. HP Vertica completes this conversion as follows: l Adds the 0x prefix if it is not specified in the input l Casts the VARCHAR string to a NUMERIC l Casts the NUMERIC to an INTEGER Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 472 of 1539 SQL Reference Manual SQL Functions Syntax HEX_TO_INTEGER ( [ 0x ] expression ) Parameters expression VARCHAR is the string to translate. 0x Is the optional prefix. Examples You can enter the string with or without the Ox prefix. For example: => SELECT HEX_TO_INTEGER ('0aedc') AS hex1,HEX_TO_INTEGER ('aedc') AS hex2; hex1 | hex2 -------+------44764 | 44764 (1 row) If you pass the function an invalid hex value, HP Vertica returns an invalid input syntax error; for example: => SELECT HEX_TO_INTEGER ('0xffgf'); ERROR 3691: Invalid input syntax for numeric: "0xffgf" See Also l TO_HEX l HEX_TO_BINARY INET_ATON Returns an integer that represents the value of the address in host byte order, given the dotted-quad representation of a network address as a string. Behavior Type Immutable Syntax INET_ATON ( expression ) HP Vertica Analytic Database (7.0.x) Page 473 of 1539 SQL Reference Manual SQL Functions Parameters expression (VARCHAR) is the string to convert. Notes The following syntax converts an IPv4 address represented as the string A to an integer I. 
INET_ ATON trims any spaces from the right of A, calls the Linux function inet_pton, and converts the result from network byte order to host byte order using ntohl. INET_ATON(VARCHAR A) -> INT8 I If A is NULL, too long, or inet_pton returns an error, the result is NULL. Examples The generated number is always in host byte order. In the following example, the number is calculated as 209×256^3 + 207×256^2 + 224×256 + 40. > SELECT INET_ATON('209.207.224.40'); inet_aton -----------3520061480 (1 row) > SELECT INET_ATON('1.2.3.4'); inet_aton ----------16909060 (1 row) > SELECT TO_HEX(INET_ATON('1.2.3.4')); to_hex --------1020304 (1 row) See Also l INET_NTOA INET_NTOA Returns the dotted-quad representation of the address as a VARCHAR, given a network address as an integer in network byte order. HP Vertica Analytic Database (7.0.x) Page 474 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax INET_NTOA ( expression ) Parameters expression (INTEGER) is the network address to convert. Notes The following syntax converts an IPv4 address represented as integer I to a string A. INET_NTOA converts I from host byte order to network byte order using htonl, and calls the Linux function inet_ntop. INET_NTOA(INT8 I) -> VARCHAR A If I is NULL, greater than 2^32 or negative, the result is NULL. Examples > SELECT INET_NTOA(16909060); inet_ntoa ----------1.2.3.4 (1 row) > SELECT INET_NTOA(03021962); inet_ntoa ------------0.46.28.138 (1 row) See Also l INET_ATON INITCAP Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the HP Vertica Analytic Database (7.0.x) Page 475 of 1539 SQL Reference Manual SQL Functions encoding. Prior to Release 5.1, the behavior type of this function was stable. Capitalizes first letter of each alphanumeric word and puts the rest in lowercase. Behavior Type Immutable Syntax INITCAP ( expression ) Parameters expression (VARCHAR) is the string to format. Notes l Depends on collation setting of the locale. l INITCAP is restricted to 32750 octet inputs, since it is possible for the UTF-8 representation of result to double in size. Examples Expression Result SELECT INITCAP('high speed database'); High Speed Database SELECT INITCAP('LINUX TUTORIAL'); Linux Tutorial SELECT INITCAP('abc DEF 123aVC 124Btd,lAsT'); Abc Def 123Avc 124Btd,Last SELECT INITCAP(''); SELECT INITCAP(null); INITCAPB Capitalizes first letter of each alphanumeric word and puts the rest in lowercase. Multibyte characters are not converted and are skipped. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 476 of 1539 SQL Reference Manual SQL Functions Syntax INITCAPB ( expression ) Parameters expression (VARCHAR) is the string to format. Notes Depends on collation setting of the locale. Examples Expression Result SELECT INITCAPB('étudiant'); éTudiant SELECT INITCAPB('high speed database'); High Speed Database SELECT INITCAPB('LINUX TUTORIAL'); Linux Tutorial SELECT INITCAPB('abc DEF 123aVC 124Btd,lAsT'); Abc Def 123Avc 124Btd,Last SELECT INITCAPB(''); SELECT INITCAPB(null); INSERT Inserts a character string into a specified location in another character string. Syntax INSERT( 'string1', n, m, 'string2'); Behavior Type Immutable Parameters string1 (VARCHAR) Is the string in which to insert the new string. 
HP Vertica Analytic Database (7.0.x) Page 477 of 1539 SQL Reference Manual SQL Functions n A character of type INTEGER that represents the starting point for the insertion within string1. You specify the number of characters from the first character in string1 as the starting point for the insertion. For example, to insert characters before "c", in the string "abcdef," enter 3. m A character of type INTEGER that represents the the number of characters in string1 (if any) that should be replaced by the insertion. For example,if you want the insertion to replace the letters "cd" in the string "abcdef, " enter 2. string2 (VARCHAR) Is the string to be inserted. Example The following example changes the string Warehouse to Storehouse using the INSERT function: => SELECT INSERT ('Warehouse',1,3,'Stor'); INSERT -----------Storehouse (1 row) INSTR Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the encoding. Prior to Release 5.1, the behavior type of this function was stable. Searches string for substring and returns an integer indicating the position of the character in string that is the first character of this occurrence. The return value is based on the character position of the identified character. Behavior Type Immutable Syntax INSTR ( string , substring [, position [, occurrence ] ] ) Parameters string (CHAR or VARCHAR, or BINARY or VARBINARY) Text expression to search. substring (CHAR or VARCHAR, or BINARY or VARBINARY) String to search for. HP Vertica Analytic Database (7.0.x) Page 478 of 1539 SQL Reference Manual SQL Functions position Nonzero integer indicating the character of string where HP Vertica begins the search. If position is negative, then HP Vertica counts backward from the end of string and then searches backward from the resulting position. The first character of string occupies the default position 1, and position cannot be 0. occurrence Integer indicating which occurrence of string HP Vertica searches. The value of occurrence must be positive (greater than 0), and the default is 1. Notes Both position and occurrence must be of types that can resolve to an integer. The default values of both parameters are 1, meaning HP Vertica begins searching at the first character of string for the first occurrence of substring. The return value is relative to the beginning of string, regardless of the value of position, and is expressed in characters. If the search is unsuccessful (that is, if substring does not appear occurrence times after the position character of string, the return value is 0. Examples The first example searches forward in string ‘abc’ for substring ‘b’. The search returns the position in ‘abc’ where ‘b’ occurs, or position 2. Because no position parameters are given, the default search starts at ‘a’, position 1. SELECT INSTR('abc', 'b'); INSTR ------2 (1 row) The following three examples use character position to search backward to find the position of a substring. Note: Although it might seem intuitive that the function returns a negative integer, the position of n occurrence is read left to right in the sting, even though the search happens in reverse (from the end—or right side—of the string). In the first example, the function counts backward one character from the end of the string, starting with character ‘c’. 
The function then searches backward for the first occurrence of ‘a’, which it finds it in the first position in the search string. SELECT INSTR('abc', 'a', -1); INSTR ------1 (1 row) HP Vertica Analytic Database (7.0.x) Page 479 of 1539 SQL Reference Manual SQL Functions In the second example, the function counts backward one byte from the end of the string, starting with character ‘c’. The function then searches backward for the first occurrence of ‘a’, which it finds it in the first position in the search string. SELECT INSTR(VARBINARY 'abc', VARBINARY 'a', -1); INSTR ------1 (1 row) In the third example, the function counts backward one character from the end of the string, starting with character ‘b’, and searches backward for substring ‘bc’, which it finds in the second position of the search string. SELECT INSTR('abcb', 'bc', -1); INSTR ------2 (1 row) In the fourth example, the function counts backward one character from the end of the string, starting with character ‘b’, and searches backward for substring ‘bcef’, which it does not find. The result is 0. SELECT INSTR('abcb', 'bcef', -1); INSTR ------0 (1 row) In the fifth example, the function counts backward one byte from the end of the string, starting with character ‘b’, and searches backward for substring ‘bcef’, which it does not find. The result is 0. SELECT INSTR(VARBINARY 'abcb', VARBINARY 'bcef', -1); INSTR ------0 (1 row) Multibyte characters are treated as a single character: dbadmin=> SELECT INSTR('aébc', 'b'); INSTR ------3 (1 row) Use INSTRB to treat multibyte characters as binary: dbadmin=> SELECT INSTRB('aébc', 'b'); HP Vertica Analytic Database (7.0.x) Page 480 of 1539 SQL Reference Manual SQL Functions INSTRB -------4 (1 row) INSTRB Searches string for substring and returns an integer indicating the octet position within string that is the first occurrence. The return value is based on the octet position of the identified byte. Behavior Type Immutable Syntax INSTRB ( string , substring [, position [, occurrence ] ] ) Parameters string Is the text expression to search. substring Is the string to search for. position Is a nonzero integer indicating the character of string where HP Vertica begins the search. If position is negative, then HP Vertica counts backward from the end of string and then searches backward from the resulting position. The first byte of string occupies the default position 1, and position cannot be 0. occurrence Is an integer indicating which occurrence of string HP Vertica searches. The value of occurrence must be positive (greater than 0), and the default is 1. Notes Both position and occurrence must be of types that can resolve to an integer. The default values of both parameters are 1, meaning HP Vertica begins searching at the first byte of string for the first occurrence of substring. The return value is relative to the beginning of string, regardless of the value of position, and is expressed in octets. If the search is unsuccessful (that is, if substring does not appear occurrence times after the position character of string, then the return value is 0. Example SELECT INSTRB('straße', 'ß'); HP Vertica Analytic Database (7.0.x) Page 481 of 1539 SQL Reference Manual SQL Functions INSTRB -------5 (1 row) See Also l INSTR ISUTF8 Tests whether a string is a valid UTF-8 string. Returns true if the string conforms to UTF-8 standards, and false otherwise. 
This function is useful to test strings for UTF-8 compliance before passing them to one of the regular expression functions, such as REGEXP_LIKE, which expect UTF-8 characters by default.

ISUTF8 checks for invalid UTF-8 byte sequences, according to UTF-8 rules:

l invalid bytes
l an unexpected continuation byte
l a start byte not followed by enough continuation bytes
l an overlong encoding

The presence of an invalid UTF-8 byte sequence results in a return value of false.

Syntax

ISUTF8( string );

Parameters

string The string to test for UTF-8 compliance.

Examples

=> SELECT ISUTF8(E'\xC2\xBF'); -- UTF-8 INVERTED QUESTION MARK
 ISUTF8
--------
 t
(1 row)

=> SELECT ISUTF8(E'\xC2\xC0'); -- UNDEFINED UTF-8 CHARACTER
 ISUTF8
--------
 f
(1 row)

LEAST

Returns the smallest value in a list of expressions.

Behavior Type

Stable

Syntax

LEAST ( expression1, expression2, ... expression-n );

Parameters

expression1, expression2, and expression-n are the expressions to be evaluated.

Notes

l Works for all data types, and implicitly casts similar types. See Examples below.
l A NULL value in any one of the expressions returns NULL.

Examples

This example returns 5 as the least:

SELECT LEAST(7, 5, 9);
 least
-------
     5
(1 row)

Putting quotes around the integer expressions returns the same result as the first example:

SELECT LEAST('7', '5', '9');
 least
-------
 5
(1 row)

In the above example, the values are compared as strings, so '10' would be less than '2'.

The next example returns 1.5, as INTEGER 2 is implicitly cast to FLOAT:

SELECT LEAST(2, 1.5);
 least
-------
   1.5
(1 row)

The following example returns 'analytic' as the least:

SELECT LEAST('vertica', 'analytic', 'database');
  least
----------
 analytic
(1 row)

Notice this next command returns NULL:

SELECT LEAST('vertica', 'analytic', 'database', null);
 least
-------

(1 row)

And one more:

SELECT LEAST('sit', 'site', 'sight');
 least
-------
 sight
(1 row)

See Also

l GREATEST

LEASTB

Returns the function's least argument, using binary ordering, not UTF-8 character ordering.

Behavior Type

Immutable

Syntax

LEASTB ( expression1, expression2, ... expression-n );

Parameters

expression1, expression2, and expression-n are the expressions to be evaluated.

Notes

l Works for all data types, and implicitly casts similar types. See Examples below.
l A NULL value in any one of the expressions returns NULL.

Examples

The following command selects strasse as the least in the series of inputs:

SELECT LEASTB('straße', 'strasse');
 LEASTB
---------
 strasse
(1 row)

This example returns 5 as the least:

SELECT LEASTB(7, 5, 9);
 LEASTB
--------
      5
(1 row)

Putting quotes around the integer expressions returns the same result as the first example:

SELECT LEASTB('7', '5', '9');
 LEASTB
--------
 5
(1 row)

In the above example, the values are compared as strings, so '10' would be less than '2'.
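For example, because the values are compared byte by byte, '10' sorts before '2' (expected result shown):

SELECT LEASTB('10', '2');
 LEASTB
--------
 10
(1 row)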
The next example returns 1.5, as INTEGER 2 is implicitly cast to FLOAT: SELECT LEASTB(2, 1.5); LEASTB -------1.5 (1 row) The following example returns 'analytic' as the least in the series of inputs: SELECT LEASTB('vertica', 'analytic', 'database'); HP Vertica Analytic Database (7.0.x) Page 485 of 1539 SQL Reference Manual SQL Functions LEASTB ---------analytic (1 row) Notice this next command returns NULL: SELECT LEASTB('vertica', 'analytic', 'database', null); LEASTB -------(1 row) See Also l GREATESTB LEFT Returns the specified characters from the left side of a string. Behavior Type Immutable Syntax LEFT ( string , length ) Parameters string (CHAR or VARCHAR) is the string to return. length Is an INTEGER value that specifies the count of characters to return. Examples SELECT LEFT('vertica', 3); left -----ver (1 row) SELECT LEFT('straße', 5); LEFT ------- HP Vertica Analytic Database (7.0.x) Page 486 of 1539 SQL Reference Manual SQL Functions straß (1 row) See Also l SUBSTR LENGTH The LENGTH() function: l Returns the string length in UTF-8 characters for CHAR and VARCHAR columns l Returns the string length in bytes (octets) for BINARY and VARBINARY columns l Strips the padding from CHAR expressions but not from VARCHAR expressions l Is is identical to CHARACTER_LENGTH for CHAR and VARCHAR. For binary types, LENGTH() is identical to OCTET_LENGTH. Behavior Type Immutable Syntax LENGTH ( expression ) Parameters expression (CHAR or VARCHAR or BINARY or VARBINARY) String to measure Examples Expression Result SELECT LENGTH('1234 '::CHAR(10)); 4 SELECT LENGTH('1234 '::VARCHAR(10)); 6 SELECT LENGTH('1234 '::BINARY(10)); 10 SELECT LENGTH('1234 '::VARBINARY(10)); 6 SELECT LENGTH(NULL::CHAR(10)) IS NULL; HP Vertica Analytic Database (7.0.x) t Page 487 of 1539 SQL Reference Manual SQL Functions See Also l BIT_LENGTH LOWER Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the encoding. Prior to Release 5.1, the behavior type of this function was stable. Returns a VARCHAR value containing the argument converted to lowercase letters. Behavior Type Immutable Syntax LOWER ( expression ) Parameters expression (CHAR or VARCHAR) String to convert Notes LOWER is restricted to 32750 octet inputs, since it is possible for the UTF-8 representation of result to double in size. Examples SELECT LOWER('AbCdEfG'); lower ---------abcdefg (1 row) SELECT LOWER('The Cat In The Hat'); lower -------------------the cat in the hat (1 row) dbadmin=> SELECT LOWER('ÉTUDIANT'); LOWER ---------- HP Vertica Analytic Database (7.0.x) Page 488 of 1539 SQL Reference Manual SQL Functions étudiant (1 row) LOWERB Returns a character string with each ASCII character converted to lowercase. Multibyte characters are not converted and are skipped. 
Behavior Type Immutable Syntax LOWERB ( expression ) Parameters expression (CHAR or VARCHAR) is the string to convert Examples In the following example, the multibyte UTF-8 character É is not converted to lowercase: SELECT LOWERB('ÉTUDIANT'); LOWERB ---------Étudiant (1 row) SELECT LOWER('ÉTUDIANT'); LOWER ---------étudiant (1 row) SELECT LOWERB('AbCdEfG'); LOWERB --------abcdefg (1 row) SELECT LOWERB('The Vertica Database'); LOWERB ---------------------the vertica database (1 row) HP Vertica Analytic Database (7.0.x) Page 489 of 1539 SQL Reference Manual SQL Functions LPAD Returns a VARCHAR value representing a string of a specific length filled on the left with specific characters. Behavior Type Immutable Syntax LPAD ( expression , length [ , fill ] ) Parameters expression (CHAR OR VARCHAR) specifies the string to fill length (INTEGER) specifies the number of characters to return fill (CHAR OR VARCHAR) specifies the repeating string of characters with which to fill the output string. The default is the space character. Examples SELECT LPAD('database', 15, 'xzy'); lpad ----------------xzyxzyxdatabase (1 row) If the string is already longer than the specified length it is truncated on the right: SELECT LPAD('establishment', 10, 'abc'); lpad -----------establishm (1 row) LTRIM Returns a VARCHAR value representing a string with leading blanks removed from the left side (beginning). Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 490 of 1539 SQL Reference Manual SQL Functions Syntax LTRIM ( expression [ , characters ] ) Parameters expression (CHAR or VARCHAR) is the string to trim characters (CHAR or VARCHAR) specifies the characters to remove from the left side of expression. The default is the space character. Examples SELECT LTRIM('zzzyyyyyyxxxxxxxxtrim', 'xyz'); LTRIM ------trim (1 row) See Also l BTRIM l RTRIM l TRIM MD5 Calculates the MD5 hash of string, returning the result as a VARCHAR string in hexadecimal. Behavior Type Immutable Syntax MD5 ( string ) Parameters string Is the argument string. HP Vertica Analytic Database (7.0.x) Page 491 of 1539 SQL Reference Manual SQL Functions Examples => SELECT MD5('123'); md5 ---------------------------------202cb962ac59075b964b07152d234b70 (1 row) => SELECT MD5('Vertica'::bytea); md5 ---------------------------------fc45b815747d8236f9f6fdb9c2c3f676 (1 row) OCTET_LENGTH Takes one argument as an input and returns the string length in octets for all string types. Behavior Type Immutable Syntax OCTET_LENGTH ( expression ) Parameters expression (CHAR or VARCHAR or BINARY or VARBINARY) is the string to measure. Notes l If the data type of expression is a CHAR, VARCHAR or VARBINARY, the result is the same as the actual length of expression in octets. For CHAR, the length does not include any trailing spaces. l If the data type of expression is BINARY, the result is the same as the fixed-length of expression. l If the value of expression is NULL, the result is NULL. 
Examples Expression HP Vertica Analytic Database (7.0.x) Result Page 492 of 1539 SQL Reference Manual SQL Functions SELECT OCTET_LENGTH(CHAR(10) '1234 '); 4 SELECT OCTET_LENGTH(CHAR(10) '1234'); 4 SELECT OCTET_LENGTH(CHAR(10) ' 6 1234'); SELECT OCTET_LENGTH(VARCHAR(10) '1234 '); 6 SELECT OCTET_LENGTH(VARCHAR(10) '1234 '); 5 SELECT OCTET_LENGTH(VARCHAR(10) '1234'); 4 SELECT OCTET_LENGTH(VARCHAR(10) ' 7 1234'); SELECT OCTET_LENGTH('abc'::VARBINARY); 3 SELECT OCTET_LENGTH(VARBINARY 'abc'); 3 SELECT OCTET_LENGTH(VARBINARY 'abc 5 '); SELECT OCTET_LENGTH(BINARY(6) 'abc'); 6 SELECT OCTET_LENGTH(VARBINARY ''); 0 SELECT OCTET_LENGTH(''::BINARY); 1 SELECT OCTET_LENGTH(null::VARBINARY); SELECT OCTET_LENGTH(null::BINARY); See Also l BIT_LENGTH l CHARACTER_LENGTH l LENGTH OVERLAY Returns a VARCHAR value representing a string having had a substring replaced by another string. Behavior Type Immutable if using OCTETS, Stable otherwise Syntax OVERLAY ( expression1 PLACING expression2 FROM position ... [ FOR extent ] ... [ USING { CHARACTERS | OCTETS } ] ) HP Vertica Analytic Database (7.0.x) Page 493 of 1539 SQL Reference Manual SQL Functions Parameters expression1 (CHAR or VARCHAR) is the string to process expression2 (CHAR or VARCHAR) is the substring to overlay position (INTEGER) is the character or octet position (counting from one) at which to begin the overlay extent (INTEGER) specifies the number of characters or octets to replace with the overlay USING CHARACTERS | OCTETS Determines whether OVERLAY uses characters (the default) or octets Examples SELECT OVERLAY('123456789' PLACING 'xxx' FROM 2); OVERLAY ----------1xxx56789 (1 row) SELECT OVERLAY('123456789' PLACING 'XXX' FROM 2 USING OCTETS); OVERLAY ----------1XXX56789 (1 row) SELECT OVERLAY('123456789' PLACING 'xxx' FROM 2 FOR 4); OVERLAY ---------1xxx6789 (1 row) SELECT OVERLAY('123456789' PLACING 'xxx' FROM 2 FOR 5); OVERLAY --------1xxx789 (1 row) SELECT OVERLAY('123456789' PLACING 'xxx' FROM 2 FOR 6); OVERLAY --------1xxx89 (1 row) OVERLAYB Returns an octet value representing a string having had a substring replaced by another string. HP Vertica Analytic Database (7.0.x) Page 494 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax OVERLAYB ( expression1, expression2, position [ , extent ] ) Parameters expression1 (CHAR or VARCHAR) is the string to process expression2 (CHAR or VARCHAR) is the substring to overlay position (INTEGER) is the octet position (counting from one) at which to begin the overlay extent (INTEGER) specifies the number of octets to replace with the overlay Notes The OVERLAYB function treats the multibyte character string as a string of octets (bytes) and use octet numbers as incoming and outgoing position specifiers and lengths. The strings themselves are type VARCHAR, but they treated as if each byte was a separate character. 
Examples SELECT OVERLAYB('123456789', 'ééé', 2); OVERLAYB ---------1ééé89 (1 row) SELECT OVERLAYB('123456789', 'ßßß', 2); OVERLAYB ---------1ßßß89 (1 row) SELECT OVERLAYB('123456789', 'xxx', 2); OVERLAYB ----------1xxx56789 (1 row) SELECT OVERLAYB('123456789', 'xxx', 2, 4); OVERLAYB ---------1xxx6789 (1 row) HP Vertica Analytic Database (7.0.x) Page 495 of 1539 SQL Reference Manual SQL Functions SELECT OVERLAYB('123456789', 'xxx', 2, 5); OVERLAYB --------1xxx789 (1 row) SELECT OVERLAYB('123456789', 'xxx', 2, 6); OVERLAYB --------1xxx89 (1 row) POSITION Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the encoding. Prior to Release 5.1, the behavior type of this function was stable. Returns an INTEGER value representing the character location of a specified substring with a string (counting from one). Behavior Type Immutable Syntax 1 POSITION ( substring IN string [ USING { CHARACTERS | OCTETS } ] ) Parameters substring (CHAR or VARCHAR) is the substring to locate string (CHAR or VARCHAR) is the string in which to locate the substring USING CHARACTERS | OCTETS Determines whether the position is reported by using characters (the default) or octets. Syntax 2 POSITION ( substring IN string ) Parameters substring (VARBINARY) is the substring to locate string (VARBINARY) is the string in which to locate the substring HP Vertica Analytic Database (7.0.x) Page 496 of 1539 SQL Reference Manual SQL Functions Notes l When the string and substring are CHAR or VARCHAR, the return value is based on either the character or octet position of the substring. l When the string and substring are VARBINARY, the return value is always based on the octet position of the substring. l The string and substring must be consistent. Do not mix VARBINARY with CHAR or VARCHAR. Examples SELECT POSITION('é' IN 'étudiant' USING CHARACTERS); position ---------1 (1 row) SELECT POSITION('ß' IN 'straße' USING OCTETS); position ---------5 (1 row) SELECT POSITION('c' IN 'abcd' USING CHARACTERS); position ---------3 (1 row) SELECT POSITION(VARBINARY '456' IN VARBINARY '123456789'); position ---------4 (1 row) SELECT POSITION('n' in 'León') as 'default', POSITIONB('León', 'n') as 'POSITIONB', POSITION('n' in 'León' USING CHARACTERS) as 'pos_chars', POSITION('n' in 'León' USING OCTETS) as 'pos_oct',INSTR('León','n'), INSTRB('León','n'),REGEXP_INSTR('León','n'); -[ RECORD 1 ]+-default | 4 POSITIONB | 5 pos_chars | 4 pos_oct | 5 INSTR | 4 INSTRB | 5 REGEXP_INSTR | 4 HP Vertica Analytic Database (7.0.x) Page 497 of 1539 SQL Reference Manual SQL Functions POSITIONB Returns an INTEGER value representing the octet location of a specified substring with a string (counting from one). Behavior Type Immutable Syntax POSITIONB ( string, substring ) Parameters string (CHAR or VARCHAR) is the string in which to locate the substring substring (CHAR or VARCHAR) is the substring to locate Examples SELECT POSITIONB('straße', 'ße'); POSITIONB ----------5 (1 row) SELECT POSITIONB('étudiant', 'é'); position ---------1 (1 row) QUOTE_IDENT Returns the given string, suitably quoted, to be used as an identifier in a SQL statement string. Quotes are added only if necessary; that is, if the string contains non-identifier characters, is a SQL keyword, such as '1time', 'Next week' and 'Select'. Embedded double quotes are doubled. 
Behavior Type Immutable Syntax QUOTE_IDENT( string ) HP Vertica Analytic Database (7.0.x) Page 498 of 1539 SQL Reference Manual SQL Functions Parameters string Is the argument string. Notes l SQL identifiers, such as table and column names, are stored as created, and references to them are resolved using case-insensitive compares. Thus, you do not need to double-quote mixedcase identifiers. l HP Vertica quotes all currently-reserved keywords, even those not currently being used. Examples Quoted identifiers are case-insensitive, and HP Vertica does not supply the quotes: SELECT QUOTE_IDENT('VErtIcA'); QUOTE_IDENT ------------VErtIcA (1 row) SELECT QUOTE_IDENT('Vertica database'); QUOTE_IDENT -------------------"Vertica database" (1 row) Embedded double quotes are doubled: SELECT QUOTE_IDENT('Vertica "!" database'); QUOTE_IDENT ------------------------"Vertica ""!"" database" (1 row) The following example uses the SQL keyword, SELECT; results are double quoted: SELECT QUOTE_IDENT('select'); QUOTE_IDENT ------------"select" (1 row) QUOTE_LITERAL Returns the given string, suitably quoted, to be used as a string literal in a SQL statement string. Embedded single quotes and backslashes are doubled. HP Vertica Analytic Database (7.0.x) Page 499 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax QUOTE_LITERAL ( string ) Parameters string Is the argument string. Notes HP Vertica recognizes two consecutive single quotes within a string literal as one single quote character. For example, 'You''re here!'. This is the SQL standard representation and is preferred over the form, 'You\'re here!', as backslashes are not parsed as before. Examples SELECT QUOTE_LITERAL('You''re here!'); QUOTE_LITERAL ----------------'You''re here!' (1 row) SELECT QUOTE_LITERAL('You\'re here!'); WARNING: nonstandard use of \' in a string literal at character 22 HINT: Use '' to write quotes in strings, or use the escape string syntax (E'\''). See Also l Character String Literals REPEAT Returns a VARCHAR or VARBINARY value that repeats the given value COUNT times, given a value and a count this function. If the return value is truncated the given value might not be repeated count times, and the last occurrence of the given value might be truncated. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 500 of 1539 SQL Reference Manual SQL Functions Syntax REPEAT ( string , repetitions ) Parameters string (CHAR or VARCHAR or BINARY or VARBINARY) is the string to repeat repetitions (INTEGER) is the number of times to repeat the string Notes If the repetitions field depends on the contents of a column (is not a constant), then the repeat operator maximum length is 65000 bytes. You can add a cast of the repeat to cast the result down to a size big enough for your purposes (reflects the actual maximum size) so you can do other things with the result. REPEAT () and || check for result strings longer than 65000. REPEAT () silently truncates to 65000 octets, and || reports an error (including the octet length). Examples The following example repeats vmart three times: SELECT REPEAT ('vmart', 3); repeat ----------------vmartvmartvmart (1 row) If you run the following example, you get an error message: SELECT '123456' || REPEAT('a', colx); ERROR: Operator || may give a 65006-byte Varchar result; the limit is 65000 bytes. 
If you know that colx can never be greater than 3, the solution is to add a cast (::VARCHAR(3)): SELECT '123456' || REPEAT('a', colx)::VARCHAR(3); If colx is greater than 3, the repeat is truncated to exactly three instalnce of a. REPLACE Replaces all occurrences of characters in a string with another set of characters. HP Vertica Analytic Database (7.0.x) Page 501 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax REPLACE ( string , target , replacement ) Parameters string (CHAR OR VARCHAR) is the string to which to perform the replacement target (CHAR OR VARCHAR) is the string to replace replacement (CHAR OR VARCHAR) is the string with which to replace the target Examples SELECT REPLACE('Documentation%20Library', '%20', ' '); replace ----------------------Documentation Library (1 row) SELECT REPLACE('This & That', '&', 'and'); replace --------------This and That (1 row) SELECT REPLACE('straße', 'ß', 'ss'); REPLACE --------strasse (1 row) RIGHT Returns the specified characters from the right side of a string. Behavior Type Immutable Syntax RIGHT ( string , length ) HP Vertica Analytic Database (7.0.x) Page 502 of 1539 SQL Reference Manual SQL Functions Parameters string (CHAR or VARCHAR) is the string to return. length Is an INTEGER value that specifies the count of characters to return. Examples The following command returns the last three characters of the string 'vertica': SELECT RIGHT('vertica', 3);| right ------ica (1 row) The following command returns the last two characters of the string 'straße': SELECT RIGHT('straße', 2); RIGHT ------ße (1 row) See Also l SUBSTR RPAD Returns a VARCHAR value representing a string of a specific length filled on the right with specific characters. Behavior Type Immutable Syntax RPAD ( expression , length [ , fill ] ) Parameters expression (CHAR OR VARCHAR) specifies the string to fill HP Vertica Analytic Database (7.0.x) Page 503 of 1539 SQL Reference Manual SQL Functions length (INTEGER) specifies the number of characters to return fill (CHAR OR VARCHAR) specifies the repeating string of characters with which to fill the output string. The default is the space character. Examples SELECT RPAD('database', 15, 'xzy'); rpad ----------------databasexzyxzyx (1 row) If the string is already longer than the specified length it is truncated on the right: SELECT RPAD('database', 6, 'xzy'); rpad -------databa (1 row) RTRIM Returns a VARCHAR value representing a string with trailing blanks removed from the right side (end). Behavior Type Immutable Syntax RTRIM ( expression [ , characters ] ) Parameters expression (CHAR or VARCHAR) is the string to trim characters (CHAR or VARCHAR) specifies the characters to remove from the right side of expression. The default is the space character. Examples SELECT RTRIM('trimzzzyyyyyyxxxxxxxx', 'xyz'); HP Vertica Analytic Database (7.0.x) Page 504 of 1539 SQL Reference Manual SQL Functions ltrim ------trim (1 row) See Also l BTRIM l LTRIM l TRIM SPACE Inserts blank spaces into a specified location within a character string. Behavior Type Immutable Syntax SELECT INSERT( 'string1', || SPACE (n) || 'string2'); Parameters string1 (VARCHAR) Is the string after which to insert the space. n A character of type INTEGER that represents the number of spaces to insert. string2 ( VARCHAR) Is the remainder of the string that appears after the inserted spaces Example The following example inserts 10 spaces between the strings 'x' and 'y': SELECT 'x' || SPACE(10) || 'y'; ?column? 
-------------x y (1 row) HP Vertica Analytic Database (7.0.x) Page 505 of 1539 SQL Reference Manual SQL Functions SPLIT_PART Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the encoding. Prior to Release 5.1, the behavior type of this function was stable. Splits string on the delimiter and returns the location of the beginning of the given field (counting from one). Behavior Type Immutable Syntax SPLIT_PART ( string , delimiter , field ) Parameters string Is the argument string. delimiter Is the given delimiter. field (INTEGER) is the number of the part to return. Notes Use this with the character form of the subfield. Examples The specified integer of 2 returns the second string, or def. SELECT SPLIT_PART('abc~@~def~@~ghi', '~@~', 2); SPLIT_PART -----------def (1 row) In the next example, specify 3, which returns the third string, or 789. SELECT SPLIT_PART('123~|~456~|~789', '~|~', 3); SPLIT_PART -----------789 (1 row) The tildes are for readability only. Omitting them returns the same results: HP Vertica Analytic Database (7.0.x) Page 506 of 1539 SQL Reference Manual SQL Functions SELECT SPLIT_PART('123|456|789', '|', 3); SPLIT_PART -----------789 (1 row) See what happens if you specify an integer that exceeds the number of strings: No results. SELECT SPLIT_PART('123|456|789', '|', 4); SPLIT_PART -----------(1 row) The previous result is not null, it is an empty string. SELECT SPLIT_PART('123|456|789', '|', 4) IS NULL; ?column? ---------f (1 row) If SPLIT_PART had returned NULL, LENGTH would have returned null. SELECT LENGTH (SPLIT_PART('123|456|789', '|', 4)); LENGTH -------0 (1 row) SPLIT_PARTB Splits string on the delimiter and returns the location of the beginning of the given field (counting from one). The VARCHAR arguments are treated as octets rather than UTF-8 characters. Behavior Type Immutable Syntax SPLIT_PARTB ( string , delimiter , field ) Parameters string (VARCHAR) Is the argument string. delimiter (VARCHAR) Is the given delimiter. field (INTEGER) is the number of the part to return. HP Vertica Analytic Database (7.0.x) Page 507 of 1539 SQL Reference Manual SQL Functions Notes Use this function with the character form of the subfield. Examples The specified integer of 3 returns the third string, or soupçon. SELECT SPLIT_PARTB('straße~@~café~@~soupçon', '~@~', 3); SPLIT_PARTB ------------soupçon (1 row) The tildes are for readability only. Omitting them returns the same results: SELECT SPLIT_PARTB('straße @ café @ soupçon', '@', 3); SPLIT_PARTB ------------soupçon (1 row) See what happens if you specify an integer that exceeds the number of strings: No results. SELECT SPLIT_PARTB('straße @ café @ soupçon', '@', 4); SPLIT_PARTB ------------(1 row) The above result is not null, it is an empty string. SELECT SPLIT_PARTB('straße @ café @ soupçon', '@', 4) IS NULL; ?column? ---------f (1 row) STRPOS Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the encoding. Prior to Release 5.1, the behavior type of this function was stable. Returns an INTEGER value representing the character location of a specified substring within a string (counting from one). 
Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 508 of 1539 SQL Reference Manual SQL Functions Syntax STRPOS ( string , substring ) Parameters string (CHAR or VARCHAR) is the string in which to locate the substring substring (CHAR or VARCHAR) is the substring to locate Notes STRPOS is identical to POSITION except for the order of the arguments. Examples SELECT STRPOS('abcd','c'); strpos -------3 (1 row) STRPOSB Returns an INTEGER value representing the location of a specified substring within a string, counting from one, where each octet in the string is counted (as opposed to characters). Behavior Type Immutable Syntax STRPOSB ( string , substring ) Parameters string (CHAR or VARCHAR) is the string in which to locate the substring substring (CHAR or VARCHAR) is the substring to locate Notes STRPOSB is identical to POSITIONB except for the order of the arguments. HP Vertica Analytic Database (7.0.x) Page 509 of 1539 SQL Reference Manual SQL Functions Examples => SELECT STRPOSB('straße', 'e'); STRPOSB --------7 (1 row) => SELECT STRPOSB('étudiant', 'tud'); STRPOSB --------3 (1 row) SUBSTR Returns VARCHAR or VARBINARY value representing a substring of a specified string. Behavior Type Immutable Syntax SUBSTR ( string , position [ , extent ] ) Parameters string (CHAR/VARCHAR or BINARY/VARBINARY) is the string from which to extract a substring. position (INTEGER or DOUBLE PRECISION) is the starting position of the substring (counting from one by characters). extent (INTEGER or DOUBLE PRECISION) is the length of the substring to extract (in characters). The default is the end of the string. Notes SUBSTR truncates DOUBLE PRECISION input values. Examples => SELECT SUBSTR('abc'::binary(3),1); HP Vertica Analytic Database (7.0.x) Page 510 of 1539 SQL Reference Manual SQL Functions substr -------abc (1 row) => SELECT SUBSTR('123456789', 3, 2); substr -------34 (1 row) => SELECT SUBSTR('123456789', 3); substr --------3456789 (1 row) => SELECT SUBSTR(TO_BITSTRING(HEX_TO_BINARY('0x10')), 2, 2); substr -------00 (1 row) => SELECT SUBSTR(TO_HEX(10010), 2, 2); substr -------71 (1 row) SUBSTRB Returns an octet value representing the substring of a specified string. Behavior Type Immutable Syntax SUBSTRB ( string , position [ , extent ] ) Parameters string (CHAR/VARCHAR) is the string from which to extract a substring. position (INTEGER or DOUBLE PRECISION) is the starting position of the substring (counting from one in octets). extent (INTEGER or DOUBLE PRECISION) is the length of the substring to extract (in octets). The default is the end of the string HP Vertica Analytic Database (7.0.x) Page 511 of 1539 SQL Reference Manual SQL Functions Notes l This function treats the multibyte character string as a string of octets (bytes) and uses octet numbers as incoming and outgoing position specifiers and lengths. The strings themselves are type VARCHAR, but they treated as if each octet were a separate character. l SUBSTRB truncates DOUBLE PRECISION input values. Examples => SELECT SUBSTRB('soupçon', 5); SUBSTRB --------çon (1 row) => SELECT SUBSTRB('soupçon', 5, 2); SUBSTRB --------ç (1 row) HP Vertica returns the following error message if you use BINARY/VARBINARY: =>SELECT SUBSTRB('abc'::binary(3),1); ERROR: function substrb(binary, int) does not exist, or permission is denied for substrb( binary, int) HINT: No function matches the given name and argument types. You may need to add explicit type casts. 
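To see the character-versus-octet difference side by side, the following sketch reuses the 'soupçon' string from the examples above; the expected results are consistent with the octet-based SUBSTRB example just shown ('ço' by characters, 'ç' by octets):

=> SELECT SUBSTR('soupçon', 5, 2) AS by_char, SUBSTRB('soupçon', 5, 2) AS by_octet;
 by_char | by_octet
---------+----------
 ço      | ç
(1 row)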
SUBSTRING Returns a value representing a substring of the specified string at the given position, given a value, a position, and an optional length. Behavior Type Immutable if USING OCTETS, stable otherwise. Syntax SUBSTRING ( string , position [ , length ] ... [USING {CHARACTERS | OCTETS } ] ) SUBSTRING ( string FROM position [ FOR length ] ... [USING { CHARACTERS | OCTETS } ] ) HP Vertica Analytic Database (7.0.x) Page 512 of 1539 SQL Reference Manual SQL Functions Parameters string (CHAR/VARCHAR or BINARY/VARBINARY) is the string from which to extract a substring position (INTEGER or DOUBLE PRECISION) is the starting position of the substring (counting from one by either characters or octets). (The default is characters.) If position is greater than the length of the given value, an empty value is returned. length (INTEGER or DOUBLE PRECISION) is the length of the substring to extract in either characters or octets. (The default is characters.) The default is the end of the string.If a length is given the result is at most that many bytes. The maximum length is the length of the given value less the given position. If no length is given or if the given length is greater than the maximum length then the length is set to the maximum length. USING CHARACTERS | OCTETS Determines whether the value is expressed in characters (the default) or octets. Notes l SUBSTRING truncates DOUBLE PRECISION input values. l Neither length nor position can be negative, and the position cannot be zero because it is one based. If these forms are violated, the system returns an error: SELECT SUBSTRING('ab'::binary(2), -1, 2); ERROR: negative or zero substring start position not allowed Examples => SELECT SUBSTRING('abc'::binary(3),1); SUBSTRING ----------abc (1 row) => SELECT SUBSTRING('soupçon', 5, 2 USING CHARACTERS); SUBSTRING ----------ço (1 row) => SELECT SUBSTRING('soupçon', 5, 2 USING OCTETS); SUBSTRB --------ç HP Vertica Analytic Database (7.0.x) Page 513 of 1539 SQL Reference Manual SQL Functions (1 row) TO_BITSTRING Returns a VARCHAR that represents the given VARBINARY value in bitstring format. Behavior Type Immutable Syntax TO_BITSTRING ( expression ) Parameters expression (VARCHAR) is the string to return. Notes VARCHAR TO_BITSTRING(VARBINARY) converts data from binary type to character type (where the character representation is the bitstring format). This function is the inverse of BITSTRING_TO_BINARY: TO_BITSTRING(BITSTRING_TO_BINARY(x)) = x) BITSTRING_TO_BINARY(TO_BITSTRING(x)) = x) Examples SELECT TO_BITSTRING('ab'::BINARY(2)); to_bitstring -----------------0110000101100010 (1 row) SELECT TO_BITSTRING(HEX_TO_BINARY('0x10')); to_bitstring -------------00010000 (1 row) SELECT TO_BITSTRING(HEX_TO_BINARY('0xF0')); to_bitstring -------------- HP Vertica Analytic Database (7.0.x) Page 514 of 1539 SQL Reference Manual SQL Functions 11110000 (1 row) See Also l BITCOUNT l BITSTRING_TO_BINARY TO_HEX Returns a VARCHAR or VARBINARY representing the hexadecimal equivalent of a number. Behavior Type Immutable Syntax TO_HEX ( number ) Parameters number (INTEGER) is the number to convert to hexadecimal Notes VARCHAR TO_HEX(INTEGER) and VARCHAR TO_HEX(VARBINARY) are similar. The function converts data from binary type to character type (where the character representation is in hexadecimal format). This function is the inverse of HEX_TO_BINARY. 
TO_HEX(HEX_TO_BINARY(x)) = x); HEX_TO_BINARY(TO_HEX(x)) = x); Examples SELECT TO_HEX(123456789); TO_HEX --------75bcd15 (1 row) For VARBINARY inputs, the returned value is not preceded by "0x". For example: HP Vertica Analytic Database (7.0.x) Page 515 of 1539 SQL Reference Manual SQL Functions SELECT TO_HEX('ab'::binary(2)); TO_HEX -------6162 (1 row) TRANSLATE Replaces individual characters in string_to_replace with other characters. Behavior Type Immutable Syntax TRANSLATE ( string_to_replace , from_string , to_string ); Parameters string_to_replace String to be translated. from_string Contains characters that should be replaced in string_to_replace. to_string Any character in string_to_replace that matches a character in from_string is replaced by the corresponding character in to_string. Example SELECT TRANSLATE('straße', 'ß', 'ss'); TRANSLATE ----------strase (1 row) TRIM Combines the BTRIM, LTRIM, and RTRIM functions into a single function. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 516 of 1539 SQL Reference Manual SQL Functions Syntax TRIM ( [ [ LEADING | TRAILING | BOTH ] characters FROM ] expression ) Parameters LEADING Removes the specified characters from the left side of the string TRAILING Removes the specified characters from the right side of the string BOTH Removes the specified characters from both sides of the string (default) characters (CHAR or VARCHAR) specifies the characters to remove from expression. The default is the space character. expression (CHAR or VARCHAR) is the string to trim Examples SELECT '-' || TRIM(LEADING 'x' FROM 'xxdatabasexx') || '-'; ?column? --------------databasexx(1 row) SELECT '-' || TRIM(TRAILING 'x' FROM 'xxdatabasexx') || '-'; ?column? --------------xxdatabase(1 row) SELECT '-' || TRIM(BOTH 'x' FROM 'xxdatabasexx') || '-'; ?column? ------------database(1 row) SELECT '-' || TRIM('x' FROM 'xxdatabasexx') || '-'; ?column? ------------database(1 row) SELECT '-' || TRIM(LEADING FROM ' ?column? --------------database (1 row) SELECT '-' || TRIM(' ?column? ------------database(1 row) database HP Vertica Analytic Database (7.0.x) database ') || '-'; ') || '-'; Page 517 of 1539 SQL Reference Manual SQL Functions See Also l BTRIM l LTRIM l RTRIM UPPER Starting in Release 5.1, this function treats the string argument as a UTF-8 encoded string, rather than depending on the collation setting of the locale (for example, collation=binary) to identify the encoding. Prior to Release 5.1, the behavior type of this function was stable. Returns a VARCHAR value containing the argument converted to uppercase letters. Behavior Type Immutable Syntax UPPER ( expression ) Parameters expression (CHAR or VARCHAR) is the string to convert Notes UPPER is restricted to 32750 octet inputs, since it is possible for the UTF-8 representation of result to double in size. Examples => SELECT UPPER('AbCdEfG'); UPPER ---------ABCDEFG (1 row) => SELECT UPPER('étudiant'); UPPER ---------ÉTUDIANT (1 row) HP Vertica Analytic Database (7.0.x) Page 518 of 1539 SQL Reference Manual SQL Functions UPPERB Returns a character string with each ASCII character converted to uppercase. Multibyte characters are not converted and are skipped. 
Behavior Type Immutable Syntax UPPERB ( expression ) Parameters expression (CHAR or VARCHAR) is the string to convert Examples In the following example, the multibyte UTF-8 character é is not converted to uppercase: => SELECT UPPERB('étudiant'); UPPERB ---------éTUDIANT (1 row) => SELECT UPPERB('AbCdEfG'); UPPERB --------ABCDEFG (1 row) => SELECT UPPERB('The Vertica Database'); UPPERB ---------------------THE VERTICA DATABASE (1 row) V6_ATON Converts an IPv6 address represented as a character string to a binary string. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 519 of 1539 SQL Reference Manual SQL Functions Syntax V6_ATON ( expression ) Parameters expression (VARCHAR) is the string to convert. Notes The following syntax converts an IPv6 address represented as the character string A to a binary string B. V6_ATON trims any spaces from the right of A and calls the Linux function inet_pton. V6_ATON(VARCHAR A) -> VARBINARY(16) B If A has no colons it is prepended with '::ffff:'. If A is NULL, too long, or if inet_pton returns an error, the result is NULL. Examples SELECT V6_ATON('2001:DB8::8:800:200C:417A'); v6_aton -----------------------------------------------------\001\015\270\000\000\000\000\000\010\010\000 \014Az (1 row) SELECT V6_ATON('1.2.3.4'); v6_aton -----------------------------------------------------------------\000\000\000\000\000\000\000\000\000\000\377\377\001\002\003\004 (1 row) SELECT TO_HEX(V6_ATON('2001:DB8::8:800:200C:417A')); to_hex ---------------------------------20010db80000000000080800200c417a (1 row) SELECT V6_ATON('::1.2.3.4'); v6_aton -----------------------------------------------------------------\000\000\000\000\000\000\000\000\000\000\000\000\001\002\003\004 (1 row) HP Vertica Analytic Database (7.0.x) Page 520 of 1539 SQL Reference Manual SQL Functions See Also l V6_NTOA V6_NTOA Converts an IPv6 address represented as varbinary to a character string. Behavior Type Immutable Syntax V6_NTOA ( expression ) Parameters expression (VARBINARY) is the binary string to convert. Notes The following syntax converts an IPv6 address represented as VARBINARY B to a string A. V6_NTOA right-pads B to 16 bytes with zeros, if necessary, and calls the Linux function inet_ntop. V6_NTOA(VARBINARY B) -> VARCHAR A If B is NULL or longer than 16 bytes, the result is NULL. HP Vertica automatically converts the form '::ffff:1.2.3.4' to '1.2.3.4'. Examples > SELECT V6_NTOA(' \001\015\270\000\000\000\000\000\010\010\000 \014Az'); v6_ntoa --------------------------2001:db8::8:800:200c:417a (1 row) > SELECT V6_NTOA(V6_ATON('1.2.3.4')); v6_ntoa --------1.2.3.4 (1 row) HP Vertica Analytic Database (7.0.x) Page 521 of 1539 SQL Reference Manual SQL Functions > SELECT V6_NTOA(V6_ATON('::1.2.3.4')); v6_ntoa ----------::1.2.3.4 (1 row) See Also l V6_ATON V6_SUBNETA Calculates a subnet address in CIDR (Classless Inter-Domain Routing) format from a binary or alphanumeric IPv6 address. Behavior Type Immutable Syntax V6_SUBNETA ( expression1, expression2 ) Parameters expression1 (VARBINARY or VARCHAR) is the string to calculate. expression2 (INTEGER) is the size of the subnet. Notes The following syntax calculates a subnet address in CIDR format from a binary or varchar IPv6 address. V6_SUBNETA masks a binary IPv6 address B so that the N leftmost bits form a subnet address, while the remaining rightmost bits are cleared. It then converts to an alphanumeric IPv6 address, appending a slash and N. 
V6_SUBNETA(BINARY B, INT8 N) -> VARCHAR C

The following syntax calculates a subnet address in CIDR format from an alphanumeric IPv6 address.

V6_SUBNETA(VARCHAR A, INT8 N) -> V6_SUBNETA(V6_ATON(A), N) -> VARCHAR C

Examples

> SELECT V6_SUBNETA(V6_ATON('2001:db8::8:800:200c:417a'), 28);
  v6_subneta
---------------
 2001:db0::/28
(1 row)

See Also

l V6_SUBNETN

V6_SUBNETN

Calculates a subnet address in CIDR (Classless Inter-Domain Routing) format from a varbinary or alphanumeric IPv6 address.

Behavior Type

Immutable

Syntax

V6_SUBNETN ( expression1, expression2 )

Parameters

expression1 (VARBINARY or VARCHAR) is the string to calculate.
Notes:
l V6_SUBNETN(VARBINARY, INTEGER) returns VARBINARY.
OR
l V6_SUBNETN(VARCHAR, INTEGER) returns VARBINARY, after using V6_ATON to convert the VARCHAR string to VARBINARY.

expression2 (INTEGER) is the size of the subnet.

Notes

The following syntax masks a BINARY IPv6 address B so that the N left-most bits of S form a subnet address, while the remaining right-most bits are cleared. V6_SUBNETN right-pads B to 16 bytes with zeros, if necessary, and masks B, preserving its N-bit subnet prefix.

V6_SUBNETN(VARBINARY B, INT8 N) -> VARBINARY(16) S

If B is NULL or longer than 16 bytes, or if N is not between 0 and 128 inclusive, the result is NULL. S = [B]/N in Classless Inter-Domain Routing (CIDR) notation.

The following syntax masks an alphanumeric IPv6 address A so that the N leftmost bits form a subnet address, while the remaining rightmost bits are cleared.

V6_SUBNETN(VARCHAR A, INT8 N) -> V6_SUBNETN(V6_ATON(A), N) -> VARBINARY(16) S

Example

This example returns VARBINARY, after using V6_ATON to convert the VARCHAR string to VARBINARY:

> SELECT V6_SUBNETN(V6_ATON('2001:db8::8:800:200c:417a'), 28);
                          v6_subnetn
---------------------------------------------------------------
 \001\015\260\000\000\000\000\000\000\000\000\000\000\000\000
(1 row)

See Also

l V6_ATON
l V6_SUBNETA

V6_TYPE

Characterizes a binary or alphanumeric IPv6 address B as an integer type.

Behavior Type

Immutable

Syntax

V6_TYPE ( expression )

Parameters

expression (VARBINARY or VARCHAR) is the address to characterize.

Notes

V6_TYPE(VARBINARY B) returns INT8 T.
V6_TYPE(VARCHAR A) -> V6_TYPE(V6_ATON(A)) -> INT8 T

The IPv6 types are defined in the Network Working Group's IP Version 6 Addressing Architecture memo.

GLOBAL = 0       Global unicast addresses
LINKLOCAL = 1    Link-Local unicast (and Private-Use) addresses
LOOPBACK = 2     Loopback
UNSPECIFIED = 3  Unspecified
MULTICAST = 4    Multicast

IPv4-mapped and IPv4-compatible IPv6 addresses are also interpreted, as specified in IPv4 Global Unicast Address Assignments.

l For IPv4, Private-Use is grouped with Link-Local.
l If B is VARBINARY, it is right-padded to 16 bytes with zeros, if necessary.
l If B is NULL or longer than 16 bytes, the result is NULL.
Details IPv4 (either kind): 0.0.0.0/8 127.0.0.0/8 169.254.0.0/16 172.16.0.0/12 192.168.0.0/16 224.0.0.0/4 others UNSPECIFIED LOOPBACK LINKLOCAL LINKLOCAL LINKLOCAL MULTICAST GLOBAL 10.0.0.0/8 ::0/128 fe80::/10 ff00::/8 others UNSPECIFIED LINKLOCAL MULTICAST GLOBAL ::1/128 LINKLOCAL IPv6: LOOPBACK Examples > SELECT V6_TYPE(V6_ATON('192.168.2.10')); v6_type HP Vertica Analytic Database (7.0.x) Page 525 of 1539 SQL Reference Manual SQL Functions --------1 (1 row) > SELECT V6_TYPE(V6_ATON('2001:db8::8:800:200c:417a')); v6_type --------0 (1 row) See Also l INET_ATON l IP Version 6 Addressing Architecture l IPv4 Global Unicast Address Assignments HP Vertica Analytic Database (7.0.x) Page 526 of 1539 SQL Reference Manual SQL Functions System Information Functions These functions provide system information regarding user sessions. A superuser has unrestricted access to all system information, but users can view only information about their own, current sessions. CURRENT_DATABASE Returns a VARCHAR value containing the name of the database to which you are connected. Behavior Type Immutable Syntax CURRENT_DATABASE() Notes l The parentheses following the CURRENT_DATABASE function are optional. l This function is equivalent to DBNAME. Examples SELECT CURRENT_DATABASE(); CURRENT_DATABASE -----------------VMart (1 row) The following command returns the same results without the parentheses: SELECT CURRENT_DATABASE; CURRENT_DATABASE -----------------VMart (1 row) CURRENT_SCHEMA Returns the name of the current schema. HP Vertica Analytic Database (7.0.x) Page 527 of 1539 SQL Reference Manual SQL Functions Behavior Type Stable Syntax CURRENT_SCHEMA() Privileges None Notes The CURRENT_SCHEMA function does not require parentheses. Examples The following command returns the name of the current schema: => SELECT CURRENT_SCHEMA(); current_schema ---------------public (1 row) The following command returns the same results without the parentheses: => SELECT CURRENT_SCHEMA; current_schema ---------------public (1 row) The following command shows the current schema, listed after the current user, in the search path: => SHOW SEARCH_PATH; name | setting -------------+--------------------------------------------------search_path | "$user", public, v_catalog, v_monitor, v_internal (1 row) See Also l SET SEARCH_PATH HP Vertica Analytic Database (7.0.x) Page 528 of 1539 SQL Reference Manual SQL Functions CURRENT_USER Returns a VARCHAR containing the name of the user who initiated the current database connection. Behavior Type Stable Syntax CURRENT_USER() Notes l The CURRENT_USER function does not require parentheses. l This function is useful for permission checking. l CURRENT_USER is equivalent to SESSION_USER, USER, and USERNAME. Examples SELECT CURRENT_USER(); CURRENT_USER -------------dbadmin (1 row) The following command returns the same results without the parentheses: SELECT CURRENT_USER; CURRENT_USER -------------dbadmin (1 row) DBNAME (function) Returns a VARCHAR value containing the name of the database to which you are connected. DBNAME is equivalent to CURRENT_DATABASE. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 529 of 1539 SQL Reference Manual SQL Functions Syntax DBNAME() Examples SELECT DBNAME(); dbname -----------------VMart (1 row) HAS_TABLE_PRIVILEGE Indicates whether a user can access a table in a particular way. The function returns a true (t) or false (f) value. A superuser can check all other user's table privileges. 
Users without superuser privileges can use HAS_TABLE_PRIVILEGE to check: l Any tables they own. l Tables in a schema to which they have been granted USAGE privileges, and at least one other table privilege, as described in GRANT (Table). Behavior Type Stable Syntax HAS_TABLE_PRIVILEGE ( [ user, ] [[db-name.]schema-name.]table , privilege ) Parameters user Specifies the name or OID of a database user. The default is the CURRENT_USER. HP Vertica Analytic Database (7.0.x) Page 530 of 1539 SQL Reference Manual SQL Functions [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table privilege Specifies the name or OID of a table in the logical schema. If necessary, specify the database and schema, as noted above. l SELECT Allows the user to SELECT from any column of the specified table. l INSERT Allows the user to INSERT records into the specified table and to use the COPY command to load the table. l UPDATE Allows the user to UPDATE records in the specified table. l DELETE Allows the user to delete a row from the specified table. l REFERENCES Allows the user to create a foreign key constraint (privileges required on both the referencing and referenced tables). Examples SELECT HAS_TABLE_PRIVILEGE('store.store_dimension', 'SELECT'); HAS_TABLE_PRIVILEGE --------------------t (1 row) SELECT HAS_TABLE_PRIVILEGE('release', 'store.store_dimension', 'INSERT'); HAS_TABLE_PRIVILEGE --------------------t (1 row) SELECT HAS_TABLE_PRIVILEGE('store.store_dimension', 'UPDATE'); HAS_TABLE_PRIVILEGE --------------------t (1 row) SELECT HAS_TABLE_PRIVILEGE('store.store_dimension', 'REFERENCES'); HAS_TABLE_PRIVILEGE HP Vertica Analytic Database (7.0.x) Page 531 of 1539 SQL Reference Manual SQL Functions --------------------t (1 row) SELECT HAS_TABLE_PRIVILEGE(45035996273711159, 45035996273711160, 'select'); HAS_TABLE_PRIVILEGE --------------------t (1 row) SESSION_USER Returns a VARCHAR containing the name of the user who initiated the current database session. Behavior Type Stable Syntax SESSION_USER() Notes l The SESSION_USER function does not require parentheses. l SESSION_USER is equivalent to CURRENT_USER, USER, and USERNAME. Examples SELECT SESSION_USER(); session_user -------------dbadmin (1 row) The following command returns the same results without the parentheses: SELECT SESSION_USER; session_user -------------dbadmin (1 row) HP Vertica Analytic Database (7.0.x) Page 532 of 1539 SQL Reference Manual SQL Functions USER Returns a VARCHAR containing the name of the user who initiated the current database connection. Behavior Type Stable Syntax USER() Notes l The USER function does not require parentheses. l USER is equivalent to CURRENT_USER, SESSION_USER, and USERNAME. 
Examples SELECT USER(); current_user -------------dbadmin (1 row) The following command returns the same results without the parentheses: SELECT USER; current_user -------------dbadmin (1 row) USERNAME Returns a VARCHAR containing the name of the user who initiated the current database connection. Behavior Type Stable HP Vertica Analytic Database (7.0.x) Page 533 of 1539 SQL Reference Manual SQL Functions Syntax USERNAME() Notes l This function is useful for permission checking. l USERNAME is equivalent to CURRENT_USER, SESSION_USER and USER. Examples SELECT USERNAME(); username -------------dbadmin (1 row) VERSION Returns a VARCHAR containing an HP Vertica node's version information. Behavior Type Stable Syntax VERSION() Examples SELECT VERSION(); VERSION -------------------------------------------------Vertica Analytic Database v4.0.12-20100513010203 (1 row) The parentheses are required. If you omit them, the system returns an error: SELECT VERSION; ERROR: column "version" does not exist HP Vertica Analytic Database (7.0.x) Page 534 of 1539 SQL Reference Manual SQL Functions Timeseries Functions Timeseries aggregate functions evaluate the values of a given set of variables over time and group those values into a window for analysis and aggregation. One output row is produced per time slice—or per partition per time slice—if partition expressions are present. See Also l TIMESERIES Clause l CONDITIONAL_CHANGE_EVENT [Analytic] CONDITIONAL_TRUE_EVENT [Analytic] l l TS_FIRST_VALUE Processes the data that belongs to each time slice. A time series aggregate (TSA) function, TS_ FIRST_VALUE returns the value at the start of the time slice, where an interpolation scheme is applied if the timeslice is missing, in which case the value is determined by the values corresponding to the previous (and next) timeslices based on the interpolation scheme of const (linear). There is one value per time slice per partition. Behavior Type Immutable Syntax TS_FIRST_VALUE ( expression [ IGNORE NULLS ] ... [, { 'CONST' | 'LINEAR' } ] ) Parameters expression Argument expression on which to aggregate and interpolate. expression is data type INTEGER or FLOAT. IGNORE NULLS The IGNORE NULLS behavior changes depending on a CONST or LINEAR interpolation scheme. See When Time Series Data Contains Nulls in the Programmer's Guide for details. HP Vertica Analytic Database (7.0.x) Page 535 of 1539 SQL Reference Manual SQL Functions 'CONST' | 'LINEAR' (Default CONST) Optionally specifies the interpolation value as either constant or linear. l CONST—New value are interpolated based on previous input records. l LINEAR—Values are interpolated in a linear slope based on the specified time slice. Notes l The function returns one output row per time slice or one output row per partition per time slice if partition expressions are specified. l Multiple time series aggregate functions can exists in the same query. They share the same gap-filling policy as defined by the TIMESERIES Clause; however, each time series aggregate function can specify its own interpolation policy. For example: SELECT slice_time, symbol, TS_FIRST_VALUE(bid, 'const') fv_c, TS_FIRST_VALUE(bid, 'linear') fv_l, TS_LAST_VALUE(bid, 'const') lv_c FROM TickStore TIMESERIES slice_time AS '3 seconds' OVER(PARTITION BY symbol ORDER BY ts); You must use an ORDER BY clause with a TIMESTAMP column. l Example For detailed examples, see Gap Filling and Interpolation in the Programmer's Guide. 
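A minimal, self-contained sketch is shown below; the TickStore table, its columns, and the sample rows are hypothetical and match the query used in the Notes above (output omitted):

-- Hypothetical tick data: ts is the TIMESTAMP used for ordering, bid is the value to interpolate.
CREATE TABLE TickStore (ts TIMESTAMP, symbol VARCHAR(8), bid FLOAT);
INSERT INTO TickStore VALUES ('2014-01-01 03:00:00', 'XYZ', 10.0);
INSERT INTO TickStore VALUES ('2014-01-01 03:00:05', 'XYZ', 10.5);
COMMIT;

-- One output row per 3-second slice per symbol. A slice with no input row is
-- gap-filled: 'const' repeats the previous bid, 'linear' interpolates between
-- the surrounding bids.
SELECT slice_time, symbol,
       TS_FIRST_VALUE(bid, 'const')  AS fv_c,
       TS_FIRST_VALUE(bid, 'linear') AS fv_l,
       TS_LAST_VALUE(bid, 'const')   AS lv_c
FROM TickStore
TIMESERIES slice_time AS '3 seconds' OVER (PARTITION BY symbol ORDER BY ts);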
See Also TIMESERIES Clause l TS_LAST_VALUE l l TS_LAST_VALUE Processes the data that belongs to each time slice. A time series aggregate (TSA) function, TS_ LAST_VALUE returns the value at the end of the time slice, where an interpolation scheme is applied if the timeslice is missing, in which case the value is determined by the values corresponding to the previous (and next) timeslices based on the interpolation scheme of const (linear). There is one value per time slice per partition. HP Vertica Analytic Database (7.0.x) Page 536 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax TS_LAST_VALUE ( expression [ IGNORE NULLS ] ... [, { 'CONST' | 'LINEAR' } ] ) Parameters expression Argument expression on which to aggregate and interpolate. expression is data type INTEGER or FLOAT. IGNORE NULLS The IGNORE NULLS behavior changes depending on a CONST or LINEAR interpolation scheme. See When Time Series Data Contains Nulls in the Programmer's Guide for details. 'CONST' | 'LINEAR' (Default CONST) Optionally specifies the interpolation value as either constant or linear. l CONST—New value are interpolated based on previous input records. l LINEAR—Values are interpolated in a linear slope based on the specified time slice. Notes l The function returns one output row per time slice or one output row per partition per time slice if partition expressions are specified. l Multiple time series aggregate functions can exists in the same query. They share the same gap-filling policy as defined by the TIMESERIES Clause; however, each time series aggregate function can specify its own interpolation policy. For example: SELECT slice_time, symbol, TS_FIRST_VALUE(bid, 'const') fv_c, TS_FIRST_VALUE(bid, 'linear') fv_l, TS_LAST_VALUE(bid, 'const') lv_c FROM TickStore TIMESERIES slice_time AS 3 seconds OVER(PARTITION BY symbol ORDER BY ts); l You must use the ORDER BY clause with a TIMESTAMP column. HP Vertica Analytic Database (7.0.x) Page 537 of 1539 SQL Reference Manual SQL Functions Example For detailed examples, see Gap Filling and Interpolation in the Programmer's Guide. See Also TIMESERIES Clause l TS_FIRST_VALUE l l HP Vertica Analytic Database (7.0.x) Page 538 of 1539 SQL Reference Manual SQL Functions URI Encode/Decode Functions The functions in this section follow the RFC 3986 standard for percent-encoding a Universal Resource Identifier (URI). URI_PERCENT_DECODE Decodes a percent-encoded Universal Resource Identifier (URI) according to the RFC 3986 standard. Syntax URI_PERCENT_DECODE (expression) Behavior Type Immutable Parameters expression (VARCHAR) is the string to convert. Examples The following example invokes uri_percent_decode on the Websites column of the URI table and returns a decoded URI: => SELECT URI_PERCENT_DECODE(Websites) from URI; URI_PERCENT_DECODE ----------------------------------------------http://www.faqs.org/rfcs/rfc3986.html x xj%a% (1 row) The following example returns the original URI in the Websites column and its decoded version: => SELECT Websites, URI_PERCENT_DECODE (Websites) from URI; Websites | URI_PERCENT_DECODE ---------------------------------------------------+-------------------------------------------http://www.faqs.org/rfcs/rfc3986.html+x%20x%6a%a% | http://www.faqs.org/rfcs/rfc3986.htm l x xj%a% (1 row) HP Vertica Analytic Database (7.0.x) Page 539 of 1539 SQL Reference Manual SQL Functions URI_PERCENT_ENCODE Encodes a Universal Resource Identifier (URI) according to the RFC 3986 standard for percent encoding. 
In addition, for compatibility with older encoders, this function converts '+' to space; space is converted to %20 by uri_percent_encode. Syntax URI_PERCENT_ENCODE (expression) Behavior Type Immutable Parameters expression (VARCHAR) is the string to convert. Examples The following example shows how the uri_percent_encode function is invoked on the Websites column of the URI table and returns an encoded URI: => SELECT URI_PERCENT_ENCODE(Websites) from URI; URI_PERCENT_ENCODE -----------------------------------------http%3A%2F%2Fexample.com%2F%3F%3D11%2F15 (1 row) The following example returns the original URI in the Websites column and its encoded form: => SELECT Websites, URI_PERCENT_ENCODE(Websites) from URI; Websites | URI_PERCENT_ENCODE ----------------------------+-----------------------------------------http://example.com/?=11/15 | http%3A%2F%2Fexample.com%2F%3F%3D11%2F15 (1 row) HP Vertica Meta-Functions HP Vertica built-in (meta) functions access the internal state of HP Vertica and are used in SELECT queries with the function name and an argument (where required). These functions are not part of the SQL standard and take the following form: SELECT function_name( [ arguments ] ); Note: The query cannot contain other clauses, such as FROM or WHERE. The behavior type of HP Vertica meta-functions is immutable. Alphabetical List of HP Vertica Meta-Functions This section contains the HP Vertica meta-functions, listed alphabetically. These functions are also grouped into their appropriate category. ADD_LOCATION Adds a storage location to the cluster. Use this function to add a new location, optionally with a location label. You can also add a location specifically for user access, and then grant one or more users access to the location. Syntax ADD_LOCATION ( 'path' [, 'node' , 'usage', 'location_label' ] ) Parameters path [Required] Specifies where the storage location is mounted. Path must be an empty directory with write permissions for user, group, or all. node [Optional] Indicates the cluster node on which a storage location resides. If you omit this parameter, the function adds the location to only the initiator node. Specifying the node parameter as an empty string ('') adds a storage location to all cluster nodes in a single transaction. Note: If you specify a node, you must also add a usage parameter. usage [Optional] Specifies what the storage location will be used for: l DATA: Stores only data files. Use this option for labeled storage locations. l TEMP: Stores only temporary files, created during loads or queries. l DATA,TEMP: Stores both types of files in the location. l USER: Allows non-dbadmin users access to the storage location for data files (not temp files), once they are granted privileges. Do not create a USER storage location for later use in a storage policy; storage locations with policies must be for DATA usage. Also, note that this keyword is orthogonal to DATA and TEMP, and does not specify a particular usage, other than being accessible to non-dbadmin users with assigned privileges. You cannot alter a storage location to or from USER usage. Note: You can use this parameter only in conjunction with the node option. If you omit the usage parameter, the default is DATA,TEMP.
location_label [Optional] Specifies a location label as a string, for example, SSD. Labeling a storage location lets you use the location label to create storage policies and as part of a multi-tenanted storage scheme. Privileges Must be a superuser. Storage Location Subdirectories You cannot create a storage location in a subdirectory of an existing location. For example, if you create a storage location at one location, you cannot add a second storage location in a subdirectory of the first: dbt=> select add_location ('/myvertica/Test/KMM','','DATA','SSD'); add_location -----------------------------------------/myvertica/Test/KMM added. (1 row) dbt=> select add_location ('/myvertica/Test/KMM/SSD','','DATA','SSD'); ERROR 5615: Location [/myvertica/Test/KMM/SSD] conflicts with existing location [/myvert ica/Test/KMM] on node v_node0001 ERROR 5615: Location [/myvertica/Test/KMM/SSD] conflicts with existing location [/myvert ica/Test/KMM] on node v_node0002 ERROR 5615: Location [/myvertica/Test/KMM/SSD] conflicts with existing location [/myvert ica/Test/KMM] on node v_node0003 Example This example adds a location that stores data and temporary files on the initiator node: HP Vertica Analytic Database (7.0.x) Page 543 of 1539 SQL Reference Manual SQL Functions => SELECT ADD_LOCATION('/secondverticaStorageLocation/'); This example adds a location to store data on v_vmartdb_node0004: => SELECT ADD_LOCATION('/secondverticaStorageLocation/' , 'v_vmartdb_node0004' , 'DATA'); This example adds a new DATA storage location with a label, SSD. The label identifies the location when you create storage policies. Specifying the node parameter as an empty string adds the storage location to all cluster nodes in a single transaction: VMART=> select add_location ('home/dbadmin/SSD/schemas', '', 'DATA', 'SSD'); add_location --------------------------------home/dbadmin/SSD/schemas added. (1 row) See Also l l ALTER_LOCATION_USE l DROP_LOCATION l RESTORE_LOCATION l RETIRE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) ADVANCE_EPOCH Manually closes the current epoch and begins a new epoch. Syntax ADVANCE_EPOCH ( [ integer ] ) Parameters integer Specifies the number of epochs to advance. HP Vertica Analytic Database (7.0.x) Page 544 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Notes This function is primarily maintained for backward compatibility with earlier versions of HP Vertica. Example The following command increments the epoch number by 1: => SELECT ADVANCE_EPOCH(1); See Also l ALTER PROJECTION RENAME ALTER_LOCATION_USE Alters the type of files that can be stored at the specified storage location. Syntax ALTER_LOCATION_USE ( 'path' , [ 'node' ] , 'usage' ) Parameters path Specifies where the storage location is mounted. node [Optional] The HP Vertica node with the storage location. Specifying the node parameter as an empty string ('') alters the location across all cluster nodes in a single transaction. If you omit this parameter, node defaults to the initiator. usage Is one of the following: l DATA: The storage location stores only data files. This is the supported use for both a USER storage location, and a labeled storage location. l TEMP: The location stores only temporary files that are created during loads or queries. l DATA,TEMP: The location can store both types of files. HP Vertica Analytic Database (7.0.x) Page 545 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. 
USER Storage Location Restrictions You cannot change a storage location from a USER usage type if you created the location that way, or to a USER type if you did not. You can change a USER storage location to specify DATA (storing TEMP files is not supported). However, doing so does not affect the primary objective of a USER storage location, to be accessible by non-dbadmin users with assigned privileges. Monitoring Storage Locations Disk storage information that the database uses on each node is available in the V_ MONITOR.DISK_STORAGE system table. Example The following example alters the storage location across all cluster nodes to store only data: => SELECT ALTER_LOCATION_USE ('/thirdVerticaStorageLocation/' , '' , 'DATA'); See Also l l ADD_LOCATION l DROP_LOCATION l RESTORE_LOCATION l RETIRE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) ALTER_LOCATION_LABEL Alters the location label. Use this function to add, change, or remove a location label. You change a location label only if it is not currently in use as part of a storage policy. HP Vertica Analytic Database (7.0.x) Page 546 of 1539 SQL Reference Manual SQL Functions You can use this function to remove a location label. However, you cannot remove a location label if the name being removed is used in a storage policy, and the location from which you are removing the label is the last available storage for its associated objects. Note: If you label an existing storage location that already contains data, and then include the labeled location in one or more storage policies, existing data could be moved. If the ATM determines data stored on a labeled location does not comply with a storage policy, the ATM moves the data elsewhere. Syntax ALTER_LOCATION_LABEL ( 'path' , 'node' , 'location_label' ) Parameters path Specifies the path of the storage location. node The HP Vertica node for the storage location. If you enter node as an empty string (''), the function performs a cluster-wide label change to all nodes. Any node that is unavailable generates an error. location_label Specifies a storage label as a string, for instance SSD. You can change an existing label assigned to a storage location, or add a new label. Specifying an empty string ('') removes an existing label. Privileges Must be a superuser. Example The following example alters (or adds) the label SSD to the storage location at the given path on all cluster nodes: VMART=> select alter_location_label('/home/dbadmin/SSD/tables','', 'SSD'); alter_location_label --------------------------------------/home/dbadmin/SSD/tables label changed. (1 row) See Also l HP Vertica Analytic Database (7.0.x) Page 547 of 1539 SQL Reference Manual SQL Functions l CLEAR_OBJECT_STORAGE_POLICY l SET_OBJECT_STORAGE_POLICY ANALYZE_CONSTRAINTS Analyzes and reports on constraint violations within the current schema search path, or external to that path if you specify a database name (noted in the syntax statement and parameter table). You can check for constraint violations by passing arguments to the function as follows: l An empty argument (' '), which returns violations on all tables within the current schema l One argument, referencing a table l Two arguments, referencing a table name and a column or list of columns Syntax ANALYZE_CONSTRAINTS [ ( '' ) ... | ( '[[db-name.]schema.]table [.column_name]' ) ... | ( '[[db-name.]schema.]table' , 'column' ) ] Parameters ('') Analyzes and reports on all tables within the current schema search path. [[db-name.]schema.] 
[Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table Analyzes and reports on all constraints referring to the specified table. column Analyzes and reports on all constraints referring to the specified table that contains the column. Privileges l SELECT privilege on table l USAGE privilege on schema HP Vertica Analytic Database (7.0.x) Page 548 of 1539 SQL Reference Manual SQL Functions Notes ANALYZE_CONSTRAINTS() performs a lock in the same way that SELECT * FROM t1 holds a lock on table t1. See LOCKS for additional information. Detecting Constraint Violations During a Load Process HP Vertica checks for constraint violations when queries are run, not when data is loaded. To detect constraint violations as part of the load process, use a COPY statement with the NO COMMIT option. By loading data without committing it, you can run a post-load check of your data using the ANALYZE_CONSTRAINTS function. If the function finds constraint violations, you can roll back the load because you have not committed it. If ANALYZE_CONSTRAINTS finds violations, such as when you insert a duplicate value into a primary key, you can correct errors using the following functions. Effects last until the end of the session only: l SELECT DISABLE_DUPLICATE_KEY_ERROR l SELECT REENABLE_DUPLICATE_KEY_ERROR Return Values ANALYZE_CONSTRAINTS returns results in a structured set (see table below) that lists the schema name, table name, column name, constraint name, constraint type, and the column values that caused the violation. If the result set is empty, then no constraint violations exist; for example: > SELECT ANALYZE_CONSTRAINTS ('public.product_dimension', 'product_key'); Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Valu es -------------+------------+--------------+-----------------+-----------------+-------------(0 rows) The following result set shows a primary key violation, along with the value that caused the violation ('10'): => SELECT ANALYZE_CONSTRAINTS (''); Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Valu es -------------+------------+--------------+-----------------+-----------------+-------------store t1 c1 pk_t1 PRIMARY ('10') (1 row) The result set columns are described in further detail in the following table: HP Vertica Analytic Database (7.0.x) Page 549 of 1539 SQL Reference Manual SQL Functions Column Name Data Type Description Schema Name VARCHAR The name of the schema. Table Name VARCHAR The name of the table, if specified. Column Names VARCHAR Names of columns containing constraints. Multiple columns are in a comma-separated list: store_key, store_key, date_key, Constraint Name VARCHAR The given name of the primary key, foreign key, unique, or not null constraint, if specified. Constraint Type VARCHAR Identified by one of the following strings: 'PRIMARY KEY', 'FOREIGN KEY', 'UNIQUE', or 'NOT NULL'. Column Values VARCHAR Value of the constraint column, in the same order in which Column Names contains the value of that column in the violating row. 
When interpreted as SQL, the value of this column forms a list of values of the same type as the columns in Column Names; for example: ('1'), ('1', 'z') Understanding Function Failures If ANALYZE_CONSTRAINTS() fails, HP Vertica returns an error identifying the failure condition, such as if there are insufficient resources for the database to perform constraint checks. If you specify the wrong table, the system returns an error message: > SELECT ANALYZE_CONSTRAINTS('abc'); ERROR 2069: 'abc' is not a table in the current search_path If you run the function on a table that has no constraints declared (even if duplicates are present), the system returns an error message: > SELECT ANALYZE_CONSTRAINTS('source'); ERROR 4072: No constraints defined If you run the function with incorrect syntax, the system returns an error message with a hint; for example, if you run one of the following: l ANALYZE ALL CONSTRAINT; l ANALYZE CONSTRAINT abc; The system returns an informative error with hint: HP Vertica Analytic Database (7.0.x) Page 550 of 1539 SQL Reference Manual SQL Functions ERROR: ANALYZE CONSTRAINT is not supported. HINT: You may consider using analyze_constraints(). If you run ANALYZE_CONSTRAINTS from a non-default locale, the function returns an error with a hint: > \locale LENINFO 2567: Canonical locale: 'en' Standard collation: 'LEN' English > SELECT ANALYZE_CONSTRAINTS('t1'); ERROR: ANALYZE_CONSTRAINTS is currently not supported in non-default locales HINT: Set the locale in this session to en_US@collation=binary using the command "\locale en_US@collation=binary" Examples Given the following inputs, HP Vertica returns one row, indicating one violation, because the same primary key value (10) was inserted into table t1 twice: CREATE TABLE t1(c1 INT); ALTER TABLE t1 ADD CONSTRAINT pk_t1 PRIMARY KEY (c1); CREATE PROJECTION t1_p (c1) AS SELECT * FROM t1 UNSEGMENTED ALL NODES; INSERT INTO t1 values (10); INSERT INTO t1 values (10); --Duplicate primary key value \x Expanded display is on. SELECT ANALYZE_CONSTRAINTS('t1'); -[ RECORD 1 ]---+-------Schema Name | public Table Name | t1 Column Names | c1 Constraint Name | pk_t1 Constraint Type | PRIMARY Column Values | ('10') If the second INSERT statement above had contained any different value, the result would have been 0 rows (no violations). In the following example, create a table that contains three integer columns, one a unique key and one a primary key: CREATE TABLE table_1( a INTEGER, b_UK INTEGER UNIQUE, c_PK INTEGER PRIMARY KEY ); HP Vertica Analytic Database (7.0.x) Page 551 of 1539 SQL Reference Manual SQL Functions Issue a command that refers to a nonexistent table and column: SELECT ANALYZE_CONSTRAINTS('a_BB'); ERROR: 'a_BB' is not a table name in the current search path Issue a command that refers to a nonexistent column: SELECT ANALYZE_CONSTRAINTS('table_1','x'); ERROR 41614: Nonexistent columns: 'x ' Insert some values into table table_1 and commit the changes: INSERT INTO table_1 values (1, 1, 1); COMMIT; Run ANALYZE_CONSTRAINTS on table table_1. No constraint violations are reported: SELECT ANALYZE_CONSTRAINTS('table_1'); (No rows) Insert duplicate unique and primary key values and run ANALYZE_CONSTRAINTS on table table_1 again. 
The system shows two violations: one against the primary key and one against the unique key: INSERT INTO table_1 VALUES (1, 1, 1); COMMIT; SELECT ANALYZE_CONSTRAINTS('table_1'); -[ RECORD 1 ]---+---------Schema Name | public Table Name | table_1 Column Names | b_UK Constraint Name | C_UNIQUE Constraint Type | UNIQUE Column Values | ('1') -[ RECORD 2 ]---+---------Schema Name | public Table Name | table_1 Column Names | c_PK Constraint Name | C_PRIMARY Constraint Type | PRIMARY Column Values | ('1') The following command looks for constraint validations on only the unique key in the table table_1, qualified with its schema name: => SELECT ANALYZE_CONSTRAINTS('public.table_1', 'b_UK'); -[ RECORD 1 ]---+--------Schema Name | public HP Vertica Analytic Database (7.0.x) Page 552 of 1539 SQL Reference Manual SQL Functions Table Name Column Names Constraint Name Constraint Type Column Values | | | | | table_1 b_UK C_UNIQUE UNIQUE ('1') (1 row) The following example shows that you can specify the same column more than once; ANALYZE_ CONSTRAINTS, however, returns the violation only once: SELECT ANALYZE_CONSTRAINTS('table_1', 'c_PK, C_PK'); -[ RECORD 1 ]---+---------Schema Name | public Table Name | table_1 Column Names | c_PK Constraint Name | C_PRIMARY Constraint Type | PRIMARY Column Values | ('1') The following example creates a new table, table_2, and inserts a foreign key and different (character) data types: CREATE TABLE table_2 ( x VARCHAR(3), y_PK VARCHAR(4), z_FK INTEGER REFERENCES table_1(c_PK)); Alter the table to create a multicolumn unique key and multicolumn foreign key and create superprojections: ALTER TABLE table_2 ADD CONSTRAINT table_2_multiuk PRIMARY KEY (x, y_PK); WARNING 2623: Column "x" definition changed to NOT NULL WARNING 2623: Column "y_PK" definition changed to NOT NULL The following command inserts a missing foreign key (0) into table dim_1 and commits the changes: INSERT INTO table_2 VALUES ('r1', 'Xpk1', 0); COMMIT; Checking for constraints on the table table_2 in the public schema detects a foreign key violation: => SELECT ANALYZE_CONSTRAINTS('public.table_2'); -[ RECORD 1 ]---+---------Schema Name | public Table Name | table_2 HP Vertica Analytic Database (7.0.x) Page 553 of 1539 SQL Reference Manual SQL Functions Column Names Constraint Name Constraint Type Column Values | | | | z_FK C_FOREIGN FOREIGN ('0') Now add a duplicate value into the unique key and commit the changes: INSERT INTO table_2 VALUES ('r2', 'Xpk1', 1); INSERT INTO table_2 VALUES ('r1', 'Xpk1', 1); COMMIT; Checking for constraint violations on table table_2 detects the duplicate unique key error: SELECT ANALYZE_CONSTRAINTS('table_2'); -[ RECORD 1 ]---+---------------Schema Name | public Table Name | table_2 Column Names | z_FK Constraint Name | C_FOREIGN Constraint Type | FOREIGN Column Values | ('0') -[ RECORD 2 ]---+---------------Schema Name | public Table Name | table_2 Column Names | x, y_PK Constraint Name | table_2_multiuk Constraint Type | PRIMARY Column Values | ('r1', 'Xpk1') Create a table with multicolumn foreign key and create the superprojections: CREATE TABLE table_3( z_fk1 VARCHAR(3), z_fk2 VARCHAR(4)); ALTER TABLE table_3 ADD CONSTRAINT table_3_multifk FOREIGN KEY (z_fk1, z_fk2) REFERENCES table_2(x, y_PK); Insert a foreign key that matches a foreign key in table table_2 and commit the changes: INSERT INTO table_3 VALUES ('r1', 'Xpk1'); COMMIT; Checking for constraints on table table_3 detects no violations: SELECT ANALYZE_CONSTRAINTS('table_3'); (No rows) Add a value that does 
not match and commit the change: INSERT INTO table_3 VALUES ('r1', 'NONE'); COMMIT; Checking for constraints on table table_3 detects a foreign key violation: SELECT ANALYZE_CONSTRAINTS('table_3'); -[ RECORD 1 ]---+---------------Schema Name | public Table Name | table_3 Column Names | z_fk1, z_fk2 Constraint Name | table_3_multifk Constraint Type | FOREIGN Column Values | ('r1', 'NONE') Analyze all constraints on all tables: SELECT ANALYZE_CONSTRAINTS(''); -[ RECORD 1 ]---+---------------Schema Name | public Table Name | table_3 Column Names | z_fk1, z_fk2 Constraint Name | table_3_multifk Constraint Type | FOREIGN Column Values | ('r1', 'NONE') -[ RECORD 2 ]---+---------------Schema Name | public Table Name | table_2 Column Names | x, y_PK Constraint Name | table_2_multiuk Constraint Type | PRIMARY Column Values | ('r1', 'Xpk1') -[ RECORD 3 ]---+---------------Schema Name | public Table Name | table_2 Column Names | z_FK Constraint Name | C_FOREIGN Constraint Type | FOREIGN Column Values | ('0') -[ RECORD 4 ]---+---------------Schema Name | public Table Name | t1 Column Names | c1 Constraint Name | pk_t1 Constraint Type | PRIMARY Column Values | ('10') -[ RECORD 5 ]---+---------------Schema Name | public Table Name | table_1 Column Names | b_UK Constraint Name | C_UNIQUE Constraint Type | UNIQUE Column Values | ('1') -[ RECORD 6 ]---+---------------Schema Name | public Table Name | table_1 Column Names | c_PK Constraint Name | C_PRIMARY Constraint Type | PRIMARY Column Values | ('1') -[ RECORD 7 ]---+---------------Schema Name | public Table Name | target Column Names | a Constraint Name | C_PRIMARY Constraint Type | PRIMARY Column Values | ('1') (7 rows) To quickly clean up your database, issue the following commands: DROP TABLE table_1 CASCADE; DROP TABLE table_2 CASCADE; DROP TABLE table_3 CASCADE; To learn how to remove violating rows, see the DISABLE_DUPLICATE_KEY_ERROR function. ANALYZE_CORRELATIONS Analyzes the specified tables for columns that are strongly correlated. In addition, ANALYZE_CORRELATIONS also collects statistics. For example, city name and state name columns are strongly correlated because a city name usually, but perhaps not always, identifies the state name. The city of Conshohocken is uniquely associated with Pennsylvania, whereas the city of Boston exists in Georgia, Indiana, Kentucky, New York, Virginia, and Massachusetts. In this case, city name is strongly correlated with state name. For Database Designer to take advantage of these correlations, run Database Designer programmatically. Use DESIGNER_SET_ANALYZE_CORRELATIONS_MODE to specify that Database Designer should consider existing column correlations. Make sure to specify that Database Designer not analyze statistics, so that Database Designer does not override the existing statistics. Behavior Type Immutable Syntax ANALYZE_CORRELATIONS ( '[database_name.][schema_name.]table_name', [recalculate] ) Parameters [database_name.][schema_name.]table_name Specifies the table(s) for which to analyze correlated columns, optionally qualified by schema and database name, type VARCHAR. recalculate Specifies whether to analyze the correlated columns even if they have been analyzed before, type BOOLEAN. Default: 'false'.
Permissions l To run ANALYZE_CORRELATIONS on a table, you must be a superuser, or a user with USAGE privilege on the design schema. Notes l Column correlation analysis typically needs to be done only once. l Currently, ANALYZE_CORRELATIONS can analyze only pairwise single-column correlations. l Projections do not change based on the analysis results. To implement the results of ANALYZE_CORRELATIONS, you must run Database Designer. Example In the following example, ANALYZE_CORRELATIONS analyzes column correlations for all tables in the public schema, even if correlations have already been analyzed. The correlations that ANALYZE_CORRELATIONS finds are saved so that Database Designer can use them the next time it runs on the VMart database: => SELECT ANALYZE_CORRELATIONS ( 'public.*', 'true'); ANALYZE_CORRELATIONS ---------------------0 (1 row) See Also l DESIGNER_SET_ANALYZE_CORRELATIONS_MODE ANALYZE_HISTOGRAM Collects and aggregates data samples and storage information from all nodes that store projections associated with the specified table or column. If the function returns successfully (0), HP Vertica writes the returned statistics to the catalog. The query optimizer uses this collected data to recommend the best possible plan to execute a query. Without analyzing table statistics, the query optimizer would assume uniform distribution of data values and equal storage usage for all projections. ANALYZE_HISTOGRAM is a DDL operation that auto-commits the current transaction, if any. The ANALYZE_HISTOGRAM function reads a variable amount of disk contents to aggregate sample data for statistical analysis. Use the function's percent float parameter to specify the total disk space from which HP Vertica collects sample data. The ANALYZE_STATISTICS function returns similar data, but uses a fixed disk space amount (10 percent). Analyzing more than 10 percent disk space takes proportionally longer to process, but produces a higher level of sampling accuracy. ANALYZE_HISTOGRAM is supported on local temporary tables, but not on global temporary tables. Syntax ANALYZE_HISTOGRAM ('') ... | ( '[ [ db-name.]schema.]table [.column-name ]' [, percent ] ) Return Value 0 - For success. If an error occurs, refer to vertica.log for details. Parameters '' Empty string. Collects statistics for all tables. [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table Specifies the name of the table and collects statistics for all projections of that table. If you are using more than one schema, specify the schema that contains the projection, as noted in the [[db-name.]schema.] entry. [.column-name] [Optional] Specifies the name of a single column, typically a predicate column.
Using this option with a table specification lets you collect statistics for only that column. Note: If you alter a table to add or drop a column, or add a new column to a table and populate its contents with either default or other values, HP Vertica recommends calling this function on the new table column to get the most current statistics. percent [Optional] Specifies what percentage of data to read from disk (not the amount of data to analyze). Specify a float from 1 – 100, such as 33.3. By default, the function reads 10% of the table data from disk. For more information, see Collecting Statistics in the Administrator's Guide. Privileges l Any INSERT/UPDATE/DELETE privilege on table l USAGE privilege on schema that contains the table Use the HP Vertica statistics functions as follows: Use this function... ANALYZE_ STATISTICS To obtain... A fixed-size statistical data sampling (10 percent per disk). This function returns results quickly, but is less accurate than using ANALYZE_HISTOGRAM to get a larger sampling of disk data. ANALYZE_ A specified percentage of disk data sampling (from 1–100). If you analyze more HISTOGRAM than 10 percent data per disk, this function is more accurate than ANALYZE_ STATISTICS, but requires proportionately longer to return statistics. Analyzing Results To retrieve hints about under-performing queries and the associated root causes, use the ANALYZE_WORKLOAD function. This function runs the Workload Analyzer and returns tuning recommendations, such as "run analyze_statistics on schema.table.column". You or your database administrator should act upon the tuning recommendations. You can also find database tuning recommendations on the Management Console. HP Vertica Analytic Database (7.0.x) Page 559 of 1539 SQL Reference Manual SQL Functions Canceling ANALYZE_HISTOGRAM You can cancel this function mid-analysis by issuing CTRL-C in a vsql shell or by invoking the INTERRUPT_STATEMENT() function. Notes By default, HP Vertica analyzes more than one column (subject to resource limits) in a single-query execution plan to: l Reduce plan execution latency l Help speed up analysis of relatively small tables that have a large number of columns Examples In this example, the ANALYZE_STATISTICS() function reads 10 percent of the disk data. This is the static default value for this function. The function returns 0 for success: => SELECT ANALYZE_STATISTICS('shipping_dimension.shipping_key'); ANALYZE_STATISTICS -------------------0 (1 row) This example uses ANALYZE_HISTOGRAM () without specifying a percentage value. Since this function has a default value of 10 percent, it returns the identical data as the ANALYZE_ STATISTICS() function, and returns 0 for success: => SELECT ANALYZE_HISTOGRAM('shipping_dimension.shipping_key'); ANALYZE_HISTOGRAM ------------------0 (1 row) This example uses ANALYZE_HISTOGRAM (), specifying its percent parameter as 100, indicating it will read the entire disk to gather data. 
After the function performs a full column scan, it returns 0 for success: => SELECT ANALYZE_HISTOGRAM('shipping_dimension.shipping_key', 100); ANALYZE_HISTOGRAM ------------------0 (1 row) In this command, only 0.1% (1/1000) of the disk is read: HP Vertica Analytic Database (7.0.x) Page 560 of 1539 SQL Reference Manual SQL Functions => SELECT ANALYZE_HISTOGRAM('shipping_dimension.shipping_key', 0.1); ANALYZE_HISTOGRAM ------------------0 (1 row) See Also l ANALYZE_STATISTICS l ANALYZE_WORKLOAD l DROP_STATISTICS l EXPORT_STATISTICS l IMPORT_STATISTICS INTERRUPT_STATEMENT l l ANALYZE_STATISTICS Collects and aggregates data samples and storage information from all nodes that store projections associated with the specified table or column. If the function returns successfully (0), HP Vertica writes the returned statistics to the catalog. The query optimizer uses this collected data to recommend the best possible plan to execute a query. Without analyzing table statistics, the query optimizer would assume uniform distribution of data values and equal storage usage for all projections. ANALYZE_STATISTICS is a DDL operation that auto-commits the current transaction, if any. The ANALYZE_STATISTICS function reads a fixed, 10 percent of disk contents to aggregate sample data for statistical analysis. To obtain a larger (or smaller) data sampling, use the ANALYZE_ HISTOGRAM function, which lets you specify the percent of disk to read. Analyzing more that 10 percent disk space takes proportionally longer to process, but results in a higher level of sampling accuracy. ANALYZE_STATISTICS is supported on local temporary tables, but not on global temporary tables. Syntax ANALYZE_STATISTICS [ ('') ... | ( '[ [ db-name.]schema.]table [.column-name ]' ) ] Return value 0 - For success. If an error occurs, refer to vertica.log for details. HP Vertica Analytic Database (7.0.x) Page 561 of 1539 SQL Reference Manual SQL Functions Parameters '' Empty string. Collects statistics for all tables. [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table Specifies the name of the table and collects statistics for all projections of that table. Note: If you are using more than one schema, specify the schema that contains the projection, as noted as noted in the [[db-name.] schema.] entry. [.column-name] [Optional] Specifies the name of a single column, typically a predicate column. Using this option with a table specification lets you collect statistics for only that column. Note: If you alter a table to add or drop a column, or add a new column to a table and populate its contents with either default or other values, HP Vertica recommends calling this function on the new table column to get the most current statistics. 
Privileges l Any INSERT/UPDATE/DELETE privilege on table l USAGE privilege on schema that contains the table Use the HP Vertica statistics functions as follows: HP Vertica Analytic Database (7.0.x) Page 562 of 1539 SQL Reference Manual SQL Functions Use this function... ANALYZE_ STATISTICS To obtain... A fixed-size statistical data sampling (10 percent per disk). This function returns results quickly, but is less accurate than using ANALYZE_HISTOGRAM to get a larger sampling of disk data. ANALYZE_ A specified percentage of disk data sampling (from 1–100). If you analyze more HISTOGRAM than 10 percent data per disk, this function is more accurate than ANALYZE_ STATISTICS, but requires proportionately longer to return statistics. Analyzing results To retrieve hints about under-performing queries and the associated root causes, use the ANALYZE_WORKLOAD function. This function runs the Workload Analyzer and returns tuning recommendations, such as "run analyze_statistics on schema.table.column". You or your database administrator should act upon the tuning recommendations. You can also find database tuning recommendations on the Management Console. Canceling this function You can cancel statistics analysis by issuing CTRL+C in a vsql shell or by invoking the INTERRUPT_STATEMENT() function. Notes l Always run ANALYZE_STATISTICS on a table or column rather than a projection. l By default, HP Vertica analyzes more than one column (subject to resource limits) in a singlequery execution plan to: l n Reduce plan execution latency n Help speed up analysis of relatively small tables that have a large number of columns Pre-join projection statistics are updated on any pre-joined tables. Examples Computes statistics on all projections in the VMart database and returns 0 (success): => SELECT ANALYZE_STATISTICS (''); analyze_statistics -------------------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 563 of 1539 SQL Reference Manual SQL Functions Computes statistics on a single table (shipping_dimension) and returns 0 (success): => SELECT ANALYZE_STATISTICS ('shipping_dimension'); analyze_statistics -------------------0 (1 row) Computes statistics on a single column (shipping_key) across all projections for the shipping_ dimension table and returns 0 (success): => SELECT ANALYZE_STATISTICS('shipping_dimension.shipping_key'); analyze_statistics -------------------0 (1 row) For use cases, see Collecting Statistics in the Administrator's Guide See Also l ANALYZE_HISTOGRAM l ANALYZE_WORKLOAD l DROP_STATISTICS l EXPORT_STATISTICS l IMPORT_STATISTICS l INTERRUPT_STATEMENT ANALYZE_WORKLOAD Runs the Workload Analyzer (WLA), a utility that analyzes system information held in system tables. The Workload Analyzer intelligently monitors the performance of SQL queries and workload history, resources, and configurations to identify the root causes for poor query performance. Calling the ANALYZE_WORKLOAD function returns tuning recommendations for all events within the scope and time that you specify. Tuning recommendations are based on a combination of statistics, system and data collector events, and database-table-projection design. WLA's recommendations let database administrators quickly and easily tune query performance without needing sophisticated skills. See Understanding WLA Triggering Conditions in the Administrator's Guide for the most common triggering conditions and recommendations. 
HP Vertica Analytic Database (7.0.x) Page 564 of 1539 SQL Reference Manual SQL Functions Syntax 1 ANALYZE_WORKLOAD ( 'scope' , 'since_time' ); Syntax 2 ANALYZE_WORKLOAD ( 'scope' , [ true ] ); Parameters scope Specifies which HP Vertica catalog objects to analyze. Can be one of: sinc e_tim e l An empty string ('') returns recommendations for all database objects l 'table_name' returns all recommendations related to the specified table l 'schema_name' returns recommendations on all database objects in the specified schema Limits the recommendations from all events that you specified in 'scope' since the specified time in this argument, up to the current system status. If you omit the since_ time parameter, ANALYZE_WORKLOAD returns recommendations on events since the last recorded time that you called this function. Note: You must explicitly cast strings that you use for the since_time parameter to TIMESTAMP or TIMESTAMPTZ. For example: SELECT ANALYZE_WORKLOAD('T1', '2010-10-04 11:18:15'::TIMESTAMPTZ);SELECT ANALYZE_WO RKLOAD('T1', TIMESTAMPTZ '2010-10-04 11:18:15'); true [Optional] Tells HP Vertica to record this particular call of WORKLOAD_ANALYZER() in the system. The default value is false (do not record). If recorded, subsequent calls to ANALYZE_WORKLOAD analyze only the events that have occurred since this recorded time, ignoring all prior events. Return Value Column Data type Description observation_coun t INTEGER Integer for the total number of events observed for this tuning recommendation. For example, if you see a return value of 1, WLA is making its first tuning recommendation for the event in 'scope'. HP Vertica Analytic Database (7.0.x) Page 565 of 1539 SQL Reference Manual SQL Functions Column Data type Description first_observatio n_time TIMESTAM PTZ Timestamp when the event first occurred. If this column returns a null value, the tuning recommendation is from the current status of the system instead of from any prior event. last_observatio n_time TIMESTAM PTZ Timestamp when the event last occurred. If this column returns a null value, the tuning recommendation is from the current status of the system instead of from any prior event. tuning_parameter VARCHAR Objects on which you should perform a tuning action. For example, a return value of: tuning_descripti on VARCHAR HP Vertica Analytic Database (7.0.x) l public.t informs the DBA to run Database Designer on table t in the public schema l bsmith notifies a DBA to set a password for user bsmith Textual description of the tuning recommendation from the Workload Analyzer to perform on the tuning_parameter object. Examples of some of the returned values include, but are not limited to: l Run database designer on table schema.table l Create replicated projection for table schema.table l Consider incremental design on query l Reset configuration parameter with SELECT set_config_ parameter('parameter', 'new_value') l Re-segment projection projection-name on highcardinality column(s) l Drop the projection projection-name l Alter a table's partition expression l Reorganize data in partitioned table l Decrease the MoveOutInterval configuration parameter setting Page 566 of 1539 SQL Reference Manual SQL Functions Column Data type Description tuning_command VARCHAR Command string if tuning action is a SQL command. 
For example, the following example statements recommend that the DBA: Update statistics on a particular schema's table.column: SELECT ANALYZE_STATISTICS('public.table.column'); Resolve mismatched configuration parameter 'LockTimeout': SELECT * FROM CONFIGURATION_PARAMETERSWHERE parameter_name = 'LockTimeout'; Set the password for user bsmith: ALTER USER (user) IDENTIFIED BY ('new_password'); tuning_cost VARCHAR Cost is based on the type of tuning recommendation and is one of: l LOW—minimal impact on resources from running the tuning command l MEDIUM—moderate impact on resources from running the tuning command l HIGH—maximum impact on resources from running the tuning command Depending on the size of your database or table, consider running high-cost operations after hours instead of during peak load times. ANALYZE_WORKLOAD() returns aggregated tuning recommendations, as described in the TUNING_RECOMMENDATIONS table. Privileges Must be a superuser. Examples See Analyzing Workloads through an API in the Administrator's Guide for examples. HP Vertica Analytic Database (7.0.x) Page 567 of 1539 SQL Reference Manual SQL Functions See Also TUNING_RECOMMENDATIONS l l l AUDIT Estimates the raw data size of a database, a schema, a projection, or a table as it is counted in an audit of the database size. The AUDIT function estimates the size using the same data sampling method as the audit that HP Vertica performs to determine if a database is compliant with the database size allowances in its license. The results of this function are not considered when HP Vertica determines whether the size of the database complies with the HP Vertica license's data allowance. See How HP Vertica Calculates Database Size in the Administrator's Guide for details. Note: This function can only audit the size of tables, projections, schemas, and databases which the user has permission to access. If a non-superuser attempts to audit the entire database, the audit will only estimate the size of the data that the user is allowed to read. Syntax AUDIT([name] [, granularity] [, error_tolerance [, confidence_level]]) Parameters name Specifies the schema, projection, or table to audit. Enter name as a string, in single quotes (''). If the name string is empty (''), the entire database is audited. HP Vertica Analytic Database (7.0.x) Page 568 of 1539 SQL Reference Manual SQL Functions granularity Indicates the level at which the audit reports its results. The recognized levels are: l 'schema' l 'table' l 'projection' By default, the granularity is the same level as name. For example, if name is a schema, then the size of the entire schema is reported. If you instead specify 'table' as the granularity, AUDIT reports the size of each table in the schema. The granularity must be finer than that of object: specifying 'schema' for an audit of a table has no effect. The results of an audit with a granularity are reported in the V_ CATALOG.USER_AUDITS system table. error_tolerance Specifies the percentage margin of error allowed in the audit estimate. Enter the tolerance value as a decimal number, between 0 and 100. The default value is 5, for a 5% margin of error. Note: The lower this value is, the more resources the audit uses since it will perform more data sampling. Setting this value to 0 results in a full audit of the database, which is very resource intensive, as all of the data in the database is analyzed. Doing a full audit of the database significantly impacts performance and is not recommended on a production database. 
confidence_level Specifies the statistical confidence level percentage of the estimate. Enter the confidence value as a decimal number, between 0 and 100. The default value is 99, indicating a confidence level of 99%. Note: The higher the confidence value, the more resources the function uses since it will perform more data sampling. Setting this value to 1 results in a full audit of the database, which is very resource intensive, as all of the database is analyzed. Doing a full audit of the database significantly impacts performance and is not recommended on a production database. Permissions l SELECT privilege on table l USAGE privilege on schema Note: AUDIT() works only on the tables where the user calling the function has SELECT permissions. HP Vertica Analytic Database (7.0.x) Page 569 of 1539 SQL Reference Manual SQL Functions Notes Due to the iterative sampling used in the auditing process, making the error tolerance a small fraction of a percent (0.00001, for example) can cause the AUDIT function to run for a longer period than a full database audit. Examples To audit the entire database: => SELECT AUDIT(''); AUDIT ---------76376696 (1 row) To audit the database with a 25% error tolerance: => SELECT AUDIT('',25); AUDIT ---------75797126 (1 row) To audit the database with a 25% level of tolerance and a 90% confidence level: => SELECT AUDIT('',25,90); AUDIT ---------76402672 (1 row) To audit just the online_sales schema in the VMart example database: VMart=> SELECT AUDIT('online_sales'); AUDIT ---------35716504 (1 row) To audit the online_sales schema and report the results by table: => SELECT AUDIT('online_sales','table'); AUDIT -----------------------------------------------------------------See table sizes in v_catalog.user_audits for schema online_sales (1 row) HP Vertica Analytic Database (7.0.x) Page 570 of 1539 SQL Reference Manual SQL Functions => \x Expanded display is on. 
=> SELECT * FROM user_audits WHERE object_schema = 'online_sales'; -[ RECORD 1 ]-------------------------+-----------------------------size_bytes | 64960 user_id | 45035996273704962 user_name | dbadmin object_id | 45035996273717636 object_type | TABLE object_schema | online_sales object_name | online_page_dimension audit_start_timestamp | 2011-04-05 09:24:48.224081-04 audit_end_timestamp | 2011-04-05 09:24:48.337551-04 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 64960 confidence_interval_upper_bound_bytes | 64960 sample_count | 0 cell_count | 0 -[ RECORD 2 ]-------------------------+-----------------------------size_bytes | 20197 user_id | 45035996273704962 user_name | dbadmin object_id | 45035996273717640 object_type | TABLE object_schema | online_sales object_name | call_center_dimension audit_start_timestamp | 2011-04-05 09:24:48.340206-04 audit_end_timestamp | 2011-04-05 09:24:48.365915-04 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 20197 confidence_interval_upper_bound_bytes | 20197 sample_count | 0 cell_count | 0 -[ RECORD 3 ]-------------------------+-----------------------------size_bytes | 35614800 user_id | 45035996273704962 user_name | dbadmin object_id | 45035996273717644 object_type | TABLE object_schema | online_sales object_name | online_sales_fact audit_start_timestamp | 2011-04-05 09:24:48.368575-04 audit_end_timestamp | 2011-04-05 09:24:48.379307-04 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | t confidence_interval_lower_bound_bytes | 34692956 confidence_interval_upper_bound_bytes | 36536644 sample_count | 10000 cell_count | 9000000 HP Vertica Analytic Database (7.0.x) Page 571 of 1539 SQL Reference Manual SQL Functions AUDIT_FLEX Estimates the ROS size of one or more flexible tables contained in a database, schema, or projection. Use this function for flex tables only. Invoking audit_flex() with a columnar table results in an error. The audit_flex() function measures encoded, compressed data stored in ROS containers for the __raw__ column of one or more flexible tables. The function does not audit other flex table columns that are created as, or promoted to, real columns. Temporary flex tables are not included in the audit. Each time a user calls audit_flex(), HP Vertica stores the results in the V_CATALOG.USER_ AUDITS system table. Syntax AUDIT_FLEX (name) Parameters name Specifies what database entity to audit. Enter the entity name as a string in single quotes (''), as follows: l Empty string ('') — Return the size of the ROS containers for all flexible tables in the database. You cannot enter the database name. l Schema name ('schema_name') — Return the size of all __raw__ columns of flexible tables in schema_name. l A projection name ('proj_name') — Return the ROS size of a projection for a __raw__ column. l A flex table name ('flex_table_name') — Return the ROS size of a flex table's __ raw__ column. Permissions l SELECT privilege on table l USAGE privilege on schema Note: AUDIT_FLEX() works only on the flexible tables, projections, schemas, and databases to which the user has permissions. 
HP Vertica Analytic Database (7.0.x) Page 572 of 1539 SQL Reference Manual SQL Functions Examples To audit the flex tables in the database: dbs=> select audit_flex(''); audit_flex -----------8567679 (1 row) To audit the flex tables in a specific schema, such as public: dbs=> select audit_flex('public'); audit_flex -----------8567679 (1 row) To audit the flex tables in a specific projection, such as bakery_b0: dbs=> select audit_flex('bakery_b0'); audit_flex -----------8566723 (1 row) To audit a flex table, such as bakery: dbs=> select audit_flex('bakery'); audit_flex -----------8566723 (1 row) To report the results of all audits saved in the USER_AUDITS, the following shows part of an extended display from the system table showing an audit run on a schema called test, and the entire database, dbs: dbs=> \x Expanded display is on. dbs=> select * from user_audits; -[ RECORD 1 ]-------------------------+-----------------------------size_bytes | 0 user_id | 45035996273704962 user_name | release object_id | 45035996273736664 object_type | SCHEMA object_schema | object_name | test HP Vertica Analytic Database (7.0.x) Page 573 of 1539 SQL Reference Manual SQL Functions audit_start_timestamp | 2014-02-04 14:52:15.126592-05 audit_end_timestamp | 2014-02-04 14:52:15.139475-05 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 0 confidence_interval_upper_bound_bytes | 0 sample_count | 0 cell_count | 0 -[ RECORD 2 ]-------------------------+-----------------------------size_bytes | 38051 user_id | 45035996273704962 user_name | release object_id | 45035996273704974 object_type | DATABASE object_schema | object_name | dbs audit_start_timestamp | 2014-02-05 13:44:41.11926-05 audit_end_timestamp | 2014-02-05 13:44:41.227035-05 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 38051 confidence_interval_upper_bound_bytes | 38051 sample_count | 0 cell_count | 0 -[ RECORD 3 ]-------------------------+-----------------------------. . . AUDIT_LICENSE_SIZE Triggers an immediate audit of the database size to determine if it is in compliance with the raw data storage allowance included in your HP Vertica license. The audit is performed in the background, so this function call returns immediately. To see the results of the audit when it is done, use the GET_ COMPLIANCE_STATUS function. Syntax AUDIT_LICENSE_SIZE() Privileges Must be a superuser. Example => SELECT audit_license_size(); HP Vertica Analytic Database (7.0.x) Page 574 of 1539 SQL Reference Manual SQL Functions audit_license_size -------------------Service hurried (1 row) AUDIT_LICENSE_TERM Triggers an immediate audit to determine if the HP Vertica license has expired. The audit happens in the background, so this function returns immediately. To see the result of the audit, use the GET_ COMPLIANCE_STATUS function. Syntax AUDIT_LICENSE_TERM() Privileges Must be a superuser. Example => SELECT AUDIT_LICENSE_TERM(); AUDIT_LICENSE_TERM -------------------Service hurried (1 row) BUILD_FLEXTABLE_VIEW Creates, or recreates, a view for a default or user-defined _keys table. If you do not specify a view_ name argument, the default name is the flex table name with a _view suffix. For example, if you specify the table darkdata as the sole argument to this function, the default view is called darkdata_view. You cannot specify a custom view name with the same name as the default view flex_table_ view, unless you first do the following: 1. 
Drop the default-named view 2. Create your own view of the same name Usage build_flextable_view('flex_table' [ [,'view_name'] [,'user_keys_table'] ]) HP Vertica Analytic Database (7.0.x) Page 575 of 1539 SQL Reference Manual SQL Functions Arguments flex_table The flex table name. By default, this function builds or rebuilds a view for the input table with the current contents of the associated flex_table_keys table. view_name [Optional] A custom view name. Use this option to build or rebuild a new or existing view of your choice for the input table with the current contents of the associated flex_ table_keys table, rather than the default view ( flex_ table_view). user_keys_ table [Optional] Specifies a keys table from which to create a view. Use this option if you created a custom user_keys table for keys of interest from the flex table map data, rather than the default flex_table_keys table. The function builds a view from the keys in user_keys table, rather than from the flex_ table_keys table. Examples Following are examples of calling build_flextable_view with 1, 2, or 3 arguments. Creating a Default View To create, or recreate, a default view: 1. Call the function with a single argument of a flex table, darkdata, in this example: kdb=> select build_flextable_view('darkdata'); build_flextable_view ----------------------------------------------------The view public.darkdata_view is ready for querying (1 row) The function creates a view from the darkdata_keys table. 2. Query from the default view name (darkdata_view): kdb=> select "user.id" from darkdata_view; user.id ----------340857907 727774963 390498773 288187825 HP Vertica Analytic Database (7.0.x) Page 576 of 1539 SQL Reference Manual SQL Functions 164464905 125434448 601328899 352494946 (12 rows) Creating a Custom Name View To create, or recreate, a default view with a custom name: 1. Call the function with two arguments, a flex table, darkdata, and the name of the view to create, dd_view, in this example: kdb=> select build_flextable_view('darkdata', 'dd_view'); build_flextable_view ----------------------------------------------The view public.dd_view is ready for querying (1 row) 2. Query from the custom view name (dd_view): kdb=> select "user.lang" from dd_view; user.lang ----------tr en es en en it es en (12 rows) Creating a View From a Custom Keys Table To create a view from a custom _keys table with build_flextable_view, the table must already exist. The custom table must have the same schema and table definition as the default table (darkdata_keys). Following are a couple of ways to create a custom keys table: 1. Create a table with the all keys from the keys table: kdb=> create table new_darkdata_keys as select * from darkdata_keys; HP Vertica Analytic Database (7.0.x) Page 577 of 1539 SQL Reference Manual SQL Functions CREATE TABLE 2. Alternatively, create a table based on the default keys table, but without content: kdb=> create table new_darkdata_keys as select * from darkdata_keys LIMIT 0; CREATE TABLE kdb=> select * from new_darkdata_keys; key_name | frequency | data_type_guess ----------+-----------+----------------(0 rows) 3. 
Given an existing table (or creating one with no data), insert one or more keys: kdb=> create table dd_keys as select * from darkdata_keys limit 0; CREATE TABLE kdb=> insert into dd_keys (key_name) values ('user.lang'); OUTPUT -------1 (1 row) kdb=> insert into dd_keys (key_name) values ('user.name'); OUTPUT -------1 (1 row) kdb=> select * from dd_keys; key_name | frequency | data_type_guess -----------+-----------+----------------user.lang | | user.name | | (2 rows) Continue once your custom keys table exists. 1. Call the function with all arguments, a flex table, the name of the view to create, and the custom keys table: kdb=> select build_flextable_view('darkdata', 'dd_view', 'new_darkdata_keys'); build_flextable_view ----------------------------------------------The view public.dd_view is ready for querying (1 row) 2. Query the new view: SELECT * from dd_view; HP Vertica Analytic Database (7.0.x) Page 578 of 1539 SQL Reference Manual SQL Functions See Also l COMPUTE_FLEXTABLE_KEYS l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l MATERIALIZE_FLEXTABLE_COLUMNS l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW CANCEL_REBALANCE_CLUSTER Stops any rebalance task currently in progress. Syntax CANCEL_REBALANCE_CLUSTER() Privileges Must be a superuser. Example => SELECT CANCEL_REBALANCE_CLUSTER(); CANCEL_REBALANCE_CLUSTER -------------------------CANCELED (1 row) See Also l START_REBALANCE_CLUSTER l REBALANCE_CLUSTER CANCEL_REFRESH Cancels refresh-related internal operations initiated by START_REFRESH(). Syntax CANCEL_REFRESH() HP Vertica Analytic Database (7.0.x) Page 579 of 1539 SQL Reference Manual SQL Functions Privileges None Notes l Refresh tasks run in a background thread in an internal session, so you cannot use INTERRUPT_STATEMENT to cancel those statements. Instead, use CANCEL_REFRESH to cancel statements that are run by refresh-related internal sessions. l Run CANCEL_REFRESH() on the same node on which START_REFRESH() was initiated. l CANCEL_REFRESH() cancels the refresh operation running on a node, waits for the cancelation to complete, and returns SUCCESS. l Only one set of refresh operations runs on a node at any time. Example Cancel a refresh operation executing in the background. t=> SELECT START_REFRESH(); START_REFRESH ---------------------------------------Starting refresh background process. (1 row) => SELECT CANCEL_REFRESH(); CANCEL_REFRESH ---------------------------------------Stopping background refresh process. (1 row) See Also l INTERRUPT_STATEMENT l SESSIONS l START_REFRESH l PROJECTION_REFRESHES CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY Changes the run-time priority of a query that is actively running. HP Vertica Analytic Database (7.0.x) Page 580 of 1539 SQL Reference Manual SQL Functions Syntax CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY(TRANSACTION_ID, 'value') Parameters TRANSACTION_ID An identifier for the transaction within the session. TRANSACTION_ID cannot be NULL. You can find the transaction ID in the Sessions table. 'value' The RUNTIMEPRIORITY value. Can be HIGH, MEDIUM, or LOW. Privileges No special privileges required. However, non-super users can change the run-time priority of their own queries only. In addition, non-superusers can never raise the run-time priority of a query to a level higher than that of the resource pool. Example => SELECT CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY(45035996273705748, 'low'); CHANGE_RUNTIME_PRIORITY Changes the run-time priority of a query that is actively running. 
Note that, while this function is still valid, you should instead use CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY to change run-time priority. CHANGE_RUNTIME_PRIORITY will be deprecated in a future release of Vertica.

Syntax

CHANGE_RUNTIME_PRIORITY(TRANSACTION_ID, STATEMENT_ID, 'value')

Parameters

TRANSACTION_ID: An identifier for the transaction within the session. TRANSACTION_ID cannot be NULL. You can find the transaction ID in the Sessions table.

STATEMENT_ID: A unique numeric ID assigned by the HP Vertica catalog, which identifies the currently executing statement. You can find the statement ID in the Sessions table. You can specify NULL to change the run-time priority of the currently running query within the transaction.

'value': The RUNTIMEPRIORITY value. Can be HIGH, MEDIUM, or LOW.

Privileges

No special privileges required. However, non-superusers can change the run-time priority of their own queries only. In addition, non-superusers can never raise the run-time priority of a query to a level higher than that of the resource pool.

Example

=> SELECT CHANGE_RUNTIME_PRIORITY(45035996273705748, NULL, 'low');

CLEAR_CACHES

Clears the HP Vertica internal cache files.

Syntax

CLEAR_CACHES ( )

Privileges

Must be a superuser.

Notes

If you want to run benchmark tests for your queries, then in addition to clearing the internal HP Vertica cache files, clear the Linux file system cache. The kernel uses unallocated memory as a cache to hold clean disk blocks. If you are running version 2.6.16 or later of Linux and you have root access, you can clear the kernel file system cache as follows:

1. Make sure that all data in the cache is written to disk:

   # sync

2. Writing to the drop_caches file causes the kernel to drop clean caches, dentries, and inodes from memory, making that memory free:

   To clear the page cache:
   # echo 1 > /proc/sys/vm/drop_caches

   To clear the dentries and inodes:
   # echo 2 > /proc/sys/vm/drop_caches

   To clear the page cache, dentries, and inodes:
   # echo 3 > /proc/sys/vm/drop_caches

Example

The following example clears the HP Vertica internal cache files:

=> SELECT CLEAR_CACHES();
 CLEAR_CACHES
--------------
 Cleared
(1 row)

CLEAR_DATA_COLLECTOR

Clears all memory and disk records on the Data Collector tables and functions and resets collection statistics in the V_MONITOR.DATA_COLLECTOR system table. A superuser can clear Data Collector data for all components or for an individual component. After you clear the Data Collector log, the information is no longer available for querying.

Syntax

CLEAR_DATA_COLLECTOR( [ 'component' ] )

Parameters

component: Clears memory and disk records for the specified component only. If you provide no argument, the function clears all Data Collector memory and disk records for all components. For the current list of component names, query the V_MONITOR.DATA_COLLECTOR system table.

Privileges

Must be a superuser.
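Before clearing an individual component, you can confirm its name by querying the V_MONITOR.DATA_COLLECTOR system table, as noted above. A minimal sketch (the available components vary by installation and version):

=> SELECT DISTINCT component FROM v_monitor.data_collector ORDER BY component;  -- lists the names CLEAR_DATA_COLLECTOR accepts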
Examples

The following command clears memory and disk records for the ResourceAcquisitions component:

=> SELECT clear_data_collector('ResourceAcquisitions');
 clear_data_collector
----------------------
 CLEAR
(1 row)

The following command clears data collection for all components on all nodes:

=> SELECT clear_data_collector();
 clear_data_collector
----------------------
 CLEAR
(1 row)

See Also

- DATA_COLLECTOR

CLEAR_PROFILING

HP Vertica stores profiled data in memory, so depending on how much data you collect, profiling could be memory intensive. You can use this function to clear profiled data from memory.

Syntax

CLEAR_PROFILING( 'profiling-type' )

Parameters

profiling-type: The type of profiling data you want to clear. Can be one of:

- session: clears profiling for basic session parameters and lock timeout data
- query: clears profiling for general information about queries that ran, such as the query strings used and the duration of queries
- ee: clears profiling for information about the execution run of each query

Example

The following statement clears profiled data for queries:

=> SELECT CLEAR_PROFILING('query');

See Also

- DISABLE_PROFILING
- ENABLE_PROFILING
- Profiling Database Performance

CLEAR_PROJECTION_REFRESHES

Triggers HP Vertica to clear information about refresh operations for projections immediately.

Syntax

CLEAR_PROJECTION_REFRESHES()

Notes

Information about a refresh operation, whether successful or unsuccessful, is maintained in the PROJECTION_REFRESHES system table until either the CLEAR_PROJECTION_REFRESHES() function is executed or the storage quota for the table is exceeded. The PROJECTION_REFRESHES.IS_EXECUTING column returns a Boolean value that indicates whether the refresh is currently running (t) or occurred in the past (f).

Privileges

Must be a superuser.

Example

To immediately purge projection refresh history, use the CLEAR_PROJECTION_REFRESHES() function:

=> SELECT CLEAR_PROJECTION_REFRESHES();
 CLEAR_PROJECTION_REFRESHES
----------------------------
 CLEAR
(1 row)

Only the rows where the PROJECTION_REFRESHES.IS_EXECUTING column equals false are cleared.

See Also

- PROJECTION_REFRESHES
- REFRESH
- START_REFRESH

CLEAR_RESOURCE_REJECTIONS

Clears the content of the RESOURCE_REJECTIONS and DISK_RESOURCE_REJECTIONS system tables. Normally, these tables are cleared only during a node restart. This function lets you clear the tables whenever you need to. For example, you might want to clear the system tables after you have resolved a disk space issue that was causing disk resource rejections.

Syntax

CLEAR_RESOURCE_REJECTIONS();

Privileges

Must be a superuser.

Example

The following command clears the content of the RESOURCE_REJECTIONS and DISK_RESOURCE_REJECTIONS system tables:

=> SELECT clear_resource_rejections();
 clear_resource_rejections
---------------------------
 OK
(1 row)

See Also

- DISK_RESOURCE_REJECTIONS
- RESOURCE_REJECTIONS

CLEAR_OBJECT_STORAGE_POLICY

Removes an existing storage policy, so that the specified object no longer uses a labeled storage location as its default. Any data currently stored at the labeled location in the object's storage policy is moved to default storage during the next Tuple Mover (TM) moveout operation.
Syntax CLEAR_OBJECT_STORAGE_POLICY ( 'object_name' , [', key_min, key_max ']) Parameters object_name Specifies the database object with a storage policy to clear. key_min, key_max Specifies the table partition key value ranges stored at the labeled location. These parameters are applicable only when object_name is a table. Privileges Must be a superuser. Example This example clears the storage policy for the object lineorder: release=> select clear_object_storage_policy('lineorder'); HP Vertica Analytic Database (7.0.x) Page 587 of 1539 SQL Reference Manual SQL Functions clear_object_storage_policy ----------------------------------Default storage policy cleared. (1 row) See Also l Clearing Storage Policies l ALTER_LOCATION_LABEL l SET_OBJECT_STORAGE_POLICY CLOSE_SESSION Interrupts the specified external session, rolls back the current transaction, if any, and closes the socket. Syntax CLOSE_SESSION ( 'sessionid' ) Parameters sessionid A string that specifies the session to close. This identifier is unique within the cluster at any point in time but can be reused when the session closes. Privileges None; however, a non-superuser can only close his or her own session. Notes l Closing of the session is processed asynchronously. It could take some time for the session to be closed. Check the SESSIONS table for the status. l Database shutdown is prevented if new sessions connect after the CLOSE_SESSION() command is invoked (and before the database is actually shut down. See Controlling Sessions below. Messages The following are the messages you could encounter: HP Vertica Analytic Database (7.0.x) Page 588 of 1539 SQL Reference Manual SQL Functions l For a badly formatted sessionID close_session | Session close command sent. Check SESSIONS for progress.Error: invalid Session ID format l For an incorrect sessionID parameter Error: Invalid session ID or statement key Examples User session opened. RECORD 2 shows the user session running COPY DIRECT statement. => SELECT * FROM sessions; -[ RECORD 1 ]--------------+----------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (SELECT * FROM sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+----------------------------------------------node_name | v_vmartdb_node0002 user_name | dbadmin client_hostname | 127.0.0.1:57174 client_pid | 30117 login_timestamp | 2011-01-03 15:33:00.842021-05 session_id | stress05-27944:0xc1a client_label | transaction_start | 2011-01-03 15:34:46.538102 transaction_id | -1 transaction_description | user dbadmin (COPY ClickStream_Fact FROM '/data/clickstream/1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:34:46.538862 statement_id | last_statement_duration_us | 26250 current_statement | COPY ClickStream_Fact FROM '/data/clickstream /1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT; ssl_state | None HP Vertica Analytic Database (7.0.x) Page 589 of 1539 SQL Reference Manual SQL Functions authentication_method | Trust Close user session stress05-27944:0xc1a => \xExpanded display is off. 
=> SELECT CLOSE_SESSION('stress05-27944:0xc1a'); CLOSE_SESSION -------------------------------------------------------------------Session close command sent. Check v_monitor.sessions for progress. (1 row) Query the sessions table again for current status, and you can see that the second session has been closed: => SELECT * FROM SESSIONS; -[ RECORD 1 ]--------------+-------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from SESSIONS;) statement_start | 2011-01-03 16:12:07.841298 statement_id | 20 last_statement_duration_us | 2099 current_statement | SELECT * FROM SESSIONS; ssl_state | None authentication_method | Trust Controlling Sessions The database administrator must be able to disallow new incoming connections in order to shut down the database. On a busy system, database shutdown is prevented if new sessions connect after the CLOSE_SESSION or CLOSE_ALL_SESSIONS() command is invoked—and before the database actually shuts down. One option is for the administrator to issue the SHUTDOWN('true') command, which forces the database to shut down and disallow new connections. See SHUTDOWN in the SQL Reference Manual. Another option is to modify the MaxClientSessions parameter from its original value to 0, in order to prevent new non-dbadmin users from connecting to the database. 1. Determine the original value for the MaxClientSessions parameter by querying the V_ MONITOR.CONFIGURATIONS_PARAMETERS system table: HP Vertica Analytic Database (7.0.x) Page 590 of 1539 SQL Reference Manual SQL Functions => SELECT CURRENT_VALUE FROM CONFIGURATION_PARAMETERS WHERE parameter_name='MaxClient Sessions'; CURRENT_VALUE --------------50 (1 row) 2. Set the MaxClientSessions parameter to 0 to prevent new non-dbadmin connections: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 0); Note: The previous command allows up to five administrators to log in. 3. Issue the CLOSE_ALL_SESSIONS() command to remove existing sessions: => SELECT CLOSE_ALL_SESSIONS(); 4. Query the SESSIONS table: => SELECT * FROM SESSIONS; When the session no longer appears in the SESSIONS table, disconnect and run the Stop Database command. 5. Restart the database. 6. Restore the MaxClientSessions parameter to its original value: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 50); See Also l CLOSE_ALL_SESSIONS l CONFIGURATION_PARAMETERS l SESSIONS SHUTDOWN l l l HP Vertica Analytic Database (7.0.x) Page 591 of 1539 SQL Reference Manual SQL Functions CLOSE_ALL_SESSIONS Closes all external sessions except the one issuing the CLOSE_ALL_SESSIONS functions. Syntax CLOSE_ALL_SESSIONS() Privileges None; however, a non-superuser can only close his or her own session. Notes Closing of the sessions is processed asynchronously. It might take some time for the session to be closed. Check the SESSIONS table for the status. Database shutdown is prevented if new sessions connect after the CLOSE_SESSION or CLOSE_ ALL_SESSIONS() command is invoked (and before the database is actually shut down). See Controlling Sessions below. Message close_all_sessions | Close all sessions command sent. Check SESSIONS for progress. 
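Because closing sessions is asynchronous, you can poll the SESSIONS system table until the targeted session disappears. A minimal sketch (the session ID shown is the one closed in the CLOSE_SESSION example above):

=> SELECT session_id, statement_id, current_statement
   FROM v_monitor.sessions
   WHERE session_id = 'stress05-27944:0xc1a';  -- returns no rows once the session is fully closed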
Examples Two user sessions opened, each on a different node: vmartdb=> SELECT * FROM sessions; -[ RECORD 1 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust HP Vertica Analytic Database (7.0.x) Page 592 of 1539 SQL Reference Manual SQL Functions -[ RECORD 2 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0002 user_name | dbadmin client_hostname | 127.0.0.1:57174 client_pid | 30117 login_timestamp | 2011-01-03 15:33:00.842021-05 session_id | stress05-27944:0xc1a client_label | transaction_start | 2011-01-03 15:34:46.538102 transaction_id | -1 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/mart_Fact.tbl' DELIMITER '|' NULL '\\n';) statement_start | 2011-01-03 15:34:46.538862 statement_id | last_statement_duration_us | 26250 current_statement | COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n'; ssl_state | None authentication_method | Trust -[ RECORD 3 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0003 user_name | dbadmin client_hostname | 127.0.0.1:56367 client_pid | 1191 login_timestamp | 2011-01-03 15:31:44.939302-05 session_id | stress06-25663:0xbec client_label | transaction_start | 2011-01-03 15:34:51.05939 transaction_id | 54043195528458775 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:35:46.436748 statement_id | last_statement_duration_us | 1591403 current_statement | COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT; ssl_state | None authentication_method | Trust Close all sessions: vmartdb=> \xExpanded display is off. vmartdb=> SELECT CLOSE_ALL_SESSIONS(); CLOSE_ALL_SESSIONS ------------------------------------------------------------------------Close all sessions command sent. Check v_monitor.sessions for progress. (1 row) Session contents after issuing the CLOSE_ALL_SESSIONS() command: => SELECT * FROM SESSIONS;-[ RECORD 1 ]--------------+--------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 HP Vertica Analytic Database (7.0.x) Page 593 of 1539 SQL Reference Manual SQL Functions client_pid login_timestamp session_id client_label transaction_start transaction_id transaction_description statement_start statement_id last_statement_duration_us current_statement ssl_state authentication_method | | | | | | | | | | | | | 4554 2011-01-03 14:05:40.252625-05 stress04-4325:0x14 2011-01-03 14:05:44.325781 45035996273728326 user dbadmin (SELECT * FROM sessions;) 2011-01-03 16:19:56.720071 25 15605 SELECT * FROM SESSIONS; None Trust Controlling Sessions The database administrator must be able to disallow new incoming connections in order to shut down the database. 
On a busy system, database shutdown is prevented if new sessions connect after the CLOSE_SESSION or CLOSE_ALL_SESSIONS() command is invoked—and before the database actually shuts down. One option is for the administrator to issue the SHUTDOWN('true') command, which forces the database to shut down and disallow new connections. See SHUTDOWN in the SQL Reference Manual. Another option is to modify the MaxClientSessions parameter from its original value to 0, in order to prevent new non-dbadmin users from connecting to the database. 1. Determine the original value for the MaxClientSessions parameter by querying the V_ MONITOR.CONFIGURATIONS_PARAMETERS system table: => SELECT CURRENT_VALUE FROM CONFIGURATION_PARAMETERS WHERE parameter_name='MaxClient Sessions'; CURRENT_VALUE --------------50 (1 row) 2. Set the MaxClientSessions parameter to 0 to prevent new non-dbadmin connections: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 0); Note: The previous command allows up to five administrators to log in. 3. Issue the CLOSE_ALL_SESSIONS() command to remove existing sessions: HP Vertica Analytic Database (7.0.x) Page 594 of 1539 SQL Reference Manual SQL Functions => SELECT CLOSE_ALL_SESSIONS(); 4. Query the SESSIONS table: => SELECT * FROM SESSIONS; When the session no longer appears in the SESSIONS table, disconnect and run the Stop Database command. 5. Restart the database. 6. Restore the MaxClientSessions parameter to its original value: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 50); See Also l CLOSE_SESSION l CONFIGURATION_PARAMETERS l SHUTDOWN SESSIONS l l l COMPUTE_FLEXTABLE_KEYS Computes the virtual columns (keys and values) from the map data of a flex table and repopulates the associated _keys table. The keys table has the following columns: l key_name l frequency l data_type_guess This function sorts the keys table by frequency and key_name. Use this function to compute keys without creating an associated table view. To build a view as well, use COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW. HP Vertica Analytic Database (7.0.x) Page 595 of 1539 SQL Reference Manual SQL Functions Usage compute_flextable_keys('flex_table') Arguments flex_table The name of the flex table. Examples During execution, this function determines a data type for each virtual column, casting the values it computes to VARCHAR, LONG VARCHAR, or LONG VARBINARY, depending on the length of the key, and whether the key includes nested maps. The following examples illustrate this function and the results of populating the _keys table, once you've created a flex table (darkdata1) and loaded data: kdb=> create flex table darkdata1(); CREATE TABLE kdb=> copy darkdata1 from '/test/flextable/DATA/tweets_12.json' parser fjsonparser(); Rows Loaded ------------12 (1 row) kdb=> select compute_flextable_keys('darkdata1'); compute_flextable_keys -------------------------------------------------Please see public.darkdata1_keys for updated keys (1 row) kdb=> select * from darkdata1_keys; key_name | frequency | data_type_guess ----------------------------------------------------------+-----------+--------------------contributors | 8 | varchar(20) coordinates | 8 | varchar(20) created_at | 8 | varchar(60) entities.hashtags | 8 | long varbinary(18 6) entities.urls | 8 | long varbinary(3 2) entities.user_mentions | 8 | long varbinary(67 4) . . . 
retweeted_status.user.time_zone | 1 | varchar(20) retweeted_status.user.url | 1 | varchar(68) retweeted_status.user.utc_offset | 1 | varchar(20) retweeted_status.user.verified | 1 | varchar(20) (125 rows) The flex keys table has these columns: HP Vertica Analytic Database (7.0.x) Page 596 of 1539 SQL Reference Manual SQL Functions Column Description key_name The name of the virtual column (key). frequency The number of times the virtual column occurs in the map. data_ type_ guess The data type for each virtual column, cast to VARCHAR, LONG VARCHAR or LONG VARBINARY, depending on the length of the key, and whether the key includes one or more nested maps. In the _keys table output, the data_type_guess column values are also followed by a value in parentheses, such as varchar(20). The value indicates the padded width of the key column, as calculated by the longest field, multiplied by the FlexTableDataTypeGuessMultiplier configuration parameter value. For more information, see Setting Flex Table Parameters. See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l MATERIALIZE_FLEXTABLE_COLUMNS l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW Combines the functionality of BUILD_FLEXTABLE_VIEW and COMPUTE_FLEXTABLE_KEYS to compute virtual columns (keys) from the map data of a flex table , and construct a view. If you don't need to perform both operations together, use one of the single-operation functions. Usage compute_flextable_keys_and_build_view('flex_table') Arguments flex_table The name of a flex table. Examples The following example calls the function for the darkdata flex table. kdb=> select compute_flextable_keys_and_build_view('darkdata'); HP Vertica Analytic Database (7.0.x) Page 597 of 1539 SQL Reference Manual SQL Functions compute_flextable_keys_and_build_view ----------------------------------------------------------------------Please see public.darkdata_keys for updated keys The view public.darkdata_view is ready for querying (1 row) See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS l MATERIALIZE_FLEXTABLE_COLUMNS l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW CURRENT_SCHEMA Returns the name of the current schema. Behavior Type Stable Syntax CURRENT_SCHEMA() Privileges None Notes The CURRENT_SCHEMA function does not require parentheses. Examples The following command returns the name of the current schema: => SELECT CURRENT_SCHEMA(); current_schema ---------------public HP Vertica Analytic Database (7.0.x) Page 598 of 1539 SQL Reference Manual SQL Functions (1 row) The following command returns the same results without the parentheses: => SELECT CURRENT_SCHEMA; current_schema ---------------public (1 row) The following command shows the current schema, listed after the current user, in the search path: => SHOW SEARCH_PATH; name | setting -------------+--------------------------------------------------search_path | "$user", public, v_catalog, v_monitor, v_internal (1 row) See Also l SET SEARCH_PATH DATA_COLLECTOR_HELP Returns online usage instructions about the Data Collector, the DATA_COLLECTOR system table, and the Data Collector control functions. Syntax DATA_COLLECTOR_HELP() Privileges None Returns The DATA_COLLECTOR_HELP() function returns the following information: => SELECT DATA_COLLECTOR_HELP(); ----------------------------------------------------------------------------Usage Data Collector The data collector retains history of important system activities. 
This data can be used as a reference of what actions have been taken HP Vertica Analytic Database (7.0.x) Page 599 of 1539 SQL Reference Manual SQL Functions by users, but it can also be used to locate performance bottlenecks, or identify potential improvements to the Vertica configuration. This data is queryable via Vertica system tables. Acccess a list of data collector components, and some statistics, by running: SELECT * FROM v_monitor.data_collector; The amount of data retained by size and time can be controlled with several functions. To just set the size amount: set_data_collector_policy( , , ); To set both the size and time amounts (the smaller one will dominate): set_data_collector_policy( , , , ); To set just the time amount: set_data_collector_time_policy( , ); To set the time amount for all tables: set_data_collector_time_policy( ); The current retention policy for a component can be queried with: get_data_collector_policy( ); Data on disk is kept in the "DataCollector" directory under the Vertica \catalog path. This directory also contains instructions on how to load the monitoring data into another Vertica database. To move the data collector logs and instructions to other storage locations, create labeled storage locations using add_location and then use: set_data_collector_storage_location( ); Additional commands can be used to configure the data collection logs. The log can be cleared with: clear_data_collector([ ]); The log can be synchronized with the disk storage using: flush_data_collector([ ]); See Also l DATA_COLLECTOR l TUNING_RECOMMENDATIONS HP Vertica Analytic Database (7.0.x) Page 600 of 1539 SQL Reference Manual SQL Functions l Analyzing Workloads l Retaining Monitoring Information DISABLE_DUPLICATE_KEY_ERROR Disables error messaging when HP Vertica finds duplicate PRIMARY KEY/UNIQUE KEY values at run time. Queries execute as though no constraints are defined on the schema. Effects are session scoped. Caution: When called, DISABLE_DUPLICATE_KEY_ERROR() suppresses data integrity checking and can lead to incorrect query results. Use this function only after you insert duplicate primary keys into a dimension table in the presence of a pre-join projection. Then correct the violations and turn integrity checking back on with REENABLE_DUPLICATE_ KEY_ERROR(). Syntax DISABLE_DUPLICATE_KEY_ERROR(); Privileges Must be a superuser. Examples The following series of commands create a table named dim and the corresponding projection: CREATE TABLE dim (pk INTEGER PRIMARY KEY, x INTEGER); CREATE PROJECTION dim_p (pk, x) AS SELECT * FROM dim ORDER BY x UNSEGMENTED ALL NODES; The next two statements create a table named fact and the pre-join projection that joins fact to dim. CREATE TABLE fact(fk INTEGER REFERENCES dim(pk)); CREATE PROJECTION prejoin_p (fk, pk, x) AS SELECT * FROM fact, dim WHERE pk=fk ORDER BY x ; The following statements load values into table dim. The last statement inserts a duplicate primary key value of 1: INSERT INTO dim values (1,1);INSERT INTO dim values (2,2); INSERT INTO dim values (1,2); --Constraint violation COMMIT; HP Vertica Analytic Database (7.0.x) Page 601 of 1539 SQL Reference Manual SQL Functions Table dim now contains duplicate primary key values, but you cannot delete the violating row because of the presence of the pre-join projection. 
Any attempt to delete the record results in the following error message: ROLLBACK: Duplicate primary key detected in FK-PK join Hash-Join (x dim_p), value 1 In order to remove the constraint violation (pk=1), use the following sequence of commands, which puts the database back into the state just before the duplicate primary key was added. To remove the violation: 1. Save the original dim rows that match the duplicated primary key: CREATE TEMP TABLE dim_temp(pk integer, x integer); INSERT INTO dim_temp SELECT * FROM dim WHERE pk=1 AND x=1; -- original dim row 2. Temporarily disable error messaging on duplicate constraint values: SELECT DISABLE_DUPLICATE_KEY_ERROR(); Caution: Remember that running the DISABLE_DUPLICATE_KEY_ERROR function suppresses the enforcement of data integrity checking. 3. Remove the original row that contains duplicate values: DELETE FROM dim WHERE pk=1; 4. Allow the database to resume data integrity checking: SELECT REENABLE_DUPLICATE_KEY_ERROR(); 5. Reinsert the original values back into the dimension table: INSERT INTO dim SELECT * from dim_temp;COMMIT; 6. Validate your dimension and fact tables. If you receive the following error message, it means that the duplicate records you want to delete are not identical. That is, the records contain values that differ in at least one column that is not a primary key; for example, (1,1) and (1,2). ROLLBACK: Delete: could not find a data row to delete (data integrity violation?) HP Vertica Analytic Database (7.0.x) Page 602 of 1539 SQL Reference Manual SQL Functions The difference between this message and the rollback message in the previous example is that a fact row contains a foreign key that matches the duplicated primary key, which has been inserted. A row with values from the fact and dimension table is now in the pre-join projection. In order for the DELETE statement (Step 3 in the following example) to complete successfully, extra predicates are required to identify the original dimension table values (the values that are in the pre-join). This example is nearly identical to the previous example, except that an additional INSERT statement joins the fact table to the dimension table by a primary key value of 1: INSERT INTO dim values (1,1);INSERT INTO dim values (2,2); INSERT INTO fact values (1); -- New insert statement joins fact with dim on primar y key value=1 INSERT INTO dim values (1,2); -- Duplicate primary key value=1 COMMIT; To remove the violation: 1. Save the original dim and fact rows that match the duplicated primary key: CREATE TEMP TABLE dim_temp(pk integer, x integer);CREATE TEMP TABLE fact_temp(fk inte ger); INSERT INTO dim_temp SELECT * FROM dim WHERE pk=1 AND x=1; -- original dim row INSERT INTO fact_temp SELECT * FROM fact WHERE fk=1; 2. Temporarily suppresses the enforcement of data integrity checking: SELECT DISABLE_DUPLICATE_KEY_ERROR(); 3. Remove the duplicate primary keys. These steps also implicitly remove all fact rows with the matching foreign key. 4. Remove the original row that contains duplicate values: DELETE FROM dim WHERE pk=1 AND x=1; Note: The extra predicate (x=1) specifies removal of the original (1,1) row, rather than the newly inserted (1,2) values that caused the violation. 5. Remove all remaining rows: DELETE FROM dim WHERE pk=1; HP Vertica Analytic Database (7.0.x) Page 603 of 1539 SQL Reference Manual SQL Functions 6. Reenable integrity checking: SELECT REENABLE_DUPLICATE_KEY_ERROR(); 7. 
Reinsert the original values back into the fact and dimension table: INSERT INTO dim SELECT * from dim_temp; INSERT INTO fact SELECT * from fact_temp; COMMIT; 8. Validate your dimension and fact tables. See Also l ANALYZE_CONSTRAINTS l REENABLE_DUPLICATE_KEY_ERROR DISABLE_ELASTIC_CLUSTER Disables elastic cluster scaling, which prevents HP Vertica from bundling data into chunks that are easily transportable to other nodes when performing cluster resizing. The main reason to disable elastic clustering is if you find that the slightly unequal data distribution in your cluster caused by grouping data into discrete blocks results in performance issues. Syntax DISABLE_ELASTIC_CLUSTER() Privileges Must be a superuser. Example => SELECT DISABLE_ELASTIC_CLUSTER(); DISABLE_ELASTIC_CLUSTER ------------------------DISABLED (1 row) HP Vertica Analytic Database (7.0.x) Page 604 of 1539 SQL Reference Manual SQL Functions See Also l ENABLE_ELASTIC_CLUSTER DISABLE_LOCAL_SEGMENTS Disable local data segmentation, which breaks projections segments on nodes into containers that can be easily moved to other nodes. See Local Data Segmentation in the Administrator's Guide for details. Syntax DISABLE_LOCAL_SEGMENTS() Privileges Must be a superuser. Example => SELECT DISABLE_LOCAL_SEGMENTS(); DISABLE_LOCAL_SEGMENTS -----------------------DISABLED (1 row) DISABLE_PROFILING Disables profiling for the profiling type you specify. Syntax DISABLE_PROFILING( 'profiling-type' ) HP Vertica Analytic Database (7.0.x) Page 605 of 1539 SQL Reference Manual SQL Functions Parameters profiling-type The type of profiling data you want to disable. Can be one of: l session—disables profiling for basic session parameters and lock time out data l query—disables profiling for general information about queries that ran, such as the query strings used and the duration of queries l ee—disables profiling for information about the execution run of each query Example The following statement disables profiling on query execution runs: => SELECT DISABLE_PROFILING('ee'); DISABLE_PROFILING ----------------------EE Profiling Disabled (1 row) See Also l CLEAR_PROFILING l ENABLE_PROFILING l Profiling Database Performance DISPLAY_LICENSE Returns the terms of your HP Vertica license. The information this function displays is: l The start and end dates for which the license is valid (or "Perpetual" if the license has no expiration). l The number of days you are allowed to use HP Vertica after your license term expires (the grace period) l The amount of data your database can store, if your license includes a data allowance. Syntax DISPLAY_LICENSE() HP Vertica Analytic Database (7.0.x) Page 606 of 1539 SQL Reference Manual SQL Functions Privileges None Examples => SELECT DISPLAY_LICENSE(); DISPLAY_LICENSE ---------------------------------------------------HP Vertica Systems, Inc. 1/1/2011 12/31/2011 30 50TB (1 row) DO_TM_TASK Runs a Tuple Mover operation on one or more projections defined on the specified table. Tip: You do not need to stop the Tuple Mover to run this function. Syntax DO_TM_TASK ( 'task' [ , '[[db-name.]schema.]table' | '[[db-name.]schema.]projection' ] ) Parameters task Is one of the following tuple mover operations: l 'moveout' — Moves out all projections on the specified table (if a particular projection is not specified) from WOS to ROS. l 'mergeout' — Consolidates ROS containers and purges deleted records. 
l 'analyze_row_count' — Automatically collects the number of rows in a projection every 60 seconds and aggregates row counts calculated during loads. HP Vertica Analytic Database (7.0.x) Page 607 of 1539 SQL Reference Manual SQL Functions [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table Runs a tuple mover operation for all projections within the specified table. When using more than one schema, specify the schema that contains the table with the projections you want to affect, as noted above. projection If projection is not passed as an argument, all projections in the system are used. If projection is specified, DO_TM_TASK looks for a projection of that name and, if found, uses it; if a named projection is not found, the function looks for a table with that name and, if found, moves out all projections on that table. Privileges l Any INSERT/UPDATE/DELETE privilege on table l USAGE privileges on schema Notes DO_TM_TASK() is useful for moving out all projections from a table or database without having to name each projection individually. Examples The following example performs a moveout of all projections for table t1: => SELECT DO_TM_TASK('moveout', 't1'); The following example performs a moveout for projection t1_proj: => SELECT DO_TM_TASK('moveout', 't1_proj') HP Vertica Analytic Database (7.0.x) Page 608 of 1539 SQL Reference Manual SQL Functions See Also l COLUMN_STORAGE l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS PARTITION_PROJECTION l l l DROP_LICENSE Drops a Flex Zone license key from the global catalog. Syntax DROP_LICENSE( 'license name' ) Parameters license name The name of the license to drop. The name can be found in the licenses table. Privileges Must be a superuser. Notes For more information about license keys, see Managing Licenses in the Administrator's Guide. Examples => SELECT DROP_LICENSE('/tmp/vlicense.dat'); HP Vertica Analytic Database (7.0.x) Page 609 of 1539 SQL Reference Manual SQL Functions DROP_LOCATION Removes the specified storage location. Syntax DROP_LOCATION ( 'path' , 'node' ) Parameters path Specifies where the storage location to drop is mounted. node Is the HP Vertica node where the location is available. Privileges Must be a superuser. Retiring or Dropping a Storage Location Dropping a storage location is a permanent operation and cannot be undone. Therefore, HP recommends that you retire a storage location before dropping it. Retiring a storage location lets you verify that you do not need the storage before dropping it. Additionally, you can easily restore a retired storage location if you determine it is still in use. Storage Locations with Temp and Data Files Dropping storage locations is limited to storage locations that contain only temp files. 
If you use a storage location to store data and then alter it to store only temp files, the location can still contain data files. HP Vertica does not let you drop a storage location containing data files. You can manually merge out the data files from the storage location, and then wait for the ATM to mergeout the data files automatically, or, you can drop partitions. Deleting data files does not work. Example The following example drops a storage location on node3 that was used to store temp files: => SELECT DROP_LOCATION('/secondHP VerticaStorageLocation/' , 'node3'); HP Vertica Analytic Database (7.0.x) Page 610 of 1539 SQL Reference Manual SQL Functions See Also l l l ADD_LOCATION l ALTER_LOCATION_USE l RESTORE_LOCATION l RETIRE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) DROP_PARTITION Forces the partition of projections (if needed) and then drops the specified partition. Syntax DROP_PARTITION ( table_name , partition_value [ , ignore_moveout_errors, reorganize_data ]) Parameters table-name Specifies the name of the table. Note: The table_name argument cannot be used as a dimension table in a pre-joined projection and cannot contain projections that are not up to date (have not been refreshed). partition_value The key of the partition to drop. For example: DROP_PARTITION ('trade', 2006); HP Vertica Analytic Database (7.0.x) Page 611 of 1539 SQL Reference Manual SQL Functions ignore_moveout_errors Optional Boolean, defaults to false. l true—Ignores any WOS moveout errors and forces the operation to continue. Set this parameter to true only if there is no WOS data for the partition. l false (or omit)—Displays any moveout errors and aborts the operation on error. Note: If you set this parameter to true and the WOS includes data for the partition in WOS, partition data in WOS is not dropped. reorganize_data Optional Boolean, defaults to false. l true—Reorganizes the data if it is not organized, and then drops the partition. l false—Does not attempt to reorganize the data before dropping the partition. If this parameter is false and the function encounters a ROS without partition keys, an error occurs. Permissions l Table owner l USAGE privilege on schema that contains the table Notes and Restrictions The results of a DROP_PARTITION call go into effect immediately. If you drop a partition using DROP_PARTITION and then try to add data to a partition with the same name, HP Vertica creates a new partition. If the operation cannot obtain an O Lock on the table(s), HP Vertica attempts to close any internal Tuple Mover (TM) sessions running on the same table(s) so that the operation can proceed. Explicit TM operations that are running in user sessions are not closed. If an explicit TM operation is running on the table, then the operation cannot proceed until the explicit TM operation completes. In general, if a ROS container has data that belongs to n+1 partitions and you want to drop a specific partition, the DROP_PARTITION operation: 1. Forces the partition of data into two containers where n One container holds the data that belongs to the partition that is to be dropped. n Another container holds the remaining n partitions. 2. Drops the specified partition. DROP_PARTITION forces a moveout if there is data in the WOS (WOS is not partition aware). 
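If you prefer not to rely on this implicit moveout, you can move WOS data out explicitly before dropping the partition. A minimal sketch, assuming the trade table and its 2009 partition from the examples that follow:

=> SELECT DO_TM_TASK('moveout', 'trade');  -- flush any WOS data for all projections on trade into ROS
=> SELECT DROP_PARTITION('trade', 2009);   -- then drop the 2009 partition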
HP Vertica Analytic Database (7.0.x) Page 612 of 1539 SQL Reference Manual SQL Functions DROP_PARTITION acquires an exclusive lock on the table to prevent DELETE | UPDATE | INSERT | COPY statements from affecting the table, as well as any SELECT statements issued at SERIALIZABLE isolation level. You cannot perform a DROP_PARTITION operation on tables with projections that are not up to date (have not been refreshed). DROP_PARTITION fails if you do not set the optional third parameter to true and the function encounters ROS's that do not have partition keys. Examples Using the example schema in Defining Partitions, the following command explicitly drops the 2009 partition key from table trade: SELECT DROP_PARTITION('trade', 2009); DROP_PARTITION ------------------Partition dropped (1 row) Here, the partition key is specified: SELECT DROP_PARTITION('trade', EXTRACT('year' FROM '2009-01-01'::date)); DROP_PARTITION ------------------Partition dropped (1 row) The following example creates a table called dates and partitions the table by year: CREATE TABLE dates (year INTEGER NOT NULL, month VARCHAR(8) NOT NULL) PARTITION BY year * 12 + month; The following statement drops the partition using a constant for Oct 2010 (2010*12 + 10 = 24130): SELECT DROP_PARTITION('dates', '24130'); DROP_PARTITION ------------------Partition dropped (1 row) Alternatively, the expression can be placed in line: SELECT DROP_PARTITION('dates', 2010*12 + 10); The following command first reorganizes the data if it is unpartitioned and then explicitly drops the 2009 partition key from table trade: SELECT DROP_PARTITION('trade', 2009, false, true); HP Vertica Analytic Database (7.0.x) Page 613 of 1539 SQL Reference Manual SQL Functions DROP_PARTITION ------------------Partition dropped (1 row) See Also l Dropping Partitions l ADVANCE_EPOCH l ALTER PROJECTION RENAME l COLUMN_STORAGE l CREATE TABLE l DO_TM_TASK l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l MERGE_PARTITIONS l PARTITION_PROJECTION l PARTITION_TABLE l PROJECTIONS DROP_STATISTICS Removes statistics for the specified table and lets you optionally specify the category of statistics to drop. Syntax DROP_STATISTICS { ('') | ('[[db-name.]schema-name.]table' [, {'BASE' | 'HISTOGRAMS' | 'ALL'} ])}; Return Value 0 - If successful, DROP_STATISTICS always returns 0. If the command fails, DROP_ STATISTICS displays an error message. See vertica.log for message details. HP Vertica Analytic Database (7.0.x) Page 614 of 1539 SQL Reference Manual SQL Functions Parameters '' Empty string. Drops statistics for all projections. [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table Drops statistics for all projections within the specified table. When using more than one schema, specify the schema that contains the table with the projections you want to delete, as noted in the syntax. CATEGORY Specifies the category of statistics to drop for the named [db-name.] schema-name.]table: l 'BASE' (default) drops histograms and row counts (min/max column values, histogram. l 'HISTOGRAMS' drops only the histograms. 
Row count statistics remain.
l 'ALL' drops all statistics.

Privileges

- INSERT/UPDATE/DELETE privilege on table
- USAGE privilege on schema that contains the table

Notes

Once dropped, statistics can be time-consuming to regenerate.

Examples

The following command analyzes all statistics on the VMart schema database:

=> SELECT ANALYZE_STATISTICS('');
 ANALYZE_STATISTICS
--------------------
 0
(1 row)

This command drops base statistics for table store_sales_fact in the store schema:

=> SELECT DROP_STATISTICS('store.store_sales_fact', 'BASE');
 DROP_STATISTICS
-----------------
 0
(1 row)

Note that this command works the same as the previous command:

=> SELECT DROP_STATISTICS('store.store_sales_fact');
 DROP_STATISTICS
-----------------
 0
(1 row)

This command drops statistics for all projections:

=> SELECT DROP_STATISTICS ('');
 DROP_STATISTICS
-----------------
 0
(1 row)

For use cases, see Collecting Statistics in the Administrator's Guide.

See Also

- ANALYZE_STATISTICS
- EXPORT_STATISTICS
- IMPORT_STATISTICS

DUMP_CATALOG

Returns an internal representation of the HP Vertica catalog. This function is used for diagnostic purposes.

Syntax

DUMP_CATALOG()

Privileges

None; however, the function dumps only the objects visible to the user.

Notes

To obtain an internal representation of the HP Vertica catalog for diagnosis, run the query:

=> SELECT DUMP_CATALOG();

To write the output to a file, redirect vsql output with \o:

\o /tmp/catalog.txt
SELECT DUMP_CATALOG();
\o

DUMP_LOCKTABLE

Returns information about deadlocked clients and the resources they are waiting for.

Syntax

DUMP_LOCKTABLE()

Privileges

None

Notes

Use DUMP_LOCKTABLE if HP Vertica becomes unresponsive:

1. Open an additional vsql connection.
2. Execute the query:

   => SELECT DUMP_LOCKTABLE();

   The output is written to vsql. See Monitoring the Log Files.

You can also see who is connected using the following command:

=> SELECT * FROM SESSIONS;

Close all sessions using the following command:

=> SELECT CLOSE_ALL_SESSIONS();

Close a single session using the following command:

=> SELECT CLOSE_SESSION('session_id');

You get the session_id value from the V_MONITOR.SESSIONS system table.

See Also

- CLOSE_ALL_SESSIONS
- CLOSE_SESSION
- LOCKS
- SESSIONS

DUMP_PARTITION_KEYS

Dumps the partition keys of all projections in the system.

Syntax

DUMP_PARTITION_KEYS( )

Note: The ROS objects of partitioned tables without partition keys are ignored by the Tuple Mover and are not merged during automatic Tuple Mover operations.

Privileges

None; however, the function dumps only the tables for which the user has SELECT privileges.
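The example that follows assumes partitioned projections whose data has already been written to ROS. A minimal setup sketch (the rows are illustrative, reusing the states table defined in the later examples in this section):

=> CREATE TABLE states (year INTEGER NOT NULL, state VARCHAR NOT NULL) PARTITION BY state;
=> INSERT INTO states VALUES (2011, 'NH');
=> INSERT INTO states VALUES (2011, 'MA');
=> COMMIT;
=> SELECT DO_TM_TASK('moveout', 'states');  -- partition keys exist only for ROS data, so move WOS data out first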
Example => SELECT DUMP_PARTITION_KEYS( ); Partition keys on node v_vmart_node0001 Projection 'states_b0' Storage [ROS container] No of partition keys: 1 Partition keys: NH Storage [ROS container] No of partition keys: 1 Partition keys: MA Projection 'states_b1' Storage [ROS container] No of partition keys: 1 Partition keys: VT Storage [ROS container] No of partition keys: 1 HP Vertica Analytic Database (7.0.x) Page 618 of 1539 SQL Reference Manual SQL Functions Partition keys: ME Storage [ROS container] No of partition keys: 1 Partition keys: CT See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l PARTITION_TABLE l PARTITIONS l Working with Table Partitions DUMP_PROJECTION_PARTITION_KEYS Dumps the partition keys of the specified projection. Syntax DUMP_PROJECTION_PARTITION_KEYS( 'projection_name' ) Parameters projection_name Specifies the name of the projection. Privileges l SELECT privilege on table l USAGE privileges on schema Example The following example creates a simple table called states and partitions the data by state: HP Vertica Analytic Database (7.0.x) Page 619 of 1539 SQL Reference Manual SQL Functions => CREATE TABLE states (year INTEGER NOT NULL, state VARCHAR NOT NULL) PARTITION BY state; => CREATE PROJECTION states_p (state, year) AS SELECT * FROM states ORDER BY state, year UNSEGMENTED ALL NODES; Now dump the partition key of the specified projection: => SELECT DUMP_PROJECTION_PARTITION_KEYS( 'states_p_node0001' ); Partition keys on node helios_node0001 Projection 'states_p_node0001' No of partition keys: 1 Partition keys on node helios_node0002 ... (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l PARTITION_TABLE l PROJECTIONS l Working with Table Partitions DUMP_TABLE_PARTITION_KEYS Dumps the partition keys of all projections anchored on the specified table. Syntax DUMP_TABLE_PARTITION_KEYS ( 'table_name' ) Parameters table_name Specifies the name of the table. HP Vertica Analytic Database (7.0.x) Page 620 of 1539 SQL Reference Manual SQL Functions Privilege l SELECT privilege on table l USAGE privileges on schema Examples The following example creates a simple table called states and partitions the data by state: => CREATE TABLE states (year INTEGER NOT NULL, state VARCHAR NOT NULL) PARTITION BY state; => CREATE PROJECTION states_p (state, year) AS SELECT * FROM states ORDER BY state, year UNSEGMENTED ALL NODES; Now dump the partition keys of all projections anchored on table states: => SELECT DUMP_TABLE_PARTITION_KEYS( 'states' ); Partition keys on helios_node0001 Projection 'states_p_node0004' No of partition keys: 1 Projection 'states_p_node0003' No of partition keys: 1 Projection 'states_p_node0002' No of partition keys: 1 Projection 'states_p_node0001' No of partition keys: 1 Partition keys on helios_node0002 ... (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l PARTITION_TABLE l Working with Table Partitions HP Vertica Analytic Database (7.0.x) Page 621 of 1539 SQL Reference Manual SQL Functions ENABLE_ELASTIC_CLUSTER Enables elastic cluster scaling, which makes enlarging or reducing the size of your database cluster more efficient by segmenting a node's data into chunks that can be easily moved to other hosts. Note: Databases created using HP Vertica Version 5.0 and later have elastic cluster enabled by default. 
You need to use this function on databases created before version 5.0 in order for them to use the elastic clustering feature.

Syntax

ENABLE_ELASTIC_CLUSTER()

Privileges

Must be a superuser.

Example

=> SELECT ENABLE_ELASTIC_CLUSTER();
 ENABLE_ELASTIC_CLUSTER
------------------------
 ENABLED
(1 row)

See Also

- DISABLE_ELASTIC_CLUSTER

ENABLE_LOCAL_SEGMENTS

Enables local storage segmentation, which breaks projection segments on nodes into containers that can be easily moved to other nodes. See Local Data Segmentation in the Administrator's Guide for more information.

Syntax

ENABLE_LOCAL_SEGMENTS()

Privileges

Must be a superuser.

Example

=> SELECT ENABLE_LOCAL_SEGMENTS();
 ENABLE_LOCAL_SEGMENTS
-----------------------
 ENABLED
(1 row)

ENABLE_PROFILING

Enables profiling for the profiling type you specify.

Note: HP Vertica stores profiled data in memory, so depending on how much data you collect, profiling could be memory intensive.

Syntax

ENABLE_PROFILING( 'profiling-type' )

Parameters

profiling-type: The type of profiling data you want to enable. Can be one of:

- session: enables profiling for basic session parameters and lock timeout data
- query: enables profiling for general information about queries that ran, such as the query strings used and the duration of queries
- ee: enables profiling for information about the execution run of each query

Example

The following statement enables profiling on query execution runs:

=> SELECT ENABLE_PROFILING('ee');
 ENABLE_PROFILING
----------------------
 EE Profiling Enabled
(1 row)

See Also

- CLEAR_PROFILING
- DISABLE_PROFILING
- Profiling Database Performance

EVALUATE_DELETE_PERFORMANCE

Evaluates projections for potential DELETE performance issues. If issues are found, a warning message is displayed. For steps you can take to resolve delete and update performance issues, see Optimizing Deletes and Updates for Performance in the Administrator's Guide.

This function uses data sampling to determine whether there are any issues with a projection. Therefore, it does not generate false-positive warnings, but it can miss some cases where there are performance issues.

Note: Optimizing for delete performance is the same as optimizing for update performance, so you can use this function to help optimize a projection for updates as well as deletes.

Syntax

EVALUATE_DELETE_PERFORMANCE ( 'target' )

Parameters

target: The name of a projection or table. If you supply the name of a projection, only that projection is evaluated for DELETE performance issues. If you supply the name of a table, all of the projections anchored to the table are evaluated for issues. If you do not provide a projection or table name, EVALUATE_DELETE_PERFORMANCE examines all of the projections that you can access for DELETE performance issues. Depending on the size of your database, this may take a long time.

Privileges

None

Notes

When evaluating multiple projections, EVALUATE_DELETE_PERFORMANCE reports up to ten projections that have issues, and refers you to a table that contains the full list of issues it has found.

Example

The following example demonstrates how you can use EVALUATE_DELETE_PERFORMANCE to evaluate your projections for slow DELETE performance.
=> create table example (A int, B int,C int); CREATE TABLE => create projection one_sort (A,B,C) as (select A,B,C from example) order by A; CREATE PROJECTION => create projection two_sort (A,B,C) as (select A,B,C from example) order by A,B; CREATE PROJECTION => select evaluate_delete_performance('one_sort'); evaluate_delete_performance --------------------------------------------------No projection delete performance concerns found. (1 row) => select evaluate_delete_performance('two_sort'); evaluate_delete_performance --------------------------------------------------No projection delete performance concerns found. (1 row) The previous example showed that there was no structural issue with the projection that would cause poor DELETE performance. However, the data contained within the projection can create potential delete issues if the sorted columns do not uniquely identify a row or small number of rows. In the following example, Perl is used to populate the table with data using a nested series of loops. The inner loop populates column C, the middle loop populates column B, and the outer loop populates column A. The result is column A contains only three distinct values (0, 1, and 2), while column B slowly varies between 20 and 0 and column C changes in each row. EVALUATE_ DELETE_PERFORMANCE is run against the projections again to see if the data within the projections causes any potential DELETE performance issues. => \! perl -e 'for ($i=0; $i<3; $i++) { for ($j=0; $j<21; $j++) { for ($k=0; $k<19; $k++) { printf "%d,%d,%d\n", $i,$j,$k;}}}' | /opt/vertica/bin/vsql -c "copy example from stdin delimiter ',' direct;" Password: => select * from example; A | B | C ---+----+---0 | 20 | 18 0 | 20 | 17 0 | 20 | 16 0 | 20 | 15 0 | 20 | 14 HP Vertica Analytic Database (7.0.x) Page 625 of 1539 SQL Reference Manual SQL Functions 0 | 20 | 13 0 | 20 | 12 0 | 20 | 11 0 | 20 | 10 0 | 20 | 9 0 | 20 | 8 0 | 20 | 7 0 | 20 | 6 0 | 20 | 5 0 | 20 | 4 0 | 20 | 3 0 | 20 | 2 0 | 20 | 1 0 | 20 | 0 0 | 19 | 18 1157 rows omitted 2 | 1 | 0 2 | 0 | 18 2 | 0 | 17 2 | 0 | 16 2 | 0 | 15 2 | 0 | 14 2 | 0 | 13 2 | 0 | 12 2 | 0 | 11 2 | 0 | 10 2 | 0 | 9 2 | 0 | 8 2 | 0 | 7 2 | 0 | 6 2 | 0 | 5 2 | 0 | 4 2 | 0 | 3 2 | 0 | 2 2 | 0 | 1 2 | 0 | 0 => SELECT COUNT (*) FROM example; COUNT ------1197 (1 row) => SELECT COUNT (DISTINCT A) FROM example; COUNT ------3 (1 row) => select evaluate_delete_performance('one_sort'); evaluate_delete_performance --------------------------------------------------Projection exhibits delete performance concerns. (1 row) release=> select evaluate_delete_performance('two_sort'); evaluate_delete_performance --------------------------------------------------No projection delete performance concerns found. (1 row) HP Vertica Analytic Database (7.0.x) Page 626 of 1539 SQL Reference Manual SQL Functions The one_sort projection has potential delete issues since it only sorts on column A which has few distinct values. This means that each value in the sort column corresponds to many rows in the projection, which negatively impacts DELETE performance. Since the two_sort projection is sorted on columns A and B, each combination of values in the two sort columns identifies just a few rows, allowing deletes to be performed faster. Not supplying a projection name results in all of the projections you can access being evaluated for DELETE performance issues. 
=> select evaluate_delete_performance(); evaluate_delete_performance --------------------------------------------------------------------------- The following projection exhibits delete performance concerns: "public"."one_sort" See v_catalog.projection_delete_concerns for more details. (1 row) EXPORT_CATALOG Generates a SQL script that you can use to recreate a physical schema design in its current state on a different cluster. This function always attempts to recreate projection statements with KSAFE clauses, if they exist in the original definitions, or OFFSET clauses if they do not. Syntax EXPORT_CATALOG ( [ 'destination' ] , [ 'scope' ] ) Parameters destination Specifies the path and name of the SQL output file. An empty string (''), which is the default, outputs the script to standard output. The function writes the script to the catalog directory if no destination is specified. If you specify a file that does not exist, the function creates one. If the file preexists, the function silently overwrites its contents. scope Determines what to export: l DESIGN—Exports schemas, tables, constraints, views, and projections to which the user has access. This is the default value. l DESIGN_ALL—Exports all the design objects plus system objects created in Database Designer (for example, design contexts and their tables). The objects that are exported are those to which the user has access. l TABLES—Exports all tables, constraints, and projections for which the user has permissions. See also EXPORT_TABLES. Privileges None. However: l EXPORT_CATALOG exports only the objects visible to the user. l Only a superuser can export output to a file. Example The following example exports the design to standard output: => SELECT EXPORT_CATALOG('','DESIGN'); See Also l EXPORT_OBJECTS l EXPORT_TABLES EXPORT_OBJECTS Generates a SQL script you can use to recreate catalog objects on a different cluster. The generated script includes only the non-virtual objects to which the user has access. The function exports catalog objects in dependency order for correct recreation. Running the generated SQL script on another cluster then creates all referenced objects before their dependent objects. The EXPORT_OBJECTS function always attempts to recreate projection statements with KSAFE clauses, if they existed in the original definitions, or OFFSET clauses, if they did not. None of the EXPORT_OBJECTS parameters accepts a NULL value as input. EXPORT_OBJECTS returns an error if an explicitly-specified object does not exist, or the user does not have access to the object. Syntax EXPORT_OBJECTS( [ 'destination' ] , [ 'scope' ] , [ 'ksafe' ] ) Parameters destination Specifies the path and name of the SQL output file. The default empty string ('') outputs the script to standard output. The function writes the script to the catalog directory if no destination is specified. If you specify a file that does not exist, the function creates one. If the file preexists, the function silently overwrites its contents. scope Determines the scope of the catalog objects to export: l An empty string (' ')—exports all non-virtual objects to which the user has access, including constraints. (Note that constraints are not objects that can be passed as individual arguments.) An empty string is the default scope value if you do not limit the export.
l A comma-delimited list of catalog objects to export, which can include the following: n ' [dbname.]schema.object '—matches each named schema object. You can optionally qualify the schema with a database prefix. A named schema object can be a table, projection, view, sequence, or user-defined SQL function. n ' [dbname.]schema '—matches the named schema, which you can optionally qualify with a database prefix. For a schema, HP Vertica exports all non-virtual objects that the user has access to within the schema. If a schema and table have the same name, the schema takes precedence. ksafe Specifies whether to incorporate a MARK_DESIGN_KSAFE statement with the correct K-safe value for the database: l true—adds the MARK_DESIGN_KSAFE statement to the end of the output script. This is the default value. l false—omits the MARK_DESIGN_KSAFE statement from the script. Privileges None. However: l EXPORT_OBJECTS exports only the objects visible to the user. l Only a superuser can export output to a file. Example The following example exports all the non-virtual objects to which the user has access to standard output. The example uses false for the last parameter, indicating that the output will not include the MARK_DESIGN_KSAFE statement at the end. => SELECT EXPORT_OBJECTS(' ',' ',false); See Also l EXPORT_CATALOG l EXPORT_TABLES l Exporting Objects EXPORT_STATISTICS Generates an XML file that contains statistics for the database. You can optionally export statistics on a single database object (table, projection, or table column). Before you export statistics for the database, run ANALYZE_STATISTICS() to automatically collect the most up-to-date statistics information. Note: Use the second argument only if you want to export statistics for a single database object rather than for the entire database. Syntax EXPORT_STATISTICS ( 'destination' [ , '[[db-name.]schema.]table[.column-name]' ] ) Parameters destination Specifies the path and name of the XML output file. An empty string returns the script to the screen. [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table Specifies the name of the table and exports statistics for all projections of that table. Note: If you are using more than one schema, specify the schema that contains the projection, as noted in the [[db-name.]schema.] entry. [.column-name] [Optional] Specifies the name of a single column, typically a predicate column. Using this option with a table specification lets you export statistics for only that column. Privileges Must be a superuser.
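Because the exported file only reflects statistics that have already been collected, a minimal sketch of the recommended sequence is to refresh statistics and then export them (the output path below is hypothetical):
=> SELECT ANALYZE_STATISTICS('');
=> SELECT EXPORT_STATISTICS('/tmp/db_stats.xml');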
Examples The following command exports statistics on the VMart example database to a file: vmart=> SELECT EXPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/vmart_stats.xml'); EXPORT_STATISTICS ----------------------------------Statistics exported successfully (1 row) The next statement exports statistics on a single column (price) from a table called food: => SELECT EXPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/price.xml', 'food.pric e'); HP Vertica Analytic Database (7.0.x) Page 631 of 1539 SQL Reference Manual SQL Functions See Also l ANALYZE_STATISTICS l DROP_STATISTICS l IMPORT_STATISTICS l Collecting Database Statistics EXPORT_TABLES Generates a SQL script that can be used to recreate a logical schema (schemas, tables, constraints, and views) on a different cluster. Syntax EXPORT_TABLES ( [ 'destination' ] , [ 'scope' ] ) Parameters destination Specifies the path and name of the SQL output file. An empty string (''), which is the default, outputs the script to standard output. The function writes the script to the catalog directory if no destination is specified. If you specify a file that does not exist, the function creates one. If the file preexists, the function silently overwrites its contents. scope Determines the tables to export. Specify the scope as follows: l An empty string (' ')—exports all non-virtual table objects to which the user has access, including table schemas, sequences, and constraints. Exporting all non-virtual objects is the default scope, and what the function exports if you do not specify a scope. l A comma-delimited list of objects, which can include the following: n ' [dbname.][schema.]object '—matches the named objects, which can be schemas, tables, or views, in the schema. You can optionally qualify a schema with a database prefix, and objects with a schema. You cannot pass constraints as individual arguments. n ' [dbname.]object '—matches a named object, which can be a schema, table, or view. You can optionally qualify a schema with a database prefix, and an object with its schema. For a schema, HP Vertica exports all non-virtual objects to which the user has access within the schema. If a schema and table both have the same name, the schema takes precedence. HP Vertica Analytic Database (7.0.x) Page 632 of 1539 SQL Reference Manual SQL Functions Privileges None; however: l Function exports only the objects visible to the user l Only a superuser can export output to file Example The following example exports the store_orders_fact table of the store schema (in the current database) to standard output: => SELECT EXPORT_TABLES(' ','store.store_orders_fact'); EXPORT_TABLES returns an error if: l You explicitly specify an object that does not exist l The current user does not have access to a specified object See Also EXPORT_CATALOG l EXPORT_OBJECTS l l FLUSH_DATA_COLLECTOR Waits until memory logs are moved to disk and then flushes the Data Collector, synchronizing the log with the disk storage. A superuser can flush Data Collector information for an individual component or for all components. Syntax FLUSH_DATA_COLLECTOR( [ 'component' ] ) HP Vertica Analytic Database (7.0.x) Page 633 of 1539 SQL Reference Manual SQL Functions Parameters component Flushes the specified component. If you provide no argument, the function flushes the Data Collector in full. For the current list of component names, query the V_MONITOR.DATA_ COLLECTOR system table. Privileges Must be a superuser. 
Examples The following command flushes the Data Collector for the ResourceAcquisitions component: => SELECT flush_data_collector('ResourceAcquisitions'); flush_data_collector ---------------------FLUSH (1 row) The following command flushes data collection for all components: => SELECT flush_data_collector(); flush_data_collector ---------------------FLUSH (1 row) See Also DATA_COLLECTOR l l GET_AHM_EPOCH Returns the number of the epoch in which the Ancient History Mark is located. Data deleted up to and including the AHM epoch can be purged from physical storage. Syntax GET_AHM_EPOCH() HP Vertica Analytic Database (7.0.x) Page 634 of 1539 SQL Reference Manual SQL Functions Note: The AHM epoch is 0 (zero) by default (purge is disabled). Privileges None Examples SELECT GET_AHM_EPOCH(); GET_AHM_EPOCH ---------------------Current AHM epoch: 0 (1 row) GET_AHM_TIME Returns a TIMESTAMP value representing the Ancient History Mark. Data deleted up to and including the AHM epoch can be purged from physical storage. Syntax GET_AHM_TIME() Privileges None Examples SELECT GET_AHM_TIME(); GET_AHM_TIME ------------------------------------------------Current AHM Time: 2010-05-13 12:48:10.532332-04 (1 row) See Also l SET DATESTYLE l TIMESTAMP HP Vertica Analytic Database (7.0.x) Page 635 of 1539 SQL Reference Manual SQL Functions GET_AUDIT_TIME Reports the time when the automatic audit of database size occurs. HP Vertica performs this audit if your HP Vertica license includes a data size allowance. For details of this audit, see Managing Your License Key in the Administrator's Guide. To change the time the audit runs, use the SET_ AUDIT_TIME function. Syntax GET_AUDIT_TIME() Privileges None Example => SELECT get_audit_time(); get_audit_time ----------------------------------------------------The audit is scheduled to run at 11:59 PM each day. (1 row) GET_COMPLIANCE_STATUS Displays whether your database is in compliance with your HP Vertica license agreement. This information includes the results of HP Vertica's most recent audit of the database size (if your license has a data allowance as part of its terms), and the license term (if your license has an end date). The information displayed by GET_COMPLIANCE_STATUS includes: l The estimated size of the database (see How HP Vertica Calculates Database Size in the Administrator's Guide for an explanation of the size estimate). l The raw data size allowed by your HP Vertica license. l The percentage of your allowance that your database is currently using. l The date and time of the last audit. l Whether your database complies with the data allowance terms of your license agreement. l The end date of your license. l How many days remain until your license expires. HP Vertica Analytic Database (7.0.x) Page 636 of 1539 SQL Reference Manual SQL Functions Note: If your license does not have a data allowance or end date, some of the values may not appear in the output for GET_COMPLIANCE_STATUS. If the audit shows your license is not in compliance with your data allowance, you should either delete data to bring the size of the database under the licensed amount, or upgrade your license. If your license term has expired, you should contact HP immediately to renew your license. See Managing Your License Key in the Administrator's Guide for further details. 
Syntax GET_COMPLIANCE_STATUS() Privileges None Example GET_COMPLIANCE_STATUS --------------------------------------------------------------------------------Raw Data Size: 2.00GB +/- 0.003GB License Size : 4.000GB Utilization : 50% Audit Time : 2011-03-09 09:54:09.538704+00 Compliance Status : The database is in compliance with respect to raw data size. License End Date: 04/06/2011 Days Remaining: 28.59 (1 row) GET_CURRENT_EPOCH The epoch into which data (COPY, INSERT, UPDATE, and DELETE operations) is currently being written. The current epoch advances automatically every three minutes. Returns the number of the current epoch. Syntax GET_CURRENT_EPOCH() Privileges None HP Vertica Analytic Database (7.0.x) Page 637 of 1539 SQL Reference Manual SQL Functions Examples SELECT GET_CURRENT_EPOCH(); GET_CURRENT_EPOCH ------------------683 (1 row) GET_DATA_COLLECTOR_POLICY Retrieves a brief statement about the retention policy for the specified component. Syntax GET_DATA_COLLECTOR_POLICY( 'component' ) Parameters component Returns the retention policy for the specified component. For a current list of component names, query the V_MONITOR.DATA_ COLLECTOR system table Privileges None Example The following query returns the history of all resource acquisitions by specifying the ResourceAcquisitions component: => SELECT get_data_collector_policy('ResourceAcquisitions'); get_data_collector_policy ---------------------------------------------1000KB kept in memory, 10000KB kept on disk. (1 row) See Also DATA_COLLECTOR l l HP Vertica Analytic Database (7.0.x) Page 638 of 1539 SQL Reference Manual SQL Functions GET_LAST_GOOD_EPOCH A term used in manual recovery, LGE (Last Good Epoch) refers to the most recent epoch that can be recovered. Returns the number of the last good epoch. Syntax GET_LAST_GOOD_EPOCH() Privileges None Examples SELECT GET_LAST_GOOD_EPOCH(); GET_LAST_GOOD_EPOCH --------------------682 (1 row) GET_NUM_ACCEPTED_ROWS Returns the number of rows loaded into the database for the last completed load for the current session. GET_NUM_ACCEPTED_ROWS is a meta-function. Do not use it as a value in an INSERT query. The number of accepted rows is not available for a load that is currently in process. Check the LOAD_STREAMS system table for its status. Also, this meta-function supports only loads from STDIN or a single file on the initiator. You cannot use GET_NUM_ACCEPTED_ROWS for multi-node loads. Syntax GET_NUM_ACCEPTED_ROWS(); Privileges None Note: The data regarding accepted rows from the last load during the current session does not HP Vertica Analytic Database (7.0.x) Page 639 of 1539 SQL Reference Manual SQL Functions persist, and is lost when you initiate a new load. See Also l GET_NUM_REJECTED_ROWS GET_NUM_REJECTED_ROWS Returns the number of rows that were rejected during the last completed load for the current session. GET_NUM_REJECTED_ROWS is a meta-function. Do not use it as a value in an INSERT query. Rejected row information is unavailable for a load that is currently running. The number of rejected rows is not available for a load that is currently in process. Check the LOAD_STREAMS system table for its status. Also, this meta-function supports only loads from STDIN or a single file on the initiator. You cannot use GET_NUM_REJECTED_ROWS for multi-node loads. Syntax GET_NUM_REJECTED_ROWS(); Privileges None Note: The data regarding rejected rows from the last load during the current session does not persist, and is dropped when you initiate a new load. 
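Example The following is a minimal sketch (the table name, load file, and row counts are hypothetical) showing how a session can check rejected and accepted counts after a single-file load completes:
=> COPY sample_table FROM '/tmp/sample_data.csv' DELIMITER ',';
 Rows Loaded
-------------
          98
(1 row)
=> SELECT GET_NUM_REJECTED_ROWS();
 GET_NUM_REJECTED_ROWS
------------------------
                      2
(1 row)
=> SELECT GET_NUM_ACCEPTED_ROWS();
 GET_NUM_ACCEPTED_ROWS
------------------------
                     98
(1 row)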
See Also l GET_NUM_ACCEPTED_ROWS GET_PROJECTION_STATUS Returns information relevant to the status of a projection. Syntax GET_PROJECTION_STATUS ( '[[db-name.]schema-name.]projection' ); HP Vertica Analytic Database (7.0.x) Page 640 of 1539 SQL Reference Manual SQL Functions Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection Is the name of the projection for which to display status. When using more than one schema, specify the schema that contains the projection, as noted above. Privileges None Description GET_PROJECTION_STATUS returns information relevant to the status of a projection: l The current K-safety status of the database l The number of nodes in the database l Whether the projection is segmented l The number and names of buddy projections l Whether the projection is safe l Whether the projection is up-to-date l Whether statistics have been computed for the projection Notes l You can use GET_PROJECTION_STATUS to monitor the progress of a projection data refresh. See ALTER PROJECTION. l To view a list of the nodes in a database, use the View Database Command in the Administration Tools. HP Vertica Analytic Database (7.0.x) Page 641 of 1539 SQL Reference Manual SQL Functions Examples => SELECT GET_PROJECTION_STATUS('public.customer_dimension_site01'); GET_PROJECTION_STATUS ---------------------------------------------------------------------------------------------Current system K is 1. # of Nodes: 4. public.customer_dimension_site01 [Segmented: No] [Seg Cols: ] [K: 3] [public.customer_dim ension_site04, public.customer_dimension_site03, public.customer_dimension_site02] [Safe: Yes] [UptoDate: Yes][Stats: Yes] See Also l ALTER PROJECTION RENAME l GET_PROJECTIONS, GET_TABLE_PROJECTIONS GET_PROJECTIONS, GET_TABLE_PROJECTIONS Note: This function was formerly named GET_TABLE_PROJECTIONS(). HP Vertica still supports the former function name. Returns information relevant to the status of a table: l The current K-safety status of the database l The number of sites (nodes) in the database l The number of projections for which the specified table is the anchor table l For each projection: n The projection's buddy projections n Whether the projection is segmented n Whether the projection is safe n Whether the projection is up-to-date Syntax GET_PROJECTIONS ( '[[db-name.]schema-name.]table' ) HP Vertica Analytic Database (7.0.x) Page 642 of 1539 SQL Reference Manual SQL Functions Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table Is the name of the table for which to list projections. When using more than one schema, specify the schema that contains the table. 
Privileges None Notes l You can use GET_PROJECTIONS to monitor the progress of a projection data refresh. See ALTER PROJECTION. l To view a list of the nodes in a database, use the View Database Command in the Administration Tools. Examples The following example gets information about the store_dimension table in the VMart schema: => SELECT GET_PROJECTIONS('store.store_dimension'); -------------------------------------------------------------------------------------Current system K is 1. # of Nodes: 4. Table store.store_dimension has 4 projections. Projection Name: [Segmented] [Seg Cols] [# of Buddies] [Buddy Projections] [Safe] [UptoDa te] ---------------------------------------------------------store.store_dimension_node0004 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0003, store.store_dimension_node0002, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes][Stats: Yes] store.store_dimension_node0003 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0004, store.store_dimension_node0002, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes][Stats: Yes] store.store_dimension_node0002 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0004, store.store_dimension_node0003, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes][Stats: Yes] HP Vertica Analytic Database (7.0.x) Page 643 of 1539 SQL Reference Manual SQL Functions store.store_dimension_node0001 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0004, store.store_dimension_node0003, store.store_dimension_node0002] [Safe: Yes] [UptoDate: Yes][Stats: Yes] (1 row) See Also l ALTER PROJECTION RENAME l GET_PROJECTION_STATUS HAS_ROLE Indicates, with a Boolean value, whether a role has been assigned to a user. This function is useful for letting you check your own role membership. Behavior Type Stable Syntax 1 HAS_ROLE( [ 'user_name' ,] 'role_name' ); Syntax 2 HAS_ROLE( 'role_name' ); Parameters user_name [Optional] The name of a user to look up. Currently, only a superuser can supply the user_name argument. role_name The name of the role you want to verify has been granted. Privileges Users can check their own role membership by calling HAS_ROLE('role_name'), but only a superuser can look up other users' memberships using the optional user_name parameter. HP Vertica Analytic Database (7.0.x) Page 644 of 1539 SQL Reference Manual SQL Functions Notes You can query V_CATALOG system tables ROLES, GRANTS, and USERS to show any directlyassigned roles; however, these tables do not indicate whether a role is available to a user when roles may be available through other roles (indirectly). Examples User Bob wants to see if he has been granted the commentor role: => SELECT HAS_ROLE('commentor'); Output t for true indicates that Bob has been assigned the commentor role: HAS_ROLE ---------t (1 row) In the following function call, a superuser checks if the logadmin role has been granted to user Bob: => SELECT HAS_ROLE('Bob', 'logadmin'); HAS_ROLE ---------t (1 row) To view the names of all roles users can access, along with any roles that have been assigned to those roles, query the V_CATALOG.ROLES system table. An asterisk in the output means role granted WITH ADMIN OPTION. 
=> SELECT * FROM roles; role_id | name | assigned_roles -------------------+-----------------+---------------45035996273704964 | public | 45035996273704966 | dbduser | 45035996273704968 | dbadmin | dbduser* 45035996273704972 | pseudosuperuser | dbadmin* 45035996273704974 | logreader | 45035996273704976 | logwriter | 45035996273704978 | logadmin | logreader, logwriter (7 rows) HP Vertica Analytic Database (7.0.x) Page 645 of 1539 SQL Reference Manual SQL Functions See Also l GRANTS l ROLES l USERS l Managing Users and Privileges l Viewing a user's Role IMPORT_STATISTICS Imports statistics from the XML file generated by the EXPORT_STATISTICS command. Syntax IMPORT_STATISTICS ( 'destination' ) Parameters destination Specifies the path and name of the XML input file (which is the output of EXPORT_ STATISTICS function). Privileges Must be a superuser. Notes l Imported statistics override existing statistics for all projections on the specified table. l For use cases, see Collecting Statistics in the Administrator's Guide Example Import the statistics for the VMart database that EXPORT_STATISTICS saved. -> SELECT IMPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/vmart_stats.xml'); IMPORT_STATISTICS HP Vertica Analytic Database (7.0.x) Page 646 of 1539 SQL Reference Manual SQL Functions ---------------------------------------------------------------------------Importing statistics for projection date_dimension_super column date_key failure (stats d id not contain row counts) Importing statistics for projection date_dimension_super column date failure (stats did n ot contain row counts) Importing statistics for projection date_dimension_super column full_date_description fai lure (stats did not contain row counts) ... (1 row) VMart=> See Also l ANALYZE_STATISTICS l DROP_STATISTICS l EXPORT_STATISTICS INTERRUPT_STATEMENT Interrupts the specified statement (within an external session), rolls back the current transaction, and writes a success or failure message to the log file. Syntax INTERRUPT_STATEMENT( 'session_id ', statement_id ) Parameters session_id Specifies the session to interrupt. This identifier is unique within the cluster at any point in time. statement_id Specifies the statement to interrupt Privileges Must be a superuser. HP Vertica Analytic Database (7.0.x) Page 647 of 1539 SQL Reference Manual SQL Functions Notes l Only statements run by external sessions can be interrupted. l Sessions can be interrupted during statement execution. l If the statement_id is valid, the statement is interruptible. The command is successfully sent and returns a success message. Otherwise the system returns an error. Messages The following list describes messages you might encounter: Message Meaning Statement interrupt sent. Check SESSIONS for progress. This message indicates success. Session could not be successfully interrupted: session not found. The session ID argument to the interrupt command does not match a running session. Session could not be successfully interrupted: statement not found. The statement ID does not match (or no longer matches) the ID of a running statement (if any). No interruptible statement running The statement is DDL or otherwise non-interruptible. Internal (system) sessions cannot be interrupted. The session is internal, and only statements run by external sessions can be interrupted. Examples Two user sessions are open. 
RECORD 1 shows user session running SELECT FROM SESSION, and RECORD 2 shows user session running COPY DIRECT: HP Vertica Analytic Database (7.0.x) Page 648 of 1539 SQL Reference Manual SQL Functions => SELECT * FROM SESSIONS; -[ RECORD 1 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0003 user_name | dbadmin client_hostname | 127.0.0.1:56367 client_pid | 1191 login_timestamp | 2011-01-03 15:31:44.939302-05 session_id | stress06-25663:0xbec client_label | transaction_start | 2011-01-03 15:34:51.05939 transaction_id | 54043195528458775 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:35:46.436748 statement_id | 5 last_statement_duration_us | 1591403 current_statement | COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT; ssl_state | None authentication_method | Trust Interrupt the COPY DIRECT statement running in stress06-25663:0xbec: => \xExpanded display is off. => SELECT INTERRUPT_STATEMENT('stress06-25663:0x1537', 5); interrupt_statement -----------------------------------------------------------------Statement interrupt sent. Check v_monitor.sessions for progress. (1 row) Verify that the interrupted statement is no longer active by looking at the current_statement column in the SESSIONS system table. 
This column becomes blank when the statement has been interrupted: => SELECT * FROM SESSIONS; -[ RECORD 1 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin HP Vertica Analytic Database (7.0.x) Page 649 of 1539 SQL Reference Manual SQL Functions client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0003 user_name | dbadmin client_hostname | 127.0.0.1:56367 client_pid | 1191 login_timestamp | 2011-01-03 15:31:44.939302-05 session_id | stress06-25663:0xbec client_label | transaction_start | 2011-01-03 15:34:51.05939 transaction_id | 54043195528458775 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:35:46.436748 statement_id | 5 last_statement_duration_us | 1591403 current_statement | ssl_state | None authentication_method | Trust See Also l SESSIONS l Managing Sessions l Configuration Parameters INSTALL_LICENSE Installs the license key in the global catalog. Syntax INSTALL_LICENSE( 'filename' ) Parameters filename specifies the absolute pathname of a valid license file. HP Vertica Analytic Database (7.0.x) Page 650 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Notes For more information about license keys, see Managing Your License Key in the Administrator's Guide. Examples => SELECT INSTALL_LICENSE('/tmp/vlicense.dat'); LAST_INSERT_ID Returns the last value of a column whose value is automatically incremented through the AUTO_ INCREMENT or IDENTITY Column-Constraint. If multiple sessions concurrently load the same table, the returned value is the last value generated for an AUTO_INCREMENT column by an insert in that session. Behavior Type Volatile Syntax LAST_INSERT_ID() Privileges l Table owner l USAGE privileges on schema Notes l This function works only with AUTO_INCREMENT and IDENTITY columns. See columnconstraints for the CREATE TABLE statement. l LAST_INSERT_ID does not work with sequence generators created through the CREATE SEQUENCE statement. HP Vertica Analytic Database (7.0.x) Page 651 of 1539 SQL Reference Manual SQL Functions Examples Create a sample table called customer4. => CREATE TABLE customer4( ID IDENTITY(2,2), lname VARCHAR(25), fname VARCHAR(25), membership_card INTEGER ); => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Gupta', 'Saleem', 475987); Notice that the IDENTITY column has a seed of 2, which specifies the value for the first row loaded into the table, and an increment of 2, which specifies the value that is added to the IDENTITY value of the previous row. 
Query the table you just created: => SELECT * FROM customer4; ID | lname | fname | membership_card ----+-------+--------+----------------2 | Gupta | Saleem | 475987 (1 row) Insert some additional values: => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Lee', 'Chen', 598742); Call the LAST_INSERT_ID function: => SELECT LAST_INSERT_ID(); LAST_INSERT_ID ---------------4 (1 row) Query the table again: => SELECT * FROM customer4; ID | lname | fname | membership_card ----+-------+--------+----------------2 | Gupta | Saleem | 475987 4 | Lee | Chen | 598742 (2 rows) Add another row: HP Vertica Analytic Database (7.0.x) Page 652 of 1539 SQL Reference Manual SQL Functions => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Davis', 'Bill', 469543); Call the LAST_INSERT_ID function: => SELECT LAST_INSERT_ID(); LAST_INSERT_ID ---------------6 (1 row) Query the table again: => SELECT * FROM customer4; ID | lname | fname ----+-------+--------+----------------2 | Gupta | Saleem | 475987 4 | Lee | Chen | 598742 6 | Davis | Bill | 469543 (3 rows) | membership_card See Also l ALTER SEQUENCE l CREATE SEQUENCE l DROP SEQUENCE l SEQUENCES l Using Named Sequences l Sequence Privileges MAKE_AHM_NOW Sets the Ancient History Mark (AHM) to the greatest allowable value, and lets you drop any projections that existed before the issue occurred. Caution: This function is intended for use by Administrators only. Syntax MAKE_AHM_NOW ( [ true ] ) HP Vertica Analytic Database (7.0.x) Page 653 of 1539 SQL Reference Manual SQL Functions Parameters true [Optional] Allows AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch. Use with care. Privileges Must be a superuser. Notes l l The MAKE_AHM_NOW function performs the following operations: n Advances the epoch. n Performs a moveout operation on all projections. n Sets the AHM to LGE — at least to the current epoch at the time MAKE_AHM_NOW() was issued. All history is lost and you cannot perform historical queries prior to the current epoch. Example => SELECT MAKE_AHM_NOW(); MAKE_AHM_NOW -----------------------------AHM set (New AHM Epoch: 683) (1 row) The following command allows the AHM to advance, even though node 2 is down: => SELECT WARNING: WARNING: WARNING: MAKE_AHM_NOW(true); Received no response from v_vmartdb_node0002 in get cluster LGE Received no response from v_vmartdb_node0002 in get cluster LGE Received no response from v_vmartdb_node0002 in set AHM MAKE_AHM_NOW -----------------------------AHM set (New AHM Epoch: 684) (1 row) HP Vertica Analytic Database (7.0.x) Page 654 of 1539 SQL Reference Manual SQL Functions See Also l DROP PROJECTION l MARK_DESIGN_KSAFE l SET_AHM_EPOCH l SET_AHM_TIME MARK_DESIGN_KSAFE Enables or disables high availability in your environment, in case of a failure. Before enabling recovery, MARK_DESIGN_KSAFE queries the catalog to determine whether a cluster's physical schema design meets the following requirements: l Small, unsegmented tables are replicated on all nodes. l Large table superprojections are segmented with each segment on a different node. l Each large table projection has at least one buddy projection for K-safety=1 (or two buddy projections for K-safety=2). Buddy projections are also segmented across database nodes, but the distribution is modified so that segments that contain the same data are distributed to different nodes. See High Availability Through Projections in the Concepts Guide. 
Note: Projections are considered to be buddies if they contain the same columns and have the same segmentation. They can have different sort orders. MARK_DESIGN_KSAFE does not change the physical schema in any way. Syntax MARK_DESIGN_KSAFE ( k ) Parameters k 2 enables high availability if the schema design meets requirements for K-safety=2 1 enables high availability if the schema design meets requirements for K-safety=1 0 disables high availability If you specify a k value of one (1) or two (2), HP Vertica returns one of the following messages. Success: HP Vertica Analytic Database (7.0.x) Page 655 of 1539 SQL Reference Manual SQL Functions Marked design n-safe Failure: The schema does not meet requirements for K=n. Fact table projection projection-name has insufficient "buddy" projections. n in the message is 1 or 2 and represents the k value. Privileges Must be a superuser. Notes l The database's internal recovery state persists across database restarts but it is not checked at startup time. l If a database has automatic recovery enabled, you must temporarily disable automatic recovery before creating a new table. l When one node fails on a system marked K-safe=1, the remaining nodes are available for DML operations. Examples => SELECT MARK_DESIGN_KSAFE(1); mark_design_ksafe ---------------------Marked design 1-safe (1 row) If the physical schema design is not K-Safe, messages indicate which projections do not have a buddy: => SELECT MARK_DESIGN_KSAFE(1); The given K value is not correct; the schema is 0-safe Projection pp1 has 0 buddies, which is smaller that the given K of 1 Projection pp2 has 0 buddies, which is smaller that the given K of 1 . . . (1 row) HP Vertica Analytic Database (7.0.x) Page 656 of 1539 SQL Reference Manual SQL Functions See Also l SYSTEM l High Availability and Recovery l HP Vertica System Tables l Avoiding Resegmentation During Joins l Failure Recovery MATERIALIZE_FLEXTABLE_COLUMNS Materializes virtual columns that are listed as key_names in the flextable_keys table. You can optionally indicate the number of columns to materialize, and use a keys table other than the default. If you do not specify the number of columns, the function materializes up to 50 virtual column key names. Calling this function requires that you first compute flex table keys using either COMPUTE_FLEXTABLE_KEYS or COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW . Note: Materializing any virtual column into a real column with this function affects data storage limits. Each materialized column counts against the data storage limit of your HP Vertica Enterprise Edition (EE) license. This increase is reflected when HP Vertica next performs a license compliance audit. To manually check your EE license compliance, call the audit() function, described in the SQL Reference Manual. Usage materialize_flextable_columns('flex_table' [, n-columns [, keys_table_name] ]) Arguments flex_table The name of the flex table with columns to materialize. Specifying only the flex table name attempts to materialize up to 50 columns of key names in the default flex_table_keys table, skipping any columns already materialized. To materialize a specific number of columns, use the optional parameter n_columns, described next. HP Vertica Analytic Database (7.0.x) Page 657 of 1539 SQL Reference Manual SQL Functions n-columns [Optional ] The number of columns to materialize. The function attempts to materialize the number of columns from the flex_table_ keys table, skipping any columns already materialized. 
HP VERTICA tables support a total of 1600 columns, which is the greatest value you can specify for n-columns. The function orders the materialized results by frequency, descending, key_namewhen materializing the first n columns. keys_table_name [Optional] The name of a flex_keys_table from which to materialize columns. The function attempts to materialize the number of columns (value of n-columns) from keys_table_name, skipping any columns already materialized. The function orders the materialized results by frequency, descending, key_namewhen materializing the first n columns. Examples The following example loads a sample file of tweets (tweets_10000.json) into the flex table twitter_r. After loading data and computing keys for the sample flex table, the example calls materialize_ flextable_columns to materialize the first four columns: dbt=> copy twitter_r from '/home/release/KData/tweets_10000.json' parser fjsonparser(); Rows Loaded ------------10000 (1 row) dbt=> select compute_flextable_keys ('twitter_r'); compute_flextable_keys --------------------------------------------------Please see public.twitter_r_keys for updated keys (1 row) dbt=> select materialize_flextable_columns('twitter_r', 4); materialize_flextable_columns ------------------------------------------------------------------------------The following columns were added to the table public.twitter_r: contributors entities.hashtags entities.urls For more details, run the following query: SELECT * FROM v_catalog.materialize_flextable_columns_results WHERE table_schema = 'publi c' and table_name = 'twitter_r'; (1 row) The last message in the example recommends querying the materialize_flextable_columns_ results system table for the results of materializing the columns. Following is an example of running that query: HP Vertica Analytic Database (7.0.x) Page 658 of 1539 SQL Reference Manual SQL Functions dbt=> SELECT * FROM v_catalog.materialize_flextable_columns_results WHERE table_schema = 'public' and table_name = 'twitter_r'; table_id | table_schema | table_name | creation_time | key_name | status | message -------------------+--------------+------------+------------------------------+-------------------+--------+-------------------------------------------------------45035996273733172 | public | twitter_r | 2013-11-20 17:00:27.945484-05 | contributors | ADDED | Added successfully 45035996273733172 | public | twitter_r | 2013-11-20 17:00:27.94551-05 | entities.hashtags | ADDED | Added successfully 45035996273733172 | public | entities.urls | ADDED | twitter_r | 2013-11-20 17:00:27.945519-05 | Added successfully 45035996273733172 | public | twitter_r | 2013-11-20 17:00:27.945532-05 | created_at | EXISTS | Column of same name already exists in table definition (4 rows) See the MATERIALIZE_FLEXTABLE_COLUMNS_RESULTS system table in the SQL Reference Manual. See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW MEASURE_LOCATION_PERFORMANCE Measures disk performance for the location specified. Syntax MEASURE_LOCATION_PERFORMANCE ( 'path' , 'node' ) Parameters path Specifies where the storage location to measure is mounted. node Is the HP Vertica node where the location to be measured is available. Privileges Must be a superuser. 
HP Vertica Analytic Database (7.0.x) Page 659 of 1539 SQL Reference Manual SQL Functions Notes l To get a list of all node names on your cluster, query the V_MONITOR.DISK_STORAGE system table: => SELECT node_name from DISK_STORAGE; node_name ------------------v_vmartdb_node0004 v_vmartdb_node0004 v_vmartdb_node0005 v_vmartdb_node0005 v_vmartdb_node0006 v_vmartdb_node0006 (6 rows) l If you intend to create a tiered disk architecture in which projections, columns, and partitions are stored on different disks based on predicted or measured access patterns, you need to measure storage location performance for each location in which data is stored. You do not need to measure storage location performance for temp data storage locations because temporary files are stored based on available space. l The method of measuring storage location performance applies only to configured clusters. If you want to measure a disk before configuring a cluster see Measuring Storage Performance. l Storage location performance equates to the amount of time it takes to read and write 1MB of data from the disk. This time equates to: IO time = Time to read/write 1MB + Time to seek = 1/Throughput + 1/Latency Throughput is the average throughput of sequential reads/writes (units in MB per second) Latency is for random reads only in seeks (units in seeks per second) Note: The IO time of a faster storage location is less than a slower storage location. Example The following example measures the performance of a storage location on v_vmartdb_node0004: => SELECT MEASURE_LOCATION_PERFORMANCE('/secondVerticaStorageLocation/' , 'v_vmartdb_node 0004'); WARNING: measure_location_performance can take a long time. Please check logs for progre ss measure_location_performance -------------------------------------------------Throughput : 122 MB/sec. Latency : 140 seeks/sec HP Vertica Analytic Database (7.0.x) Page 660 of 1539 SQL Reference Manual SQL Functions See Also l ADD_LOCATION l ALTER_LOCATION_USE l RESTORE_LOCATION l RETIRE_LOCATION l Measuring Storage Performance MERGE_PARTITIONS Merges ROS containers that have data belonging to partitions in a specified partition key range: partitionKeyFromto partitionKeyTo. Note: This function is deprecated in HP Vertica 7.0. Syntax MERGE_PARTITIONS ( table_name , partition_key_from , partition_key_to ) Parameters table_name Specifies the name of the table partition_key_from Specifies the start point of the partition partition_key_to Specifies the end point of the partition Privileges l Table owner l USAGE privilege on schema that contains the table Notes l You cannot run MERGE_PARTITIONS() on a table with data that is not reorganized. You must reorganize the data first using ALTER_TABLE table REORGANIZE, or PARTITION_TABLE(table). HP Vertica Analytic Database (7.0.x) Page 661 of 1539 SQL Reference Manual SQL Functions l The edge values are included in the range, and partition_key_from must be less than or equal to partition_key_to. l Inclusion of partitions in the range is based on the application of less than (<)/greater than (>) operators of the corresponding data type. Note: No restrictions are placed on a partition key's data type. l If partition_key_from is the same as partition_key_to, all ROS containers of the partition key are merged into one ROS. 
Examples
=> SELECT MERGE_PARTITIONS('T1', '200', '400');
=> SELECT MERGE_PARTITIONS('T1', '800', '800');
=> SELECT MERGE_PARTITIONS('T1', 'CA', 'MA');
=> SELECT MERGE_PARTITIONS('T1', 'false', 'true');
=> SELECT MERGE_PARTITIONS('T1', '06/06/2008', '06/07/2008');
=> SELECT MERGE_PARTITIONS('T1', '02:01:10', '04:20:40');
=> SELECT MERGE_PARTITIONS('T1', '06/06/2008 02:01:10', '06/07/2008 02:01:10');
=> SELECT MERGE_PARTITIONS('T1', '8 hours', '1 day 4 hours 20 seconds');
MOVE_PARTITIONS_TO_TABLE Moves partitions from a source table to a target table. The target table must have the same projection column definitions, segmentation, and partition expressions as the source table. If the target table does not exist, the function creates a new table based on the source definition. The function requires both minimum and maximum range values, indicating which partition values to move. Syntax MOVE_PARTITIONS_TO_TABLE ( '[[db-name.]schema.]source_table', 'min_range_value', 'max_range_value', '[[db-name.]schema.]target_table' ) Parameters [[db-name.]schema.]source_table The source table (optionally qualified), from which you want to move partitions. min_range_value The minimum value in the partition to move. max_range_value The maximum value of the partition being moved. target_table The table to which the partitions are being moved. Privileges l Table owner l If the target table is created as part of moving partitions, the new table has the same owner as the source table. If the target table already exists, the user must own the target table and have the ability to call this function. Example If you call MOVE_PARTITIONS_TO_TABLE and the destination table does not exist, the function creates the table automatically: VMART=> SELECT MOVE_PARTITIONS_TO_TABLE ( 'prod_trades', '200801', '200801', 'partn_backup.trades_200801'); MOVE_PARTITIONS_TO_TABLE --------------------------------------------------------------------------- 1 distinct partition values moved at epoch 15. Effective move epoch: 14. (1 row) See Also l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l Moving Partitions l Creating a Table Like Another PARTITION_PROJECTION Forces a split of ROS containers of the specified projection. Syntax PARTITION_PROJECTION ( '[[db-name.]schema.]projection_name' )
PARTITION_PROJECTION() purges data while partitioning ROS containers if deletes were applied before the AHM epoch. Example The following command forces a split of ROS containers on the states_p_node01 projection: => SELECT PARTITION_PROJECTION ('states_p_node01'); partition_projection -----------------------Projection partitioned (1 row) HP Vertica Analytic Database (7.0.x) Page 664 of 1539 SQL Reference Manual SQL Functions See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_TABLE l Working with Table Partitions PARTITION_TABLE Forces the system to break up any ROS containers that contain multiple distinct values of the partitioning expression. Only ROS containers with more than one distinct value participate in the split. Syntax PARTITION_TABLE ( '[[db-name.]schema.]table_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name Specifies the name of the table. Privileges l Table owner l USAGE privilege on schema HP Vertica Analytic Database (7.0.x) Page 665 of 1539 SQL Reference Manual SQL Functions Notes PARTITION_TABLE is similar to PARTITION_PROJECTION, except that PARTITION_TABLE works on the specified table. Users must have USAGE privilege on schema that contains the table. Partitioning functions take immutable functions only, in order that the same information be available across all nodes. Example The following example creates a simple table called states and partitions data by state. => CREATE TABLE states (year INTEGER NOT NULL, state VARCHAR NOT NULL) PARTITION BY state; => CREATE PROJECTION states_p (state, year) AS SELECT * FROM states ORDER BY state, year UNSEGMENTED ALL NODES; Now call the PARTITION_TABLE function to partition table states: => SELECT PARTITION_TABLE('states'); PARTITION_TABLE ------------------------------------------------------partition operation for projection 'states_p_node0004' partition operation for projection 'states_p_node0003' partition operation for projection 'states_p_node0002' partition operation for projection 'states_p_node0001' (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l Working with Table Partitions HP Vertica Analytic Database (7.0.x) Page 666 of 1539 SQL Reference Manual SQL Functions PURGE Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained. Purges all projections in the physical schema. PURGE does not delete temporary tables. Syntax PURGE() Privileges l Table owner l USAGE privilege on schema Notes l PURGE() was formerly named PURGE_ALL_PROJECTIONS. HP Vertica supports both function calls. Caution: PURGE could temporarily take up significant disk space while the data is being purged. See Also l MERGE_PARTITIONS l PARTITION_TABLE l PURGE_PROJECTION l PURGE_TABLE l STORAGE_CONTAINERS l Purging Deleted Data PURGE_PARTITION Purges a table partition of deleted rows. 
Similar to PURGE() and PURGE_PROJECTION(), this function removes deleted data from physical storage so you can reuse the disk space. PURGE_PARTITION() removes data from the AHM epoch and earlier only. HP Vertica Analytic Database (7.0.x) Page 667 of 1539 SQL Reference Manual SQL Functions Syntax PURGE_PARTITION ( '[[db_name.]schema_name.]table_name', partition_key ) Parameters [[db_name.]schema_name.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name The name of the partitioned table partition_key The key of the partition to be purged of deleted rows Privileges l Table owner l USAGE privilege on schema Example The following example lists the count of deleted rows for each partition in a table, then calls PURGE_ PARTITION() to purge the deleted rows from the data. => SELECT partition_key,table_schema,projection_name,sum(deleted_row_count) AS deleted_row_count FROM partitions GROUP BY partition_key,table_schema,projection_name ORDER BY partition_key; partition_key | table_schema | projection_name | deleted_row_count ---------------+--------------+-----------------+------------------0 | public | t_super | 2 1 | public | t_super | 2 2 | public | t_super | 2 3 | public | t_super | 2 4 | public | t_super | 2 5 | public | t_super | 2 6 | public | t_super | 2 7 | public | t_super | 2 8 | public | t_super | 2 9 | public | t_super | 1 HP Vertica Analytic Database (7.0.x) Page 668 of 1539 SQL Reference Manual SQL Functions (10 rows) => SELECT PURGE_PARTITION('t',5); -- Purge partition with key 5. purge_partition -----------------------------------------------------------------------Task: merge partitions (Table: public.t) (Projection: public.t_super) (1 row) => SELECT partition_key,table_schema,projection_name,sum(deleted_row_count) AS deleted_row_count FROM partitions GROUP BY partition_key,table_schema,projection_name ORDER BY partition_key; partition_key | table_schema | projection_name | deleted_row_count ---------------+--------------+-----------------+------------------0 | public | t_super | 2 1 | public | t_super | 2 2 | public | t_super | 2 3 | public | t_super | 2 4 | public | t_super | 2 5 | public | t_super | 0 6 | public | t_super | 2 7 | public | t_super | 2 8 | public | t_super | 2 9 | public | t_super | 1 (10 rows) See Also l MERGE_PARTITIONS l PURGE l PURGE_PROJECTION l PURGE_TABLE l STORAGE_CONTAINERS PURGE_PROJECTION Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained. Purges the specified projection. Caution: PURGE_PROJECTION could temporarily take up significant disk space while purging the data. HP Vertica Analytic Database (7.0.x) Page 669 of 1539 SQL Reference Manual SQL Functions Syntax PURGE_PROJECTION ( '[[db-name.]schema.]projection_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). 
You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection_name Identifies the projection name. When using more than one schema, specify the schema that contains the projection, as noted above. Privileges l Table owner l USAGE privilege on schema Notes See PURGE for notes about the outcome of purge operations. See Also l PURGE_TABLE l STORAGE_CONTAINERS l Purging Deleted Data PURGE_TABLE Note: This function was formerly named PURGE_TABLE_PROJECTIONS(). HP Vertica still supports the former function name. Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained. HP Vertica Analytic Database (7.0.x) Page 670 of 1539 SQL Reference Manual SQL Functions Purges all projections of the specified table. You cannot use this function to purge temporary tables. Syntax PURGE_TABLE ( '[[db-name.]schema.]table_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name Specifies the table to purge. Privileges l Table owner l USAGE privilege on schema Caution: PURGE_TABLE could temporarily take up significant disk space while the data is being purged. Example The following example purges all projections for the store sales fact table located in the Vmart schema: => SELECT PURGE_TABLE('store.store_sales_fact'); See Also l PURGE l PURGE_TABLE l STORAGE_CONTAINERS l Purging Deleted Data HP Vertica Analytic Database (7.0.x) Page 671 of 1539 SQL Reference Manual SQL Functions REALIGN_CONTROL_NODES Chooses control nodes (spread hosts) from all cluster nodes and assigns the rest of the nodes in the cluster to a control node. Calling this function respects existing fault groups, which you can view by querying the V_CATALOG.CLUSTER_LAYOUT system table. This view also lets you see the proposed new layout for nodes in the cluster. Note: You use this function with other cluster management functions. For details, see Defining and Realigning Control Nodes on an Existing Cluster in the Administrator's Guide. Syntax REALIGN_CONTROL_NODES() Privileges Must be a superuser. Example The following command chooses control nodes from all cluster nodes and assigns the rest of the nodes in the cluster to a control node: => SELECT realign_control_nodes(); See Also Cluster Management Functions V_CATALOG.CLUSTER_LAYOUT Large Cluster in the Administrator's Guide REBALANCE_CLUSTER Call this function to begin rebalancing data in the cluster synchronously. 
A rebalance operation performs the following tasks: l Distributes data based on user-defined fault groups, if specified, or based on large cluster automatic fault groups l Redistributes the database projections' data across all nodes HP Vertica Analytic Database (7.0.x) Page 672 of 1539 SQL Reference Manual SQL Functions l Refreshes projections l Sets the Ancient History Mark l Drops projections that are no longer needed When to rebalance the cluster Rebalancing is useful (or necessary) after you: l Mark one or more nodes as ephemeral in preparation of removing them from the cluster l Add one or more nodes to the cluster so HP Vertica can populate the empty nodes with data l Remove one or more nodes from the cluster so HP Vertica can redistribute the data among the remaining nodes l Change the scaling factor of an elastic cluster, which determines the number of storage containers used to store a projection across the database l Set the control node size or realign control nodes on a large cluster layout l Specify more than 120 nodes in your initial HP Vertica cluster configuration l Add nodes to or remove nodes from a fault group Because this function runs the rebalance task synchronously, it does not return until the data has been rebalanced. Closing or dropping the session cancels the rebalance task. Important: On large cluster arrangements, you typically use this function in a flow, described Defining and Realigning Control Nodes in the Administrator's Guide. After you change the number and distribution of control nodes (spread hosts), you must run REBALANCE_CLUSTER() for fault tolerance to be realized. Syntax REBALANCE_CLUSTER() Privileges Must be a superuser. Example The following command rebalances data across the cluster. HP Vertica Analytic Database (7.0.x) Page 673 of 1539 SQL Reference Manual SQL Functions => SELECT REBALANCE_CLUSTER(); REBALANCE_CLUSTER ------------------REBALANCED (1 row) See Also START_REBALANCE_CLUSTER CANCEL_REBALANCE_CLUSTER Rebalancing Data Across Nodes in the Administrator's Guide REENABLE_DUPLICATE_KEY_ERROR Restores the default behavior of error reporting by reversing the effects of DISABLE_DUPLICATE_ KEY_ERROR. Effects are session scoped. Syntax REENABLE_DUPLICATE_KEY_ERROR(); Privileges Must be a superuser. Examples For examples and usage, see DISABLE_DUPLICATE_KEY_ERROR. See Also l ANALYZE_CONSTRAINTS REFRESH Performs a synchronous, optionally-targeted refresh of a specified table's projections. Information about a refresh operation—whether successful or unsuccessful—is maintained in the PROJECTION_REFRESHES system table until either the CLEAR_PROJECTION_ REFRESHES() function is executed or the storage quota for the table is exceeded. The PROJECTION_REFRESHES.IS_EXECUTING column returns a boolean value that indicates whether the refresh is currently running (t) or occurred in the past (f). HP Vertica Analytic Database (7.0.x) Page 674 of 1539 SQL Reference Manual SQL Functions Syntax REFRESH ( '[[db-name.]schema.]table_name [ , ... ]' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). 
table_name Specifies the name of a specific table containing the projections to be refreshed. The REFRESH() function attempts to refresh all the tables provided as arguments in parallel. Such calls will be part of the Database Designer deployment (and deployment script). When using more than one schema, specify the schema that contains the table, as noted above. Returns Column Name Description Projection Name The name of the projection that is targeted for refresh. Anchor Table The name of the projection's associated anchor table. Status The status of the projection: l Queued—Indicates that a projection is queued for refresh. l Refreshing—Indicates that a refresh for a projection is in process. l Refreshed—Indicates that a refresh for a projection has successfully completed. l Failed—Indicates that a refresh for a projection did not successfully complete. HP Vertica Analytic Database (7.0.x) Page 675 of 1539 SQL Reference Manual SQL Functions Refresh Method The method used to refresh the projection: l Buddy—Uses the contents of a buddy to refresh the projection. This method maintains historical data. This enables the projection to be used for historical queries. l Scratch—Refreshes the projection without using a buddy. This method does not generate historical data. This means that the projection cannot participate in historical queries from any point before the projection was refreshed. Error Count The number of times a refresh failed for the projection. Duration (sec) The length of time that the projection refresh ran in seconds. Privileges REFRESH() works only if invoked on tables owned by the calling user. Notes l Unlike START_REFRESH(), which runs in the background, REFRESH() runs in the foreground of the caller's session. l The REFRESH() function refreshes only the projections in the specified table. l If you run REFRESH() without arguments, it refreshes all non up-to-date projections. If the function returns a header string with no results, then no projections needed refreshing. Examples The following example refreshes the projections in tables t1 and t2: => SELECT REFRESH('t1, t2'); REFRESH ---------------------------------------------------------------------------------------Refresh completed with the following outcomes: Projection Name: [Anchor Table] [Status] [Refresh Method] [Error Count] [Duration (sec)] ---------------------------------------------------------------------------------------"public"."t1_p": [t1] [refreshed] [scratch] [0] [0]"public"."t2_p": [t2] [refreshed] [scr atch] [0] [0] This next example shows that only the projection on table t was refreshed: => SELECT REFRESH('allow, public.deny, t'); HP Vertica Analytic Database (7.0.x) Page 676 of 1539 SQL Reference Manual SQL Functions REFRESH ---------------------------------------------------------------------------------------Refresh completed with the following outcomes: Projection Name: [Anchor Table] [Status] [Refresh Method] [Error Count] [Duration (sec)] ---------------------------------------------------------------------------------------"n/a"."n/a": [n/a] [failed: insufficient permissions on table "allow"] [] [1] [0] "n/a"."n/a": [n/a] [failed: insufficient permissions on table "public.deny"] [] [1] [0] "public"."t_p1": [t] [refreshed] [scratch] [0] [0] See Also l CLEAR_PROJECTION_REFRESHES l PROJECTION_REFRESHES l START_REFRESH l Clearing PROJECTION_REFRESHES History RELEASE_ALL_JVM_MEMORY Forces all sessions to release the memory consumed by their Java Virtual Machines (JVM). 
Syntax RELEASE_ALL_JVM_MEMORY(); Permissions Must be a superuser. Example The following example demonstrates viewing the JVM memory use in all open sessions, then calling RELEASE_ALL_JVM_MEMORY() to release the memory: => SELECT user_name,jvm_memory_kb FROM V_MONITOR.SESSIONS; user_name | jvm_memory_kb -----------+--------------- dbadmin | 79705 (1 row) => SELECT RELEASE_ALL_JVM_MEMORY(); RELEASE_ALL_JVM_MEMORY ----------------------------------------------------------------------------- Close all JVM sessions command sent. Check v_monitor.sessions for progress. (1 row) => SELECT user_name,jvm_memory_kb FROM V_MONITOR.SESSIONS; user_name | jvm_memory_kb -----------+--------------- dbadmin | 0 (1 row) See Also l RELEASE_JVM_MEMORY RELEASE_JVM_MEMORY Terminates a Java Virtual Machine (JVM), making the memory the JVM was using available again. Syntax RELEASE_JVM_MEMORY(); Privileges None. Examples The following example calls RELEASE_JVM_MEMORY() to terminate the JVM and release its memory: => SELECT RELEASE_JVM_MEMORY(); release_jvm_memory ----------------------------------------- Java process killed and memory released (1 row) See Also l RELEASE_ALL_JVM_MEMORY RELOAD_SPREAD Calling this function with the required true argument updates the catalog's spread configuration file with cluster changes, such as new or realigned control nodes (spread hosts), new or changed fault groups, and added or dropped cluster nodes. Important: This function is often used in a multi-step process for large and elastic cluster arrangements. Calling RELOAD_SPREAD(true) might require that you restart the database, which you do using the Administration Tools. You must then rebalance the cluster for fault tolerance to be realized. See Defining and Realigning Control Nodes in the Administrator's Guide for more information. Syntax RELOAD_SPREAD(true) Parameters true Updates cluster changes related to control message responsibilities to the spread configuration file. Privileges Must be a superuser. Example The following command updates the cluster with changes to control messaging: => SELECT reload_spread(true); reload_spread --------------- reloaded (1 row) See Also Cluster Management Functions REBALANCE_CLUSTER V_CATALOG.CLUSTER_LAYOUT Large Cluster in the Administrator's Guide RESET_LOAD_BALANCE_POLICY Resets the counter each host in the cluster maintains to track which host it will refer a client to when the native connection load balancing scheme is set to ROUNDROBIN. Syntax RESET_LOAD_BALANCE_POLICY() Notes This function only has an effect if the current native connection load balancing scheme is ROUNDROBIN. Permissions This function can be called only by a superuser. Example The following example demonstrates calling RESET_LOAD_BALANCE_POLICY: => SELECT RESET_LOAD_BALANCE_POLICY(); RESET_LOAD_BALANCE_POLICY ------------------------------------------------------------------------- Successfully reset stateful client load balance policies: "roundrobin". (1 row) RESTORE_LOCATION Restores a storage location that was previously retired with RETIRE_LOCATION. Syntax RESTORE_LOCATION ( 'path', 'node' ) Parameters path Specifies where the retired storage location is mounted. node Is the HP Vertica node where the retired location is available.
HP Vertica Analytic Database (7.0.x) Page 680 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Effects of Restoring a Previously Retired Location After restoring a storage location, HP Vertica re-ranks all of the cluster storage locations and uses the newly-restored location to process queries as determined by its rank. Monitoring Storage Locations Disk storage information that the database uses on each node is available in the V_ MONITOR.DISK_STORAGE system table. Example The following example restores the retired storage location on node3: => SELECT RESTORE_LOCATION ('/thirdHP VerticaStorageLocation/' , 'v_vmartdb_node0004'); See Also l Altering Storage Location Use l ADD_LOCATION l ALTER_LOCATION_USE l DROP_LOCATION l RETIRE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_ VIEW Restores the _keys table and the _view, linking them with their associated flex table if either is dropped. This function notes whether it restores one or both. HP Vertica Analytic Database (7.0.x) Page 681 of 1539 SQL Reference Manual SQL Functions Usage restore_flextable_default_keys_table_and_view('flex_table') Arguments flex_table The name of the flex table . Examples This example invokes the function with an existing flex table, restoring both the _keys table and _ view: kdb=> select restore_flextable_default_keys_table_and_view('darkdata'); restore_flextable_default_keys_table_and_view ---------------------------------------------------------------------------------The keys table public.darkdata_keys was restored successfully. The view public.darkdata_view was restored successfully. (1 row) This example shows the function restoring darkdata_view, but noting that darkdata_keys does not need restoring: kdb=> select restore_flextable_default_keys_table_and_view('darkdata'); restore_flextable_default_keys_table_and_view -----------------------------------------------------------------------------------------------The keys table public.darkdata_keys already exists and is linked to darkdata. The view public.darkdata_view was restored successfully. (1 row) The _keys table has no content after it is restored: kdb=> select * from darkdata_keys; key_name | frequency | data_type_guess ----------+-----------+----------------(0 rows) See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS HP Vertica Analytic Database (7.0.x) Page 682 of 1539 SQL Reference Manual SQL Functions l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l MATERIALIZE_FLEXTABLE_COLUMNS RETIRE_LOCATION Makes the specified storage location inactive. Syntax RETIRE_LOCATION ( 'path', 'node' ) Parameters path Specifies where the storage location to retire is mounted. node Is the HP Vertica node where the location is available. Privileges Must be a superuser. Effects of Retiring a Storage Location When you use this function, HP Vertica checks that the location is not the only storage for data and temp files. At least one location must exist on each node to store data and temp files, though you can store both sorts of files in either the same location, or separate locations. Note: You cannot retire a location if it is used in a storage policy, and is the last available storage for its associated objects. When you retire a storage location: l No new data is stored at the retired location, unless you first restore it with the RESTORE_ LOCATION() function. l If the storage location being retired contains stored data, the data is not moved, so you cannot drop the storage location. 
Instead, HP Vertica removes the stored data through one or more mergeouts. l If the storage location being retired was used only for temp files, you can drop the location. See Dropping Storage Locations in the Administrator's Guide and the DROP_LOCATION() function. Monitoring Storage Locations Disk storage information that the database uses on each node is available in the V_MONITOR.DISK_STORAGE system table. Example The following example retires a storage location: => SELECT RETIRE_LOCATION ('/secondVerticaStorageLocation/' , 'v_vmartdb_node0004'); See Also l Retiring Storage Locations l ADD_LOCATION l ALTER_LOCATION_USE l DROP_LOCATION l RESTORE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) SET_AHM_EPOCH Sets the Ancient History Mark (AHM) to the specified epoch. This function allows deleted data up to and including the AHM epoch to be purged from physical storage. SET_AHM_EPOCH is normally used for testing purposes. Consider SET_AHM_TIME instead, which is easier to use. Syntax SET_AHM_EPOCH ( epoch, [ true ] ) Parameters epoch Specifies one of the following: l The number of the epoch in which to set the AHM l Zero (0) (the default), which disables PURGE true [Optional] Allows the AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch. Use with care. Privileges Must be a superuser. Notes If you use SET_AHM_EPOCH, the number of the specified epoch must be: l Greater than the current AHM epoch l Less than the current epoch l Less than or equal to the cluster last good epoch (the minimum of the last good epochs of the individual nodes in the cluster) l Less than or equal to the cluster refresh epoch (the minimum of the refresh epochs of the individual nodes in the cluster) Use the SYSTEM table to see current values of various epochs related to the AHM; for example: => SELECT * from SYSTEM; -[ RECORD 1 ]------------+--------------------------- current_timestamp | 2009-08-11 17:09:54.651413 current_epoch | 1512 ahm_epoch | 961 last_good_epoch | 1510 refresh_epoch | -1 designed_fault_tolerance | 1 node_count | 4 node_down_count | 0 current_fault_tolerance | 1 catalog_revision_number | 1590 wos_used_bytes | 0 wos_row_count | 0 ros_used_bytes | 41490783 ros_row_count | 1298104 total_used_bytes | 41490783 total_row_count | 1298104 All nodes must be up. You cannot use SET_AHM_EPOCH when any node in the cluster is down, except by using the optional true parameter. When a node is down and you issue SELECT MAKE_AHM_NOW(), the following error is printed to the vertica.log: Some nodes were excluded from setAHM. If their LGE is before the AHM they will perform full recovery. Examples The following command sets the AHM to a specified epoch of 12: => SELECT SET_AHM_EPOCH(12); The following command sets the AHM to a specified epoch of 2 and allows the AHM to advance despite a failed node: => SELECT SET_AHM_EPOCH(2, true); See Also l MAKE_AHM_NOW l SET_AHM_TIME l SYSTEM SET_AHM_TIME Sets the Ancient History Mark (AHM) to the epoch corresponding to the specified time on the initiator node. This function allows historical data up to and including the AHM epoch to be purged from physical storage.
Syntax SET_AHM_TIME ( time , [ true ] ) HP Vertica Analytic Database (7.0.x) Page 686 of 1539 SQL Reference Manual SQL Functions Parameters time Is a TIMESTAMP value that is automatically converted to the appropriate epoch number. true [Optional] Allows the AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch. Privileges Must be a superuser. Notes l SET_AHM_TIME returns a TIMESTAMP WITH TIME ZONE value representing the end point of the AHM epoch. l You cannot change the AHM when any node in the cluster is down, except by using the optional true parameter. l When a node is down and you issue SELECT MAKE_AHM_NOW(), the following error is printed to the vertica.log: Some nodes were excluded from setAHM. If their LGE is before the AHM they will perform full recovery. Examples Epochs depend on a configured epoch advancement interval. If an epoch includes a three-minute range of time, the purge operation is accurate only to within minus three minutes of the specified timestamp: => SELECT SET_AHM_TIME('2008-02-27 18:13'); set_ahm_time -----------------------------------AHM set to '2008-02-27 18:11:50-05' (1 row) Note: The –05 part of the output string is a time zone value, an offset in hours from UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, or GMT). In the previous example, the actual AHM epoch ends at 18:11:50, roughly one minute before the specified timestamp. This is because SET_AHM_TIME selects the epoch that ends at or before HP Vertica Analytic Database (7.0.x) Page 687 of 1539 SQL Reference Manual SQL Functions the specified timestamp. It does not select the epoch that ends after the specified timestamp because that would purge data deleted as much as three minutes after the AHM. For example, using only hours and minutes, suppose that epoch 9000 runs from 08:50 to 11:50 and epoch 9001 runs from 11:50 to 15:50. SET_AHM_TIME('11:51') chooses epoch 9000 because it ends roughly one minute before the specified timestamp. In the next example, if given an environment variable set as date =`date`; the following command fails if a node is down: => SELECT SET_AHM_TIME('$date'); In order to force the AHM to advance, issue the following command instead: => SELECT SET_AHM_TIME('$date', true); See Also l MAKE_AHM_NOW l SET_AHM_EPOCH l SET DATESTYLE l TIMESTAMP SET_AUDIT_TIME Sets the time that HP Vertica performs automatic database size audit to determine if the size of the database is compliant with the raw data allowance in your HP Vertica license. Use this function if the audits are currently scheduled to occur during your database's peak activity time. This is normally not a concern, since the automatic audit has little impact on database performance. Audits are scheduled by the preceding audit, so changing the audit time does not affect the next scheduled audit. For example, if your next audit is scheduled to take place at 11:59PM and you use SET_AUDIT_TIME to change the audit schedule 3AM, the previously scheduled 11:59PM audit still runs. As that audit finishes, it schedules the next audit to occur at 3AM. If you want to prevent the next scheduled audit from running at its scheduled time, you can change the scheduled time using SET_AUDIT_TIME then manually trigger an audit to run immediately using AUDIT_LICENSE_SIZE. 
As the manually-triggered audit finishes, it schedules the next audit to occur at the time you set using SET_AUDIT_TIME (effectively overriding the previously scheduled audit). Syntax SET_AUDIT_TIME(time) time A string containing the time in 'HH:MM AM/PM' format (for example, '1:00 AM') when the audit should run daily. HP Vertica Analytic Database (7.0.x) Page 688 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Example => SELECT SET_AUDIT_TIME('3:00 AM'); SET_AUDIT_TIME ----------------------------------------------------------------------The scheduled audit time will be set to 3:00 AM after the next audit. (1 row) SET_CONTROL_SET_SIZE For existing database clusters, use this function to specify the number of cluster nodes on which you want to deploy control messaging (spread). The SET_CONTROL_SET_SIZE() function works the same as the install_vertica --large cluster option. You can run SET_CONTROL_SET_SIZE()after the database cluster is already defined, but before you call this function, the database must be up. Note: You use this function with other cluster management functions. For details, see Defining and Realigning Control Nodes on an Existing Cluster in the Administrator's Guide. Syntax SET_CONTROL_SET_SIZE(integer) Parameters integer Specifies the number of cluster hosts from the database cluster on which spread runs. Privileges Must be a superuser. Note To see if the current spread hosts and the control designations in the Catalog match, query the V_ CATALOG.LARGE_CLUSTER_CONFIGURATION_STATUS system table. HP Vertica Analytic Database (7.0.x) Page 689 of 1539 SQL Reference Manual SQL Functions Example The following command tells HP Vertica that you want to run spread on two cluster nodes: => SELECT set_control_set_size(2); SET_CONTROL_SET_SIZE ---------------------Control size set (1 row) See Also Cluster Management Functions V_CATALOG.CLUSTER_LAYOUT Large Cluster in the Administrator's Guide SET_DATA_COLLECTOR_POLICY Sets a size restraint (memory and disk space in kilobytes) for the specified Data Collector table on all nodes. If nodes are down, the failed nodes receive the setting when they rejoin the cluster. You can use this function to set a size restraint only, or you can include the optional interval argument to set disk capacity for both size and time in a single command. If you specify interval, HP Vertica enforces the setting that is exceeded first (size or time). Before you include a time restraint, be sure the disk size capacity is sufficiently large. If you want to specify just a time restraint, or you want to turn off a time restraint you set using this function, see SET_DATA_COLLECTOR_TIME_POLICY(). Syntax SET_DATA_COLLECTOR_POLICY('component', 'memoryKB', 'diskKB' [,'interval'] ) Parameters component Configures the retention policy for the specified component. memoryKB Specifies the memory size to retain in kilobytes. diskKB Specifies the disk size in kilobytes. HP Vertica Analytic Database (7.0.x) Page 690 of 1539 SQL Reference Manual SQL Functions interval [Default off] Takes an optional interval argument to specify how long to retain the specified component on disk. To disable a time restraint, set interval to -1. Note: Any negative input will turn off the time restraint Privileges Must be a superuser. Notes l Before you change a retention policy, view its current setting by calling the GET_DATA_ COLLECTOR_POLICY() function. 
l If you don't know the name of a component, query the V_MONITOR.DATA_COLLECTOR system table for a list; for example: => SELECT DISTINCT component, description FROM data_collector ORDER BY 1 ASC; Examples The following command returns the retention policy for the ResourceAcquisitions component: => SELECT get_data_collector_policy('ResourceAcquisitions'); get_data_collector_policy ---------------------------------------------1000KB kept in memory, 10000KB kept on disk. (1 row) This command changes the memory and disk setting for ResourceAcquisitions from its current setting of 1,000 KB memory and 10,000 KB disk space to 1500 KB and 25000 KB, respectively: => SELECT set_data_collector_policy('ResourceAcquisitions', '1500', '25000'); set_data_collector_policy --------------------------SET (1 row) This command sets the RequestsIssued component to 1500 KB memory and 11000 KB on disk, and includes a 3-minute time restraint: => SELECT set_data_collector_policy('RequestsIssued', '1500', '11000', '3 minutes'::inter val); set_data_collector_policy HP Vertica Analytic Database (7.0.x) Page 691 of 1539 SQL Reference Manual SQL Functions --------------------------SET (1 row) The following command disables the 3-minute retention policy for the RequestsIssued component: => SELECT set_data_collector_policy('RequestsIssued', '-1'); set_data_collector_policy --------------------------SET (1 row) See Also l GET_DATA_COLLECTOR_POLICY l SET_DATA_COLLECTOR_TIME_POLICY() l DATA_COLLECTOR l Retaining Monitoring Information in the Administrator's Guide SET_DATA_COLLECTOR_TIME_POLICY Sets a time capacity for individual Data Collector tables on all nodes. If nodes are down, the failed nodes receive the setting when they rejoin the cluster. If you want to configure both time and size restraints at the same time, see SET_DATA_COLLECTOR_ POLICY(). Syntax SET_DATA_COLLECTOR_TIME_POLICY( ['component',] 'interval' ) Parameters component [Optional] Configures the time retention policy for the specified component. If you omit the component argument, HP Vertica sets the specified time capacity for all Data Collector tables. interval Specifies the time restraint on disk using an INTERVAL type. To disable a time restraint, set interval to -1. Note: Any negative input turns off the time restraint HP Vertica Analytic Database (7.0.x) Page 692 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Notes l Before you change a retention policy, view its current setting by calling the GET_DATA_ COLLECTOR_POLICY() function. l If you don't know the name of a component, query the V_MONITOR.DATA_COLLECTOR system table for a list. 
For example: => SELECT DISTINCT component, description FROM data_collector ORDER BY 1 ASC; Setting time interval for system tables You can also use the interval argument to query system tables the same way you query Data Collector tables; for example: set_data_collector_time_policy(' ', <'interval'>); To illustrate, the following command in the left column is equivalent to running the series of commands on the right: Run one command Instead of a series of commands SELECT set_data_collector_time_policy ('v_monitor.query_requests', '3 minutes'::interval); SELECT set_data_collector_time_policy ('RequestsIssued', '3 minutes'::interval); SELECT set_data_collector_time_policy ('RequestsCompleted', '3 minutes'::interval); SELECT set_data_collector_time_policy ('Errors', '3 minutes'::interval); SELECT set_data_collector_time_policy ('ResourceAcquisitions', '3 minutes'::interval); The SET_DATA_COLLECTOR_TIME_POLICY() function updates the time capacity for all Data Collector tables in the V_MONITOR.QUERY_REQUESTS view. The new setting overrides any previous settings for every Data Collector table in that view. HP Vertica Analytic Database (7.0.x) Page 693 of 1539 SQL Reference Manual SQL Functions Examples The following command configures the Backups component to be retained on disk for 1 day: => SELECT set_data_collector_time_policy('Backups', '1 day'::interval); set_data_collector_time_policy -------------------------------SET (1 row) This command disables the 1-day restraint for the Backups component: => SELECT set_data_collector_time_policy('Backups', '-1'); set_data_collector_time_policy -------------------------------SET (1 row) This command sets a 30-minute time capacity for all Data Collector tables in a single command: => SELECT set_data_collector_time_policy('30 minutes'::interval); set_data_collector_time_policy -------------------------------SET (1 row) To view current retention policy settings for each Data Collector table, call the GET_DATA_ COLLECTION_POLICY() function. In the next example, the time restraint is included. => SELECT get_data_collector_policy('RequestsIssued'); get_data_collector_policy ----------------------------------------------------------------------------2000KB kept in memory, 50000KB kept on disk. 2 years 3 days 15:08 hours kept on disk. (1 row) If the time policy setting is disabled, the output of GET_DATA_COLLECTION_POLICY() returns "Time based retention disabled." 2000KB kept in memory, 50000KB kept on disk. Time based retention disabled. See Also l GET_DATA_COLLECTOR_POLICY l SET_DATA_COLLECTOR_POLICY HP Vertica Analytic Database (7.0.x) Page 694 of 1539 SQL Reference Manual SQL Functions l DATA_COLLECTOR SET_LOAD_BALANCE_POLICY Sets how native connection load balancing chooses a host to handle a client connection. See About Native Connection Load Balancing in the Administrator's Guide for more information. Syntax SET_LOAD_BALANCE_POLICY('policy') Parameters policy The name of the load balancing policy to use. Can be one of the following: l NONE: Disables native connection load balancing. This is the default setting. l ROUNDROBIN: Chooses the next host from a circular list of currently up hosts in the database (i.e. node #1, node #2, node #3, etc. until it wraps back to node #1 again). Each host in the cluster maintains its own pointer to the next host in the circular list, rather than there being a single cluster-wide state. l RANDOM: Chooses a host at random from the list of currently up hosts in the cluster. 
Notes Even if the load balancing policy is set to something other than NONE on the server, clients must indicate they want their connections to be load balanced by setting a connection property. Permissions Can be used only by a superuser. Example The following example demonstrates enabling native connection load balancing on the server by setting the load balancing scheme to ROUNDROBIN: => SELECT SET_LOAD_BALANCE_POLICY('ROUNDROBIN'); SET_LOAD_BALANCE_POLICY -------------------------------------------------------------------------------- Successfully changed the client initiator load balancing policy to: roundrobin (1 row) SET_LOCATION_PERFORMANCE Sets disk performance for the specified storage location. Syntax SET_LOCATION_PERFORMANCE ( 'path' , 'node' , 'throughput' , 'average_latency' ) Parameters path Specifies where the storage location to set is mounted. node Is the HP Vertica node where the location to be set is available. If this parameter is omitted, node defaults to the initiator. throughput Specifies the throughput for the location, which must be 1 or more. average_latency Specifies the average latency for the location. The average_latency must be 1 or more. Privileges Must be a superuser. Notes To obtain the throughput and average latency for the location, run the MEASURE_LOCATION_PERFORMANCE() function before you attempt to set the location's performance. Example The following example sets the performance of a storage location on node2 to a throughput of 122 megabytes per second and a latency of 140 seeks per second. => SELECT SET_LOCATION_PERFORMANCE('/secondVerticaStorageLocation/','node2','122','140'); See Also l ADD_LOCATION l MEASURE_LOCATION_PERFORMANCE l Measuring Storage Performance l Setting Storage Performance SET_SCALING_FACTOR Sets the scaling factor that determines the size of the storage containers used when rebalancing the database and when local data segmentation is enabled. See Cluster Scaling for details. Syntax SET_SCALING_FACTOR(factor) Parameters factor An integer value between 1 and 32. HP Vertica uses this value to calculate the number of storage containers each projection is broken into when rebalancing or when local data segmentation is enabled. Note: Setting the scaling factor value too high can cause nodes to create too many small container files, greatly reducing efficiency and potentially causing a "Too many ROS containers" error (also known as "ROS pushback"). Set the scaling factor high enough that rebalance can transfer local segments to satisfy the skew threshold, but small enough that the number of storage containers does not trigger ROS pushback. The number of storage containers should be greater than or equal to the number of partitions multiplied by the number of local segments (# storage containers >= # partitions * # local segments). Privileges Must be a superuser. Example => SELECT SET_SCALING_FACTOR(12); SET_SCALING_FACTOR -------------------- SET (1 row) SET_OBJECT_STORAGE_POLICY Creates or changes an object storage policy by associating a database object with a labeled storage location. Note: You cannot create a storage policy on a USER type storage location.
Syntax SET_OBJECT_STORAGE_POLICY ( 'object_name', 'location_label' [, 'key_min, key_max'] [, 'enforc e_storage_move' ] ) Parameters object_name Identifies the database object assigned to a labeled storage location. The object_name can resolve to a database, schema, or table. location_label The label of the storage location with which object_name is being associated. key_min, key_max Applicable only when object_name is a table, key_min and key_max specify the table partition key value range to be stored at the location. enforce_storage_move= {true | false} [Optional] Applicable only when setting a storage policy for an object that has data stored at another labeled location. Specify this parameter as true to move all existing storage data to the target location within this function's transaction. Privileges Must be the object owner to set the storage policy, and have access to the storage location. New Storage Policy If an object does not have a storage policy, this function creates a new policy. The labeled location is then used as the default storage location during TM operations, such as moveout and mergeout. Existing Storage Policy If the object already has an active storage policy, calling this function changes the default storage for the object to the new labeled location. Any existing data stored on the previous storage location is marked to move to the new location during the next TM moveout operations, unless you use the enforce_storage_move option. Forcing Existing Data Storage to a New Storage Location You can optionally use this function to move existing data storage to a new location as part of completing the current transaction, by specifying the last parameter as true. HP Vertica Analytic Database (7.0.x) Page 698 of 1539 SQL Reference Manual SQL Functions To move existing data as part of the next TM moveout, either omit the parameter, or specify its value as false. Note: Specifying the parameter as true performs a cluster-wide operation. If an error occurs on any node, the function displays a warning message, skips the offending node, and continues execution on the remaining nodes. Example This example sets a storage policy for the table states to use the storage labeled SSD as its default location: VMART=> select set_object_storage_policy ('states', 'SSD'); set_object_storage_policy ----------------------------------Default storage policy set. (1 row) See Also l ALTER_LOCATION_LABEL l CLEAR_OBJECT_STORAGE_POLICY l Creating Storage Policies l Moving Data Storage Locations SHUTDOWN Forces a database to shut down, even if there are users connected. Syntax SHUTDOWN ( [ 'false' | 'true' ] ) Parameters false [Default] Returns a message if users are connected. Has the same effect as supplying no parameters. true Performs a moveout operation and forces the database to shut down, disallowing further connections. HP Vertica Analytic Database (7.0.x) Page 699 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Notes l Quotes around the true or false arguments are optional. l Issuing the shutdown command without arguments or with the default (false) argument returns a message if users are connected, and the shutdown fails. If no users are connected, the database performs a moveout operation and shuts down. l Issuing the SHUTDOWN('true') command forces the database to shut down whether users are connected or not. l You can check the status of the shutdown operation in the vertica.log file: 2010-03-09 16:51:52.625 unknown:0x7fc6d6d2e700 [Init] Shutdown complete. 
Exiting. l As an alternative to SHUTDOWN(), you can also temporarily set MaxClientSessions to 0 and then use CLOSE_ALL_SESSIONS(). New client connections cannot connect unless they connect using the dbadmin account. See CLOSE_ALL_SESSIONS for details. Examples The following command attempts to shut down the database. Because users are connected, the command fails: => SELECT SHUTDOWN('false'); NOTICE: Cannot shut down while users are connected SHUTDOWN ----------------------------Shutdown: aborting shutdown (1 row) SHUTDOWN() and SHUTDOWN('false') perform the same operation: => SELECT SHUTDOWN(); NOTICE: Cannot shut down while users are connected SHUTDOWN ----------------------------Shutdown: aborting shutdown (1 row) Using the 'true' parameter forces the database to shut down, even though clients might be connected: HP Vertica Analytic Database (7.0.x) Page 700 of 1539 SQL Reference Manual SQL Functions => SELECT SHUTDOWN('true'); SHUTDOWN ---------------------------Shutdown: moveout complete (1 row) See Also l SESSIONS SLEEP Waits a specified number of seconds before executing another statement or command. Syntax SLEEP( seconds ) Parameters seconds The wait time, specified in one or more seconds (0 or higher) expressed as a positive integer. Single quotes are optional; for example, SLEEP(3) is the same as SLEEP ('3'). Notes l This function returns value 0 when successful; otherwise it returns an error message due to syntax errors. l You cannot cancel a sleep operation. l Be cautious when using SLEEP() in an environment with shared resources, such as in combination with transactions that take exclusive locks. Example The following command suspends execution for 100 seconds: => SELECT SLEEP(100); sleep ------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 701 of 1539 SQL Reference Manual SQL Functions START_REBALANCE_CLUSTER A rebalance operation performs the following tasks: l Distributes data based on user-defined fault groups, if specified, or based on large cluster automatic fault groups l Redistributes the database projections' data across all nodes l Refreshes projections l Sets the Ancient History Mark l Drops projections that are no longer needed When to rebalance the cluster Rebalancing is useful (or necessary) after you: l Mark one or more nodes as ephemeral in preparation of removing them from the cluster l Add one or more nodes to the cluster so HP Vertica can populate the empty nodes with data l Remove one or more nodes from the cluster so HP Vertica can redistribute the data among the remaining nodes l Change the scaling factor of an elastic cluster, which determines the number of storage containers used to store a projection across the database l Set the control node size or realign control nodes on a large cluster layout l Specify more than 120 nodes in your initial HP Vertica cluster configuration l Add nodes to or remove nodes from a fault group Asynchronously starts a data rebalance task. Since this function starts the rebalance task in the background, it returns immediately after the task has started. Since it is a background task, rebalancing will continue even if the session that started it is closed. It even continues after a cluster recovery if the database shuts down while it is in progress. The only way to stop the task is by the CANCEL_REBALANCE_CLUSTER function. Syntax START_REBALANCE_CLUSTER() Privileges Must be a superuser. 
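Because this function starts the rebalance task in the background, the session that started it does not need to remain open. To stop an in-progress rebalance, call CANCEL_REBALANCE_CLUSTER from any session. A minimal sketch (the returned message is omitted here):
=> SELECT CANCEL_REBALANCE_CLUSTER();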
HP Vertica Analytic Database (7.0.x) Page 702 of 1539 SQL Reference Manual SQL Functions Example => SELECT START_REBALANCE_CLUSTER(); START_REBALANCE_CLUSTER ------------------------REBALANCING (1 row) See Also l Rebalancing Data Across Nodes l CANCEL_REBALANCE_CLUSTER l REBALANCE_CLUSTER START_REFRESH Transfers data to projections that are not able to participate in query execution due to missing or out-of-date data. Syntax START_REFRESH() Notes l When a design is deployed through the Database Designer, it is automatically refreshed. See Deploying a Design in the Administrator's Guide. l All nodes must be up in order to start a refresh. l START_REFRESH() has no effect if a refresh is already running. l A refresh is run asynchronously. l Shutting down the database ends the refresh. l To view the progress of the refresh, see the PROJECTION_REFRESHES and PROJECTIONS system tables. l If a projection is updated from scratch, the data stored in the projection represents the table columns as of the epoch in which the refresh commits. As a result, the query optimizer might not choose the new projection for AT EPOCH queries that request historical data at epochs older HP Vertica Analytic Database (7.0.x) Page 703 of 1539 SQL Reference Manual SQL Functions than the refresh epoch of the projection. Projections refreshed from buddies retain history and can be used to answer historical queries. Privileges None Example The following command starts the refresh operation: => SELECT START_REFRESH(); start_refresh ---------------------------------------Starting refresh background process. See Also l CLEAR_PROJECTION_REFRESHES l MARK_DESIGN_KSAFE l PROJECTION_REFRESHES l PROJECTIONS l Clearing PROJECTION_REFRESHES History SYNCH_WITH_HCATALOG_SCHEMA Copies the structure of a Hive database schema available through the HCatalog Connector to an Vertica Analytic Database schema. Syntax SYNC_WITH_HCATALOG_SCHEMA( local_schema, hcatalog_schema, [drop_tables] ) Parameters local_schema The existing Vertica Analytic Database schema to store the copied HCatalog schema's metadata HP Vertica Analytic Database (7.0.x) Page 704 of 1539 SQL Reference Manual SQL Functions hcatalog_ schema The HCatalog schema to copy [drop_ tables] Drop any tables in local_schema that do not correspond to a table in hcatalog_ schema Notes You should always create an empty schema for the local_schema parameter. Tables in the hcatalog_schema overwrite any identically named table in local_schema, which can lead to data loss. Permissions The user must have CREATE privileges on local_schema and USAGE permissions on hcatalog_ schema. 
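The optional drop_tables argument removes any tables in local_schema that no longer correspond to a table in hcatalog_schema. A minimal sketch, assuming the hcat_local and hcat schemas used in the example that follows, and assuming drop_tables is passed as a boolean literal:
=> SELECT SYNC_WITH_HCATALOG_SCHEMA('hcat_local', 'hcat', true);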
Example The following example shows using SYNCH_WITH_HCATALOG_SCHEMA to copy the metadata from an HCatalog schema named hcat to an Vertica Analytic Database schema named hcat_local: => CREATE SCHEMA hcat_local; CREATE SCHEMA => SELECT sync_with_hcatalog_schema('hcat_local', 'hcat'); sync_with_hcatalog_schema ---------------------------------------Schema hcat_local synchronized with hcat tables in hcat = 56 tables altered in hcat_local = 0 tables created in hcat_local = 56 stale tables in hcat_local = 0 table changes erred in hcat_local = 0 (1 row) => -- Use vsql's \d command to describe a table in the synced schema => \d hcat_local.messages List of Fields by Tables Schema | Table | Column | Type | Size | Default | Not Null | Primary K ey | Foreign Key -----------+----------+---------+----------------+-------+---------+----------+------------+------------hcat_local | messages | id | int | 8 | | f | f | hcat_local | messages | userid | varchar(65000) | 65000 | | f | f | hcat_local | messages | "time" | varchar(65000) | 65000 | | f | f | hcat_local | messages | message | varchar(65000) | 65000 | | f | f HP Vertica Analytic Database (7.0.x) Page 705 of 1539 SQL Reference Manual SQL Functions | (4 rows) HP Vertica Analytic Database (7.0.x) Page 706 of 1539 SQL Reference Manual SQL Functions Catalog Management Functions This section contains catalog management functions specific to HP Vertica. DROP_LICENSE Drops a Flex Zone license key from the global catalog. Syntax DROP_LICENSE( 'license name' ) Parameters license name The name of the license to drop. The name can be found in the licenses table. Privileges Must be a superuser. Notes For more information about license keys, see Managing Licenses in the Administrator's Guide. Examples => SELECT DROP_LICENSE('/tmp/vlicense.dat'); DUMP_CATALOG Returns an internal representation of the HP Vertica catalog. This function is used for diagnostic purposes. Syntax DUMP_CATALOG() Privileges None; however, function dumps only the objects visible to the user. HP Vertica Analytic Database (7.0.x) Page 707 of 1539 SQL Reference Manual SQL Functions Notes To obtain an internal representation of the HP Vertica catalog for diagnosis, run the query: => SELECT DUMP_CATALOG(); The output is written to the specified file: \o /tmp/catalog.txtSELECT DUMP_CATALOG(); \o EXPORT_CATALOG Generates a SQL script that you can use to recreate a physical schema design in its current state on a different cluster. This function always attempts to recreate projection statements with KSAFE clauses, if they exist in the original definitions, or OFFSET clauses if they do not. Syntax EXPORT_CATALOG ( [ 'destination' ] , [ 'scope' ] ) Parameters destination Specifies the path and name of the SQL output file. An empty string (''), which is the default, outputs the script to standard output. The function writes the script to the catalog directory if no destination is specified. If you specify a file that does not exist, the function creates one. If the file preexists, the function silently overwrites its contents. scope Determines what to export: l DESIGN—Exports schemas, tables, constraints, views, and projections to which the user has access. This is the default value. l DESIGN_ALL—Exports all the design objects plus system objects created in Database Designer (for example, design contexts and their tables). The objects that are exported are those to which the user has access. l TABLES—Exports all tables, constraints, and projections for for which the user has permissions. 
See also EXPORT_TABLES. Privileges None. However: l EXPORT_CATALOG exports only the objects visible to the user. l Only a superuser can export output to a file. Example The following example exports the design to standard output: => SELECT EXPORT_CATALOG('','DESIGN'); See Also l EXPORT_OBJECTS l EXPORT_TABLES EXPORT_OBJECTS Generates a SQL script you can use to recreate catalog objects on a different cluster. The generated script includes only the non-virtual objects to which the user has access. The function exports catalog objects in dependency order so they can be recreated correctly. Running the generated SQL script on another cluster then creates all referenced objects before their dependent objects. The EXPORT_OBJECTS function always attempts to recreate projection statements with KSAFE clauses, if they existed in the original definitions, or OFFSET clauses, if they did not. None of the EXPORT_OBJECTS parameters accepts a NULL value as input. EXPORT_OBJECTS returns an error if an explicitly-specified object does not exist, or the user does not have access to the object. Syntax EXPORT_OBJECTS( [ 'destination' ] , [ 'scope' ] , [ 'ksafe' ] ) Parameters destination Specifies the path and name of the SQL output file. The default empty string ('') outputs the script to standard output. The function writes the script to the catalog directory if no destination is specified. If you specify a file that does not exist, the function creates one. If the file preexists, the function silently overwrites its contents. scope Determines the scope of the catalog objects to export: l An empty string (' ')—exports all non-virtual objects to which the user has access, including constraints. (Note that constraints are not objects that can be passed as individual arguments.) An empty string is the default scope value if you do not limit the export. l A comma-delimited list of catalog objects to export, which can include the following: n '[dbname.]schema.object'—matches each named schema object. You can optionally qualify the schema with a database prefix. A named schema object can be a table, projection, view, sequence, or user-defined SQL function. n '[dbname.]schema'—matches the named schema, which you can optionally qualify with a database prefix. For a schema, HP Vertica exports all non-virtual objects that the user has access to within the schema. If a schema and table have the same name, the schema takes precedence. ksafe Specifies whether to incorporate a MARK_DESIGN_KSAFE statement with the correct K-safe value for the database: l true—adds the MARK_DESIGN_KSAFE statement to the end of the output script. This is the default value. l false—omits the MARK_DESIGN_KSAFE statement from the script. Privileges None. However: l EXPORT_OBJECTS exports only the objects visible to the user. l Only a superuser can export output to a file. Example The following example exports all the non-virtual objects to which the user has access to standard output. The example uses false for the last parameter, indicating that the script will not include the MARK_DESIGN_KSAFE statement at the end.
=> SELECT EXPORT_OBJECTS(' ',' ',false); HP Vertica Analytic Database (7.0.x) Page 710 of 1539 SQL Reference Manual SQL Functions See Also l EXPORT_CATALOG l EXPORT_TABLES l Exporting Objects INSTALL_LICENSE Installs the license key in the global catalog. Syntax INSTALL_LICENSE( 'filename' ) Parameters filename specifies the absolute pathname of a valid license file. Privileges Must be a superuser. Notes For more information about license keys, see Managing Your License Key in the Administrator's Guide. Examples => SELECT INSTALL_LICENSE('/tmp/vlicense.dat'); MARK_DESIGN_KSAFE Enables or disables high availability in your environment, in case of a failure. Before enabling recovery, MARK_DESIGN_KSAFE queries the catalog to determine whether a cluster's physical schema design meets the following requirements: l Small, unsegmented tables are replicated on all nodes. l Large table superprojections are segmented with each segment on a different node. HP Vertica Analytic Database (7.0.x) Page 711 of 1539 SQL Reference Manual SQL Functions l Each large table projection has at least one buddy projection for K-safety=1 (or two buddy projections for K-safety=2). Buddy projections are also segmented across database nodes, but the distribution is modified so that segments that contain the same data are distributed to different nodes. See High Availability Through Projections in the Concepts Guide. Note: Projections are considered to be buddies if they contain the same columns and have the same segmentation. They can have different sort orders. MARK_DESIGN_KSAFE does not change the physical schema in any way. Syntax MARK_DESIGN_KSAFE ( k ) Parameters k 2 enables high availability if the schema design meets requirements for K-safety=2 1 enables high availability if the schema design meets requirements for K-safety=1 0 disables high availability If you specify a k value of one (1) or two (2), HP Vertica returns one of the following messages. Success: Marked design n-safe Failure: The schema does not meet requirements for K=n. Fact table projection projection-name has insufficient "buddy" projections. n in the message is 1 or 2 and represents the k value. Privileges Must be a superuser. HP Vertica Analytic Database (7.0.x) Page 712 of 1539 SQL Reference Manual SQL Functions Notes l The database's internal recovery state persists across database restarts but it is not checked at startup time. l If a database has automatic recovery enabled, you must temporarily disable automatic recovery before creating a new table. l When one node fails on a system marked K-safe=1, the remaining nodes are available for DML operations. Examples => SELECT MARK_DESIGN_KSAFE(1); mark_design_ksafe ---------------------Marked design 1-safe (1 row) If the physical schema design is not K-Safe, messages indicate which projections do not have a buddy: => SELECT MARK_DESIGN_KSAFE(1); The given K value is not correct; the schema is 0-safe Projection pp1 has 0 buddies, which is smaller that the given K of 1 Projection pp2 has 0 buddies, which is smaller that the given K of 1 . . . (1 row) See Also l SYSTEM l High Availability and Recovery l HP Vertica System Tables l Avoiding Resegmentation During Joins l Failure Recovery HP Vertica Analytic Database (7.0.x) Page 713 of 1539 SQL Reference Manual SQL Functions SYNCH_WITH_HCATALOG_SCHEMA Copies the structure of a Hive database schema available through the HCatalog Connector to an Vertica Analytic Database schema. 
Syntax SYNC_WITH_HCATALOG_SCHEMA( local_schema, hcatalog_schema, [drop_tables] ) Parameters local_schema The existing Vertica Analytic Database schema to store the copied HCatalog schema's metadata hcatalog_ schema The HCatalog schema to copy [drop_ tables] Drop any tables in local_schema that do not correspond to a table in hcatalog_ schema Notes You should always create an empty schema for the local_schema parameter. Tables in the hcatalog_schema overwrite any identically named table in local_schema, which can lead to data loss. Permissions The user must have CREATE privileges on local_schema and USAGE permissions on hcatalog_ schema. Example The following example shows using SYNCH_WITH_HCATALOG_SCHEMA to copy the metadata from an HCatalog schema named hcat to an Vertica Analytic Database schema named hcat_local: => CREATE SCHEMA hcat_local; CREATE SCHEMA => SELECT sync_with_hcatalog_schema('hcat_local', 'hcat'); sync_with_hcatalog_schema ---------------------------------------Schema hcat_local synchronized with hcat tables in hcat = 56 tables altered in hcat_local = 0 tables created in hcat_local = 56 HP Vertica Analytic Database (7.0.x) Page 714 of 1539 SQL Reference Manual SQL Functions stale tables in hcat_local = 0 table changes erred in hcat_local = 0 (1 row) => -- Use vsql's \d command to describe a table in the synced schema => \d hcat_local.messages List of Fields by Tables Schema | Table | Column | Type | Size | Default | Not Null | Primary K ey | Foreign Key -----------+----------+---------+----------------+-------+---------+----------+------------+------------hcat_local | messages | id | int | 8 | | f | f | hcat_local | messages | userid | varchar(65000) | 65000 | | f | f | hcat_local | messages | "time" | varchar(65000) | 65000 | | f | f | hcat_local | messages | message | varchar(65000) | 65000 | | f | f | (4 rows) Client Connection Management Functions This section contains client connection management functions specific to HP Vertica. SET_LOAD_BALANCE_POLICY Sets how native connection load balancing chooses a host to handle a client connection. See About Native Connection Load Balancing in the Administrator's Guide for more information. Syntax SET_LOAD_BALANCE_POLICY('policy') Parameters policy The name of the load balancing policy to use. Can be one of the following: l NONE: Disables native connection load balancing. This is the default setting. l ROUNDROBIN: Chooses the next host from a circular list of currently up hosts in the database (i.e. node #1, node #2, node #3, etc. until it wraps back to node #1 again). Each host in the cluster maintains its own pointer to the next host in the circular list, rather than there being a single cluster-wide state. l RANDOM: Chooses a host at random from the list of currently up hosts in the cluster. HP Vertica Analytic Database (7.0.x) Page 715 of 1539 SQL Reference Manual SQL Functions Notes Even if the load balancing policy is set to something other than NONE on the server, clients must indicate they want their connections to be load balanced by setting a connection property. Permissions Can only be used by a superuser . 
Example The following example demonstrates enabling native connection load balancing on the server by setting the load balancing scheme to ROUNDROBIN: => SELECT SET_LOAD_BALANCE_POLICY('ROUNDROBIN'); SET_LOAD_BALANCE_POLICY -------------------------------------------------------------------------------Successfully changed the client initiator load balancing policy to: roundrobin (1 row) RESET_LOAD_BALANCE_POLICY Resets the counter each host in the cluster maintains to track which host it will refer a client to when the native connection load balancing scheme is set to ROUNDROBIN. Syntax RESET_LOAD_BALANCE_POLICY() Notes This function only has an effect if the current native connection load balancing scheme is ROUNDROBIN. Permissions This function can be called only by a superuser . Example The following example demonstrates calling RESET_LOAD_BALANCE_POLICY: HP Vertica Analytic Database (7.0.x) Page 716 of 1539 SQL Reference Manual SQL Functions => SELECT RESET_LOAD_BALANCE_POLICY(); RESET_LOAD_BALANCE_POLICY ------------------------------------------------------------------------Successfully reset stateful client load balance policies: "roundrobin". (1 row) Cluster Management Functions This section contains functions that manage spread deployment on large, distributed database clusters. SET_CONTROL_SET_SIZE For existing database clusters, use this function to specify the number of cluster nodes on which you want to deploy control messaging (spread). The SET_CONTROL_SET_SIZE() function works the same as the install_vertica --large cluster option. You can run SET_CONTROL_SET_SIZE()after the database cluster is already defined, but before you call this function, the database must be up. Note: You use this function with other cluster management functions. For details, see Defining and Realigning Control Nodes on an Existing Cluster in the Administrator's Guide. Syntax SET_CONTROL_SET_SIZE(integer) Parameters integer Specifies the number of cluster hosts from the database cluster on which spread runs. Privileges Must be a superuser. Note To see if the current spread hosts and the control designations in the Catalog match, query the V_ CATALOG.LARGE_CLUSTER_CONFIGURATION_STATUS system table. Example The following command tells HP Vertica that you want to run spread on two cluster nodes: HP Vertica Analytic Database (7.0.x) Page 717 of 1539 SQL Reference Manual SQL Functions => SELECT set_control_set_size(2); SET_CONTROL_SET_SIZE ---------------------Control size set (1 row) See Also Cluster Management Functions V_CATALOG.CLUSTER_LAYOUT Large Cluster in the Administrator's Guide REALIGN_CONTROL_NODES Chooses control nodes (spread hosts) from all cluster nodes and assigns the rest of the nodes in the cluster to a control node. Calling this function respects existing fault groups, which you can view by querying the V_CATALOG.CLUSTER_LAYOUT system table. This view also lets you see the proposed new layout for nodes in the cluster. Note: You use this function with other cluster management functions. For details, see Defining and Realigning Control Nodes on an Existing Cluster in the Administrator's Guide. Syntax REALIGN_CONTROL_NODES() Privileges Must be a superuser. 
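Before you realign, you can preview the current and proposed control-node layout by querying the V_CATALOG.CLUSTER_LAYOUT system table mentioned above. This is only a quick orientation query; in practice you would select just the columns you need:
=> SELECT * FROM v_catalog.cluster_layout;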
Example The following command chooses control nodes from all cluster nodes and assigns the rest of the nodes in the cluster to a control node: => SELECT realign_control_nodes(); See Also Cluster Management Functions V_CATALOG.CLUSTER_LAYOUT HP Vertica Analytic Database (7.0.x) Page 718 of 1539 SQL Reference Manual SQL Functions Large Cluster in the Administrator's Guide RELOAD_SPREAD Calling this function with the required true argument updates cluster changes (such as new or realigned control nodes spread hosts or fault groups or new/dropped cluster nodes), to the catalog's spread configuration file. Important: This function is often used in a multi-step process for large and elastic cluster arrangements. Calling RELOAD_SPREAD(true) might require that you restart the database, which you do using the Administration Tools. You must then rebalance the cluster for fault tolerance to be realized. See Defining and Realigning Control Nodes in the Administrator's Guide for more information. Syntax RELOAD_SPREAD(true) Parameters true Updates cluster changes related to control message responsibilities to the spread configuration file. Privileges Must be a superuser. Example The following command updates the cluster with changes to control messaging: => SELECT reload_spread(true); reload_spread --------------reloaded (1 row) See Also Cluster Management Functions REBALANCE_CLUSTER V_CATALOG.CLUSTER_LAYOUT HP Vertica Analytic Database (7.0.x) Page 719 of 1539 SQL Reference Manual SQL Functions Large Cluster in the Administrator's Guide REBALANCE_CLUSTER Call this function to begin rebalancing data in the cluster synchronously. A rebalance operation performs the following tasks: l Distributes data based on user-defined fault groups, if specified, or based on large cluster automatic fault groups l Redistributes the database projections' data across all nodes l Refreshes projections l Sets the Ancient History Mark l Drops projections that are no longer needed When to rebalance the cluster Rebalancing is useful (or necessary) after you: l Mark one or more nodes as ephemeral in preparation of removing them from the cluster l Add one or more nodes to the cluster so HP Vertica can populate the empty nodes with data l Remove one or more nodes from the cluster so HP Vertica can redistribute the data among the remaining nodes l Change the scaling factor of an elastic cluster, which determines the number of storage containers used to store a projection across the database l Set the control node size or realign control nodes on a large cluster layout l Specify more than 120 nodes in your initial HP Vertica cluster configuration l Add nodes to or remove nodes from a fault group Because this function runs the rebalance task synchronously, it does not return until the data has been rebalanced. Closing or dropping the session cancels the rebalance task. Important: On large cluster arrangements, you typically use this function in a flow, described Defining and Realigning Control Nodes in the Administrator's Guide. After you change the number and distribution of control nodes (spread hosts), you must run REBALANCE_CLUSTER() for fault tolerance to be realized. HP Vertica Analytic Database (7.0.x) Page 720 of 1539 SQL Reference Manual SQL Functions Syntax REBALANCE_CLUSTER() Privileges Must be a superuser. Example The following command rebalances data across the cluster. 
=> SELECT REBALANCE_CLUSTER(); REBALANCE_CLUSTER ------------------REBALANCED (1 row) See Also START_REBALANCE_CLUSTER CANCEL_REBALANCE_CLUSTER Rebalancing Data Across Nodes in the Administrator's Guide HP Vertica Analytic Database (7.0.x) Page 721 of 1539 SQL Reference Manual SQL Functions Cluster Scaling Functions This section contains functions that control how the cluster organizes data for rebalancing. CANCEL_REBALANCE_CLUSTER Stops any rebalance task currently in progress. Syntax CANCEL_REBALANCE_CLUSTER() Privileges Must be a superuser. Example => SELECT CANCEL_REBALANCE_CLUSTER(); CANCEL_REBALANCE_CLUSTER -------------------------CANCELED (1 row) See Also l START_REBALANCE_CLUSTER l REBALANCE_CLUSTER DISABLE_ELASTIC_CLUSTER Disables elastic cluster scaling, which prevents HP Vertica from bundling data into chunks that are easily transportable to other nodes when performing cluster resizing. The main reason to disable elastic clustering is if you find that the slightly unequal data distribution in your cluster caused by grouping data into discrete blocks results in performance issues. Syntax DISABLE_ELASTIC_CLUSTER() HP Vertica Analytic Database (7.0.x) Page 722 of 1539 SQL Reference Manual SQL Functions Privileges Must be a superuser. Example => SELECT DISABLE_ELASTIC_CLUSTER(); DISABLE_ELASTIC_CLUSTER ------------------------DISABLED (1 row) See Also l ENABLE_ELASTIC_CLUSTER DISABLE_LOCAL_SEGMENTS Disable local data segmentation, which breaks projections segments on nodes into containers that can be easily moved to other nodes. See Local Data Segmentation in the Administrator's Guide for details. Syntax DISABLE_LOCAL_SEGMENTS() Privileges Must be a superuser. Example => SELECT DISABLE_LOCAL_SEGMENTS(); DISABLE_LOCAL_SEGMENTS -----------------------DISABLED (1 row) ENABLE_ELASTIC_CLUSTER Enables elastic cluster scaling, which makes enlarging or reducing the size of your database cluster more efficient by segmenting a node's data into chunks that can be easily moved to other hosts. HP Vertica Analytic Database (7.0.x) Page 723 of 1539 SQL Reference Manual SQL Functions Note: Databases created using HP Vertica Version 5.0 and later have elastic cluster enabled by default. You need to use this function on databases created before version 5.0 in order for them to use the elastic clustering feature. Syntax ENABLE_ELASTIC_CLUSTER() Privileges Must be a superuser. Example => SELECT ENABLE_ELASTIC_CLUSTER(); ENABLE_ELASTIC_CLUSTER -----------------------ENABLED (1 row) See Also l DISABLE_ELASTIC_CLUSTER ENABLE_LOCAL_SEGMENTS Enables local storage segmentation, which breaks projections segments on nodes into containers that can be easily moved to other nodes. See Local Data Segmentation in the Administrator's Guide for more information. Syntax ENABLE_LOCAL_SEGMENTS() Privileges Must be a superuser. HP Vertica Analytic Database (7.0.x) Page 724 of 1539 SQL Reference Manual SQL Functions Example => SELECT ENABLE_LOCAL_SEGMENTS(); ENABLE_LOCAL_SEGMENTS ----------------------ENABLED (1 row) REBALANCE_CLUSTER Call this function to begin rebalancing data in the cluster synchronously. 
A rebalance operation performs the following tasks: l Distributes data based on user-defined fault groups, if specified, or based on large cluster automatic fault groups l Redistributes the database projections' data across all nodes l Refreshes projections l Sets the Ancient History Mark l Drops projections that are no longer needed When to rebalance the cluster Rebalancing is useful (or necessary) after you: l Mark one or more nodes as ephemeral in preparation of removing them from the cluster l Add one or more nodes to the cluster so HP Vertica can populate the empty nodes with data l Remove one or more nodes from the cluster so HP Vertica can redistribute the data among the remaining nodes l Change the scaling factor of an elastic cluster, which determines the number of storage containers used to store a projection across the database l Set the control node size or realign control nodes on a large cluster layout l Specify more than 120 nodes in your initial HP Vertica cluster configuration l Add nodes to or remove nodes from a fault group Because this function runs the rebalance task synchronously, it does not return until the data has been rebalanced. Closing or dropping the session cancels the rebalance task. HP Vertica Analytic Database (7.0.x) Page 725 of 1539 SQL Reference Manual SQL Functions Important: On large cluster arrangements, you typically use this function in a flow, described Defining and Realigning Control Nodes in the Administrator's Guide. After you change the number and distribution of control nodes (spread hosts), you must run REBALANCE_CLUSTER() for fault tolerance to be realized. Syntax REBALANCE_CLUSTER() Privileges Must be a superuser. Example The following command rebalances data across the cluster. => SELECT REBALANCE_CLUSTER(); REBALANCE_CLUSTER ------------------REBALANCED (1 row) See Also START_REBALANCE_CLUSTER CANCEL_REBALANCE_CLUSTER Rebalancing Data Across Nodes in the Administrator's Guide SET_SCALING_FACTOR Sets the scaling factor that determines the size of the storage containers used when rebalancing the database and when using local data segmentation is enabled. See Cluster Scaling for details. Syntax SET_SCALING_FACTOR(factor) Parameters factor An integer value between 1 and 32. HP Vertica uses this value to calculate the number of storage containers each projection is broken into when rebalancing or when local data segmentation is enabled. HP Vertica Analytic Database (7.0.x) Page 726 of 1539 SQL Reference Manual SQL Functions Note: Setting the scaling factor value too high can cause nodes to create too many small container files, greatly reducing efficiency and potentially causing a "Too many ROS containers" error (also known as "ROS pushback"). The scaling factor should be set high enough so that rebalance can transfer local segments to satisfy the skew threshold, but small enough that the number of storage containers does not exceed ROS pushback. The number of storage containers should be greater than or equal to the number of partitions multiplied by the number local of segments (# storage containers >= # partitions * # local segments). Privileges Must be a superuser. 
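As a rough illustration of the sizing rule in the note above (the counts here are assumed values chosen only to show the arithmetic, not taken from any particular database), a table with 20 partitions on a node that uses 4 local segments needs at least 80 storage containers for its projection data on that node:
=> SELECT 20 * 4 AS min_storage_containers;  -- # partitions * # local segments
 min_storage_containers
------------------------
                     80
(1 row)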
Example => SELECT SET_SCALING_FACTOR(12); SET_SCALING_FACTOR -------------------SET (1 row) START_REBALANCE_CLUSTER A rebalance operation performs the following tasks: l Distributes data based on user-defined fault groups, if specified, or based on large cluster automatic fault groups l Redistributes the database projections' data across all nodes l Refreshes projections l Sets the Ancient History Mark l Drops projections that are no longer needed When to rebalance the cluster Rebalancing is useful (or necessary) after you: l Mark one or more nodes as ephemeral in preparation of removing them from the cluster l Add one or more nodes to the cluster so HP Vertica can populate the empty nodes with data l Remove one or more nodes from the cluster so HP Vertica can redistribute the data among the remaining nodes HP Vertica Analytic Database (7.0.x) Page 727 of 1539 SQL Reference Manual SQL Functions l Change the scaling factor of an elastic cluster, which determines the number of storage containers used to store a projection across the database l Set the control node size or realign control nodes on a large cluster layout l Specify more than 120 nodes in your initial HP Vertica cluster configuration l Add nodes to or remove nodes from a fault group Asynchronously starts a data rebalance task. Since this function starts the rebalance task in the background, it returns immediately after the task has started. Since it is a background task, rebalancing will continue even if the session that started it is closed. It even continues after a cluster recovery if the database shuts down while it is in progress. The only way to stop the task is by the CANCEL_REBALANCE_CLUSTER function. Syntax START_REBALANCE_CLUSTER() Privileges Must be a superuser. Example => SELECT START_REBALANCE_CLUSTER(); START_REBALANCE_CLUSTER ------------------------REBALANCING (1 row) See Also l Rebalancing Data Across Nodes l CANCEL_REBALANCE_CLUSTER l REBALANCE_CLUSTER HP Vertica Analytic Database (7.0.x) Page 728 of 1539 SQL Reference Manual SQL Functions Constraint Management Functions This section contains constraint management functions specific to HP Vertica. See also SQL system table V_CATALOG.TABLE_CONSTRAINTS ANALYZE_CONSTRAINTS Analyzes and reports on constraint violations within the current schema search path, or external to that path if you specify a database name (noted in the syntax statement and parameter table). You can check for constraint violations by passing arguments to the function as follows: l An empty argument (' '), which returns violations on all tables within the current schema l One argument, referencing a table l Two arguments, referencing a table name and a column or list of columns Syntax ANALYZE_CONSTRAINTS [ ( '' ) ... | ( '[[db-name.]schema.]table [.column_name]' ) ... | ( '[[db-name.]schema.]table' , 'column' ) ] Parameters ('') Analyzes and reports on all tables within the current schema search path. [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table Analyzes and reports on all constraints referring to the specified table. 
column Analyzes and reports on all constraints referring to the specified table that contains the column. HP Vertica Analytic Database (7.0.x) Page 729 of 1539 SQL Reference Manual SQL Functions Privileges l SELECT privilege on table l USAGE privilege on schema Notes ANALYZE_CONSTRAINTS() performs a lock in the same way that SELECT * FROM t1 holds a lock on table t1. See LOCKS for additional information. Detecting Constraint Violations During a Load Process HP Vertica checks for constraint violations when queries are run, not when data is loaded. To detect constraint violations as part of the load process, use a COPY statement with the NO COMMIT option. By loading data without committing it, you can run a post-load check of your data using the ANALYZE_CONSTRAINTS function. If the function finds constraint violations, you can roll back the load because you have not committed it. If ANALYZE_CONSTRAINTS finds violations, such as when you insert a duplicate value into a primary key, you can correct errors using the following functions. Effects last until the end of the session only: l SELECT DISABLE_DUPLICATE_KEY_ERROR l SELECT REENABLE_DUPLICATE_KEY_ERROR Return Values ANALYZE_CONSTRAINTS returns results in a structured set (see table below) that lists the schema name, table name, column name, constraint name, constraint type, and the column values that caused the violation. If the result set is empty, then no constraint violations exist; for example: > SELECT ANALYZE_CONSTRAINTS ('public.product_dimension', 'product_key'); Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Valu es -------------+------------+--------------+-----------------+-----------------+-------------(0 rows) The following result set shows a primary key violation, along with the value that caused the violation ('10'): => SELECT ANALYZE_CONSTRAINTS (''); HP Vertica Analytic Database (7.0.x) Page 730 of 1539 SQL Reference Manual SQL Functions Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Valu es -------------+------------+--------------+-----------------+-----------------+-------------store t1 c1 pk_t1 PRIMARY ('10') (1 row) The result set columns are described in further detail in the following table: Column Name Data Type Description Schema Name VARCHAR The name of the schema. Table Name VARCHAR The name of the table, if specified. Column Names VARCHAR Names of columns containing constraints. Multiple columns are in a comma-separated list: store_key, store_key, date_key, Constraint Name VARCHAR The given name of the primary key, foreign key, unique, or not null constraint, if specified. Constraint Type VARCHAR Identified by one of the following strings: 'PRIMARY KEY', 'FOREIGN KEY', 'UNIQUE', or 'NOT NULL'. Column Values VARCHAR Value of the constraint column, in the same order in which Column Names contains the value of that column in the violating row. When interpreted as SQL, the value of this column forms a list of values of the same type as the columns in Column Names; for example: ('1'), ('1', 'z') Understanding Function Failures If ANALYZE_CONSTRAINTS() fails, HP Vertica returns an error identifying the failure condition, such as if there are insufficient resources for the database to perform constraint checks. 
If you specify the wrong table, the system returns an error message: > SELECT ANALYZE_CONSTRAINTS('abc'); ERROR 2069: 'abc' is not a table in the current search_path If you run the function on a table that has no constraints declared (even if duplicates are present), the system returns an error message: > SELECT ANALYZE_CONSTRAINTS('source'); ERROR 4072: No constraints defined HP Vertica Analytic Database (7.0.x) Page 731 of 1539 SQL Reference Manual SQL Functions If you run the function with incorrect syntax, the system returns an error message with a hint; for example, if you run one of the following: l ANALYZE ALL CONSTRAINT; l ANALYZE CONSTRAINT abc; The system returns an informative error with hint: ERROR: ANALYZE CONSTRAINT is not supported. HINT: You may consider using analyze_constraints(). If you run ANALYZE_CONSTRAINTS from a non-default locale, the function returns an error with a hint: > \locale LENINFO 2567: Canonical locale: 'en' Standard collation: 'LEN' English > SELECT ANALYZE_CONSTRAINTS('t1'); ERROR: ANALYZE_CONSTRAINTS is currently not supported in non-default locales HINT: Set the locale in this session to en_US@collation=binary using the command "\locale en_US@collation=binary" Examples Given the following inputs, HP Vertica returns one row, indicating one violation, because the same primary key value (10) was inserted into table t1 twice: CREATE TABLE t1(c1 INT); ALTER TABLE t1 ADD CONSTRAINT pk_t1 PRIMARY KEY (c1); CREATE PROJECTION t1_p (c1) AS SELECT * FROM t1 UNSEGMENTED ALL NODES; INSERT INTO t1 values (10); INSERT INTO t1 values (10); --Duplicate primary key value \x Expanded display is on. SELECT ANALYZE_CONSTRAINTS('t1'); -[ RECORD 1 ]---+-------Schema Name | public Table Name | t1 Column Names | c1 Constraint Name | pk_t1 Constraint Type | PRIMARY Column Values | ('10') If the second INSERT statement above had contained any different value, the result would have been 0 rows (no violations). HP Vertica Analytic Database (7.0.x) Page 732 of 1539 SQL Reference Manual SQL Functions In the following example, create a table that contains three integer columns, one a unique key and one a primary key: CREATE TABLE table_1( a INTEGER, b_UK INTEGER UNIQUE, c_PK INTEGER PRIMARY KEY ); Issue a command that refers to a nonexistent table and column: SELECT ANALYZE_CONSTRAINTS('a_BB'); ERROR: 'a_BB' is not a table name in the current search path Issue a command that refers to a nonexistent column: SELECT ANALYZE_CONSTRAINTS('table_1','x'); ERROR 41614: Nonexistent columns: 'x ' Insert some values into table table_1 and commit the changes: INSERT INTO table_1 values (1, 1, 1); COMMIT; Run ANALYZE_CONSTRAINTS on table table_1. No constraint violations are reported: SELECT ANALYZE_CONSTRAINTS('table_1'); (No rows) Insert duplicate unique and primary key values and run ANALYZE_CONSTRAINTS on table table_1 again. 
The system shows two violations: one against the primary key and one against the unique key: INSERT INTO table_1 VALUES (1, 1, 1); COMMIT; SELECT ANALYZE_CONSTRAINTS('table_1'); -[ RECORD 1 ]---+---------Schema Name | public Table Name | table_1 Column Names | b_UK Constraint Name | C_UNIQUE Constraint Type | UNIQUE Column Values | ('1') -[ RECORD 2 ]---+---------Schema Name | public Table Name | table_1 Column Names | c_PK Constraint Name | C_PRIMARY Constraint Type | PRIMARY HP Vertica Analytic Database (7.0.x) Page 733 of 1539 SQL Reference Manual SQL Functions Column Values | ('1') The following command looks for constraint validations on only the unique key in the table table_1, qualified with its schema name: => SELECT ANALYZE_CONSTRAINTS('public.table_1', 'b_UK'); -[ RECORD 1 ]---+--------Schema Name | public Table Name | table_1 Column Names | b_UK Constraint Name | C_UNIQUE Constraint Type | UNIQUE Column Values | ('1') (1 row) The following example shows that you can specify the same column more than once; ANALYZE_ CONSTRAINTS, however, returns the violation only once: SELECT ANALYZE_CONSTRAINTS('table_1', 'c_PK, C_PK'); -[ RECORD 1 ]---+---------Schema Name | public Table Name | table_1 Column Names | c_PK Constraint Name | C_PRIMARY Constraint Type | PRIMARY Column Values | ('1') The following example creates a new table, table_2, and inserts a foreign key and different (character) data types: CREATE TABLE table_2 ( x VARCHAR(3), y_PK VARCHAR(4), z_FK INTEGER REFERENCES table_1(c_PK)); Alter the table to create a multicolumn unique key and multicolumn foreign key and create superprojections: ALTER TABLE table_2 ADD CONSTRAINT table_2_multiuk PRIMARY KEY (x, y_PK); WARNING 2623: Column "x" definition changed to NOT NULL WARNING 2623: Column "y_PK" definition changed to NOT NULL The following command inserts a missing foreign key (0) into table dim_1 and commits the changes: INSERT INTO table_2 VALUES ('r1', 'Xpk1', 0); COMMIT; HP Vertica Analytic Database (7.0.x) Page 734 of 1539 SQL Reference Manual SQL Functions Checking for constraints on the table table_2 in the public schema detects a foreign key violation: => SELECT ANALYZE_CONSTRAINTS('public.table_2'); -[ RECORD 1 ]---+---------Schema Name | public Table Name | table_2 Column Names | z_FK Constraint Name | C_FOREIGN Constraint Type | FOREIGN Column Values | ('0') Now add a duplicate value into the unique key and commit the changes: INSERT INTO table_2 VALUES ('r2', 'Xpk1', 1); INSERT INTO table_2 VALUES ('r1', 'Xpk1', 1); COMMIT; Checking for constraint violations on table table_2 detects the duplicate unique key error: SELECT ANALYZE_CONSTRAINTS('table_2'); -[ RECORD 1 ]---+---------------Schema Name | public Table Name | table_2 Column Names | z_FK Constraint Name | C_FOREIGN Constraint Type | FOREIGN Column Values | ('0') -[ RECORD 2 ]---+---------------Schema Name | public Table Name | table_2 Column Names | x, y_PK Constraint Name | table_2_multiuk Constraint Type | PRIMARY Column Values | ('r1', 'Xpk1') Create a table with multicolumn foreign key and create the superprojections: CREATE TABLE table_3( z_fk1 VARCHAR(3), z_fk2 VARCHAR(4)); ALTER TABLE table_3 ADD CONSTRAINT table_3_multifk FOREIGN KEY (z_fk1, z_fk2) REFERENCES table_2(x, y_PK); Insert a foreign key that matches a foreign key in table table_2 and commit the changes: INSERT INTO table_3 VALUES ('r1', 'Xpk1'); COMMIT; Checking for constraints on table table_3 detects no violations: HP Vertica Analytic Database (7.0.x) Page 735 of 1539 SQL Reference 
Manual SQL Functions
SELECT ANALYZE_CONSTRAINTS('table_3');
(No rows)
Add a value that does not match and commit the change:
INSERT INTO table_3 VALUES ('r1', 'NONE');
COMMIT;
Checking for constraints on table table_3 detects a foreign key violation:
SELECT ANALYZE_CONSTRAINTS('table_3');
-[ RECORD 1 ]---+----------------
Schema Name | public
Table Name | table_3
Column Names | z_fk1, z_fk2
Constraint Name | table_3_multifk
Constraint Type | FOREIGN
Column Values | ('r1', 'NONE')
Analyze all constraints on all tables:
SELECT ANALYZE_CONSTRAINTS('');
-[ RECORD 1 ]---+----------------
Schema Name | public
Table Name | table_3
Column Names | z_fk1, z_fk2
Constraint Name | table_3_multifk
Constraint Type | FOREIGN
Column Values | ('r1', 'NONE')
-[ RECORD 2 ]---+----------------
Schema Name | public
Table Name | table_2
Column Names | x, y_PK
Constraint Name | table_2_multiuk
Constraint Type | PRIMARY
Column Values | ('r1', 'Xpk1')
-[ RECORD 3 ]---+----------------
Schema Name | public
Table Name | table_2
Column Names | z_FK
Constraint Name | C_FOREIGN
Constraint Type | FOREIGN
Column Values | ('0')
-[ RECORD 4 ]---+----------------
Schema Name | public
Table Name | t1
Column Names | c1
Constraint Name | pk_t1
Constraint Type | PRIMARY
Column Values | ('10')
-[ RECORD 5 ]---+----------------
Schema Name | public
Table Name | table_1
Column Names | b_UK
Constraint Name | C_UNIQUE
Constraint Type | UNIQUE
Column Values | ('1')
-[ RECORD 6 ]---+----------------
Schema Name | public
Table Name | table_1
Column Names | c_PK
Constraint Name | C_PRIMARY
Constraint Type | PRIMARY
Column Values | ('1')
-[ RECORD 7 ]---+----------------
Schema Name | public
Table Name | target
Column Names | a
Constraint Name | C_PRIMARY
Constraint Type | PRIMARY
Column Values | ('1')
(7 rows)
To quickly clean up your database, issue the following commands:
DROP TABLE table_1 CASCADE;
DROP TABLE table_2 CASCADE;
DROP TABLE table_3 CASCADE;
To learn how to remove violating rows, see the DISABLE_DUPLICATE_KEY_ERROR function.
ANALYZE_CORRELATIONS
Analyzes the specified tables for columns that are strongly correlated. In addition, ANALYZE_CORRELATIONS collects statistics.
For example, city name and state name columns are strongly correlated because the city name usually, but perhaps not always, identifies the state name. The city of Conshohocken is uniquely associated with Pennsylvania, whereas the city of Boston exists in Georgia, Indiana, Kentucky, New York, Virginia, and Massachusetts. In this case, city name is strongly correlated with state name.
For Database Designer to take advantage of these correlations, run Database Designer programmatically. Use DESIGNER_SET_ANALYZE_CORRELATIONS_MODE to specify that Database Designer should consider existing column correlations. Make sure to specify that Database Designer not analyze statistics so that Database Designer does not override the existing statistics.
Behavior Type Immutable
Syntax ANALYZE_CORRELATIONS ( '[database_name.][schema_name.]table_name', [recalculate] )
Parameters
[database_name.][schema_name.]table_name Specifies the table(s) for which to analyze correlated columns, type VARCHAR. You can optionally qualify the table name with its schema and database.
recalculate Specifies whether to analyze the correlated columns even if they have been analyzed before, type BOOLEAN. Default: 'false'.
Permissions l To run ANALYZE_CORRELATIONS on a table, you must be a superuser, or a user w USAGE privilege on the design schema. Notes l Column correlation analysis typically needs to be done only once. l Currently, ANALYZE_CORRELATIONS can analyze only pairwise single-column correlations. l Projections do not change based on the analysis results. To implement the results of ANALYZE_CORRELATIONS, you must run Database Designer. Example In the following example, ANALYZE_CORRELATIONS analyzes column correlations for all tables in the public schema,even if they currently exist. The correlations that ANALYZE_ CORRELATIONS finds are saved so that Database Designer can use them the next time it runs on the VMart database: => SELECT ANALYZE_CORRELATIONS ( 'public.*', 'true'); ANALYZE_CORRELATIONS ---------------------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 738 of 1539 SQL Reference Manual SQL Functions See Also l DESIGNER_SET_ANALYZE_CORRELATIONS_MODE DISABLE_DUPLICATE_KEY_ERROR Disables error messaging when HP Vertica finds duplicate PRIMARY KEY/UNIQUE KEY values at run time. Queries execute as though no constraints are defined on the schema. Effects are session scoped. Caution: When called, DISABLE_DUPLICATE_KEY_ERROR() suppresses data integrity checking and can lead to incorrect query results. Use this function only after you insert duplicate primary keys into a dimension table in the presence of a pre-join projection. Then correct the violations and turn integrity checking back on with REENABLE_DUPLICATE_ KEY_ERROR(). Syntax DISABLE_DUPLICATE_KEY_ERROR(); Privileges Must be a superuser. Examples The following series of commands create a table named dim and the corresponding projection: CREATE TABLE dim (pk INTEGER PRIMARY KEY, x INTEGER); CREATE PROJECTION dim_p (pk, x) AS SELECT * FROM dim ORDER BY x UNSEGMENTED ALL NODES; The next two statements create a table named fact and the pre-join projection that joins fact to dim. CREATE TABLE fact(fk INTEGER REFERENCES dim(pk)); CREATE PROJECTION prejoin_p (fk, pk, x) AS SELECT * FROM fact, dim WHERE pk=fk ORDER BY x ; The following statements load values into table dim. The last statement inserts a duplicate primary key value of 1: INSERT INTO dim values (1,1);INSERT INTO dim values (2,2); INSERT INTO dim values (1,2); --Constraint violation HP Vertica Analytic Database (7.0.x) Page 739 of 1539 SQL Reference Manual SQL Functions COMMIT; Table dim now contains duplicate primary key values, but you cannot delete the violating row because of the presence of the pre-join projection. Any attempt to delete the record results in the following error message: ROLLBACK: Duplicate primary key detected in FK-PK join Hash-Join (x dim_p), value 1 In order to remove the constraint violation (pk=1), use the following sequence of commands, which puts the database back into the state just before the duplicate primary key was added. To remove the violation: 1. Save the original dim rows that match the duplicated primary key: CREATE TEMP TABLE dim_temp(pk integer, x integer); INSERT INTO dim_temp SELECT * FROM dim WHERE pk=1 AND x=1; -- original dim row 2. Temporarily disable error messaging on duplicate constraint values: SELECT DISABLE_DUPLICATE_KEY_ERROR(); Caution: Remember that running the DISABLE_DUPLICATE_KEY_ERROR function suppresses the enforcement of data integrity checking. 3. Remove the original row that contains duplicate values: DELETE FROM dim WHERE pk=1; 4. 
Allow the database to resume data integrity checking: SELECT REENABLE_DUPLICATE_KEY_ERROR(); 5. Reinsert the original values back into the dimension table: INSERT INTO dim SELECT * from dim_temp;COMMIT; 6. Validate your dimension and fact tables. If you receive the following error message, it means that the duplicate records you want to delete are not identical. That is, the records contain values that differ in at least one column that is not a primary key; for example, (1,1) and (1,2). HP Vertica Analytic Database (7.0.x) Page 740 of 1539 SQL Reference Manual SQL Functions ROLLBACK: Delete: could not find a data row to delete (data integrity violation?) The difference between this message and the rollback message in the previous example is that a fact row contains a foreign key that matches the duplicated primary key, which has been inserted. A row with values from the fact and dimension table is now in the pre-join projection. In order for the DELETE statement (Step 3 in the following example) to complete successfully, extra predicates are required to identify the original dimension table values (the values that are in the pre-join). This example is nearly identical to the previous example, except that an additional INSERT statement joins the fact table to the dimension table by a primary key value of 1: INSERT INTO dim values (1,1);INSERT INTO dim values (2,2); INSERT INTO fact values (1); -- New insert statement joins fact with dim on primar y key value=1 INSERT INTO dim values (1,2); -- Duplicate primary key value=1 COMMIT; To remove the violation: 1. Save the original dim and fact rows that match the duplicated primary key: CREATE TEMP TABLE dim_temp(pk integer, x integer);CREATE TEMP TABLE fact_temp(fk inte ger); INSERT INTO dim_temp SELECT * FROM dim WHERE pk=1 AND x=1; -- original dim row INSERT INTO fact_temp SELECT * FROM fact WHERE fk=1; 2. Temporarily suppresses the enforcement of data integrity checking: SELECT DISABLE_DUPLICATE_KEY_ERROR(); 3. Remove the duplicate primary keys. These steps also implicitly remove all fact rows with the matching foreign key. 4. Remove the original row that contains duplicate values: DELETE FROM dim WHERE pk=1 AND x=1; Note: The extra predicate (x=1) specifies removal of the original (1,1) row, rather than the newly inserted (1,2) values that caused the violation. 5. Remove all remaining rows: HP Vertica Analytic Database (7.0.x) Page 741 of 1539 SQL Reference Manual SQL Functions DELETE FROM dim WHERE pk=1; 6. Reenable integrity checking: SELECT REENABLE_DUPLICATE_KEY_ERROR(); 7. Reinsert the original values back into the fact and dimension table: INSERT INTO dim SELECT * from dim_temp; INSERT INTO fact SELECT * from fact_temp; COMMIT; 8. Validate your dimension and fact tables. See Also l ANALYZE_CONSTRAINTS l REENABLE_DUPLICATE_KEY_ERROR LAST_INSERT_ID Returns the last value of a column whose value is automatically incremented through the AUTO_ INCREMENT or IDENTITY Column-Constraint. If multiple sessions concurrently load the same table, the returned value is the last value generated for an AUTO_INCREMENT column by an insert in that session. Behavior Type Volatile Syntax LAST_INSERT_ID() Privileges l Table owner l USAGE privileges on schema HP Vertica Analytic Database (7.0.x) Page 742 of 1539 SQL Reference Manual SQL Functions Notes l This function works only with AUTO_INCREMENT and IDENTITY columns. See columnconstraints for the CREATE TABLE statement. 
l LAST_INSERT_ID does not work with sequence generators created through the CREATE SEQUENCE statement. Examples Create a sample table called customer4. => CREATE TABLE customer4( ID IDENTITY(2,2), lname VARCHAR(25), fname VARCHAR(25), membership_card INTEGER ); => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Gupta', 'Saleem', 475987); Notice that the IDENTITY column has a seed of 2, which specifies the value for the first row loaded into the table, and an increment of 2, which specifies the value that is added to the IDENTITY value of the previous row. Query the table you just created: => SELECT * FROM customer4; ID | lname | fname | membership_card ----+-------+--------+----------------2 | Gupta | Saleem | 475987 (1 row) Insert some additional values: => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Lee', 'Chen', 598742); Call the LAST_INSERT_ID function: => SELECT LAST_INSERT_ID(); LAST_INSERT_ID ---------------4 (1 row) Query the table again: HP Vertica Analytic Database (7.0.x) Page 743 of 1539 SQL Reference Manual SQL Functions => SELECT * FROM customer4; ID | lname | fname | membership_card ----+-------+--------+----------------2 | Gupta | Saleem | 475987 4 | Lee | Chen | 598742 (2 rows) Add another row: => INSERT INTO customer4(lname, fname, membership_card) VALUES ('Davis', 'Bill', 469543); Call the LAST_INSERT_ID function: => SELECT LAST_INSERT_ID(); LAST_INSERT_ID ---------------6 (1 row) Query the table again: => SELECT * FROM customer4; ID | lname | fname ----+-------+--------+----------------2 | Gupta | Saleem | 475987 4 | Lee | Chen | 598742 6 | Davis | Bill | 469543 (3 rows) | membership_card See Also l ALTER SEQUENCE l CREATE SEQUENCE l DROP SEQUENCE l SEQUENCES l Using Named Sequences l Sequence Privileges REENABLE_DUPLICATE_KEY_ERROR Restores the default behavior of error reporting by reversing the effects of DISABLE_DUPLICATE_ KEY_ERROR. Effects are session scoped. HP Vertica Analytic Database (7.0.x) Page 744 of 1539 SQL Reference Manual SQL Functions Syntax REENABLE_DUPLICATE_KEY_ERROR(); Privileges Must be a superuser. Examples For examples and usage, see DISABLE_DUPLICATE_KEY_ERROR. See Also l ANALYZE_CONSTRAINTS HP Vertica Analytic Database (7.0.x) Page 745 of 1539 SQL Reference Manual SQL Functions Data Collector Functions The HP Vertica Data Collector is a utility that extends system table functionality by providing a framework for recording events. It gathers and retains monitoring information about your database cluster and makes that information available in system tables, requiring few configuration parameter tweaks, and having negligible impact on performance. Collected data is stored on disk in the DataCollector directory under the HP Vertica /catalog path. You can use the information the Data Collector retains to query the past state of system tables and extract aggregate information, as well as do the following: l See what actions users have taken l Locate performance bottlenecks l Identify potential improvements to HP Vertica configuration Data Collector works in conjunction with an advisor tool called Workload Analyzer, which intelligently monitors the performance of SQL queries and workloads and recommends tuning actions based on observations of the actual workload history. By default, Data Collector is on and retains information for all sessions. If performance issues arise, a superuser can disable DC. See Data Collector Parameters and Enabling and Disabling Data Collector in the Administrator's Guide. 
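For example, you can list the components that the Data Collector is currently tracking, with a short description of each, by querying the DATA_COLLECTOR system table (the same query is used again later in this section when setting retention policies):
=> SELECT DISTINCT component, description FROM v_monitor.data_collector ORDER BY component;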
This section describes the Data Collection control functions. Related Topics V_MONITOR.DATA_COLLECTOR Retaining monitoring information and Analyzing Workloads in the Administrator's Guide CLEAR_DATA_COLLECTOR Clears all memory and disk records on the Data Collector tables and functions and resets collection statistics in the V_MONITOR.DATA_COLLECTOR system table. A superuser can clear Data Collector data for all components or specify an individual component After you clear the Data Collector log, the information is no longer available for querying. Syntax CLEAR_DATA_COLLECTOR( [ 'component' ] ) HP Vertica Analytic Database (7.0.x) Page 746 of 1539 SQL Reference Manual SQL Functions Parameters component Clears memory and disk records for the specified component only. If you provide no argument, the function clears all Data Collector memory and disk records for all components. For the current list of component names, query the V_MONITOR.DATA_ COLLECTOR system table. Privileges Must be a superuser. Examples The following command clears memory and disk records for the ResourceAcquisitions component: => SELECT clear_data_collector('ResourceAcquisitions'); clear_data_collector ---------------------CLEAR (1 row) The following command clears data collection for all components on all nodes: => SELECT clear_data_collector(); clear_data_collector ---------------------CLEAR (1 row) See Also DATA_COLLECTOR l l DATA_COLLECTOR_HELP Returns online usage instructions about the Data Collector, the DATA_COLLECTOR system table, and the Data Collector control functions. Syntax DATA_COLLECTOR_HELP() HP Vertica Analytic Database (7.0.x) Page 747 of 1539 SQL Reference Manual SQL Functions Privileges None Returns The DATA_COLLECTOR_HELP() function returns the following information: => SELECT DATA_COLLECTOR_HELP(); ----------------------------------------------------------------------------Usage Data Collector The data collector retains history of important system activities. This data can be used as a reference of what actions have been taken by users, but it can also be used to locate performance bottlenecks, or identify potential improvements to the Vertica configuration. This data is queryable via Vertica system tables. Acccess a list of data collector components, and some statistics, by running: SELECT * FROM v_monitor.data_collector; The amount of data retained by size and time can be controlled with several functions. To just set the size amount: set_data_collector_policy( , , ); To set both the size and time amounts (the smaller one will dominate): set_data_collector_policy( , , , ); To set just the time amount: set_data_collector_time_policy( , ); To set the time amount for all tables: set_data_collector_time_policy( ); The current retention policy for a component can be queried with: get_data_collector_policy( ); Data on disk is kept in the "DataCollector" directory under the Vertica \catalog path. This directory also contains instructions on how to load the monitoring data into another Vertica database. To move the data collector logs and instructions to other storage locations, create labeled storage locations using add_location and then use: set_data_collector_storage_location( ); Additional commands can be used to configure the data collection logs. 
HP Vertica Analytic Database (7.0.x) Page 748 of 1539 SQL Reference Manual SQL Functions The log can be cleared with: clear_data_collector([ ]); The log can be synchronized with the disk storage using: flush_data_collector([ ]); See Also l DATA_COLLECTOR l TUNING_RECOMMENDATIONS l Analyzing Workloads l Retaining Monitoring Information FLUSH_DATA_COLLECTOR Waits until memory logs are moved to disk and then flushes the Data Collector, synchronizing the log with the disk storage. A superuser can flush Data Collector information for an individual component or for all components. Syntax FLUSH_DATA_COLLECTOR( [ 'component' ] ) Parameters component Flushes the specified component. If you provide no argument, the function flushes the Data Collector in full. For the current list of component names, query the V_MONITOR.DATA_ COLLECTOR system table. Privileges Must be a superuser. Examples The following command flushes the Data Collector for the ResourceAcquisitions component: => SELECT flush_data_collector('ResourceAcquisitions'); HP Vertica Analytic Database (7.0.x) Page 749 of 1539 SQL Reference Manual SQL Functions flush_data_collector ---------------------FLUSH (1 row) The following command flushes data collection for all components: => SELECT flush_data_collector(); flush_data_collector ---------------------FLUSH (1 row) See Also DATA_COLLECTOR l l GET_DATA_COLLECTOR_POLICY Retrieves a brief statement about the retention policy for the specified component. Syntax GET_DATA_COLLECTOR_POLICY( 'component' ) Parameters component Returns the retention policy for the specified component. For a current list of component names, query the V_MONITOR.DATA_ COLLECTOR system table Privileges None Example The following query returns the history of all resource acquisitions by specifying the ResourceAcquisitions component: => SELECT get_data_collector_policy('ResourceAcquisitions'); HP Vertica Analytic Database (7.0.x) Page 750 of 1539 SQL Reference Manual SQL Functions get_data_collector_policy ---------------------------------------------1000KB kept in memory, 10000KB kept on disk. (1 row) See Also DATA_COLLECTOR l l SET_DATA_COLLECTOR_POLICY Sets a size restraint (memory and disk space in kilobytes) for the specified Data Collector table on all nodes. If nodes are down, the failed nodes receive the setting when they rejoin the cluster. You can use this function to set a size restraint only, or you can include the optional interval argument to set disk capacity for both size and time in a single command. If you specify interval, HP Vertica enforces the setting that is exceeded first (size or time). Before you include a time restraint, be sure the disk size capacity is sufficiently large. If you want to specify just a time restraint, or you want to turn off a time restraint you set using this function, see SET_DATA_COLLECTOR_TIME_POLICY(). Syntax SET_DATA_COLLECTOR_POLICY('component', 'memoryKB', 'diskKB' [,'interval'] ) Parameters component Configures the retention policy for the specified component. memoryKB Specifies the memory size to retain in kilobytes. diskKB Specifies the disk size in kilobytes. interval [Default off] Takes an optional interval argument to specify how long to retain the specified component on disk. To disable a time restraint, set interval to -1. Note: Any negative input will turn off the time restraint Privileges Must be a superuser. 
HP Vertica Analytic Database (7.0.x) Page 751 of 1539 SQL Reference Manual SQL Functions Notes l Before you change a retention policy, view its current setting by calling the GET_DATA_ COLLECTOR_POLICY() function. l If you don't know the name of a component, query the V_MONITOR.DATA_COLLECTOR system table for a list; for example: => SELECT DISTINCT component, description FROM data_collector ORDER BY 1 ASC; Examples The following command returns the retention policy for the ResourceAcquisitions component: => SELECT get_data_collector_policy('ResourceAcquisitions'); get_data_collector_policy ---------------------------------------------1000KB kept in memory, 10000KB kept on disk. (1 row) This command changes the memory and disk setting for ResourceAcquisitions from its current setting of 1,000 KB memory and 10,000 KB disk space to 1500 KB and 25000 KB, respectively: => SELECT set_data_collector_policy('ResourceAcquisitions', '1500', '25000'); set_data_collector_policy --------------------------SET (1 row) This command sets the RequestsIssued component to 1500 KB memory and 11000 KB on disk, and includes a 3-minute time restraint: => SELECT set_data_collector_policy('RequestsIssued', '1500', '11000', '3 minutes'::inter val); set_data_collector_policy --------------------------SET (1 row) The following command disables the 3-minute retention policy for the RequestsIssued component: => SELECT set_data_collector_policy('RequestsIssued', '-1'); set_data_collector_policy --------------------------SET (1 row) HP Vertica Analytic Database (7.0.x) Page 752 of 1539 SQL Reference Manual SQL Functions See Also l GET_DATA_COLLECTOR_POLICY l SET_DATA_COLLECTOR_TIME_POLICY() l DATA_COLLECTOR l Retaining Monitoring Information in the Administrator's Guide SET_DATA_COLLECTOR_TIME_POLICY Sets a time capacity for individual Data Collector tables on all nodes. If nodes are down, the failed nodes receive the setting when they rejoin the cluster. If you want to configure both time and size restraints at the same time, see SET_DATA_COLLECTOR_ POLICY(). Syntax SET_DATA_COLLECTOR_TIME_POLICY( ['component',] 'interval' ) Parameters component [Optional] Configures the time retention policy for the specified component. If you omit the component argument, HP Vertica sets the specified time capacity for all Data Collector tables. interval Specifies the time restraint on disk using an INTERVAL type. To disable a time restraint, set interval to -1. Note: Any negative input turns off the time restraint Privileges Must be a superuser. Notes l Before you change a retention policy, view its current setting by calling the GET_DATA_ COLLECTOR_POLICY() function. l If you don't know the name of a component, query the V_MONITOR.DATA_COLLECTOR system table for a list. 
For example: HP Vertica Analytic Database (7.0.x) Page 753 of 1539 SQL Reference Manual SQL Functions => SELECT DISTINCT component, description FROM data_collector ORDER BY 1 ASC; Setting time interval for system tables You can also use the interval argument to query system tables the same way you query Data Collector tables; for example: set_data_collector_time_policy(' ', <'interval'>); To illustrate, the following command in the left column is equivalent to running the series of commands on the right: Run one command Instead of a series of commands SELECT set_data_collector_time_policy ('v_monitor.query_requests', '3 minutes'::interval); SELECT set_data_collector_time_policy ('RequestsIssued', '3 minutes'::interval); SELECT set_data_collector_time_policy ('RequestsCompleted', '3 minutes'::interval); SELECT set_data_collector_time_policy ('Errors', '3 minutes'::interval); SELECT set_data_collector_time_policy ('ResourceAcquisitions', '3 minutes'::interval); The SET_DATA_COLLECTOR_TIME_POLICY() function updates the time capacity for all Data Collector tables in the V_MONITOR.QUERY_REQUESTS view. The new setting overrides any previous settings for every Data Collector table in that view. Examples The following command configures the Backups component to be retained on disk for 1 day: => SELECT set_data_collector_time_policy('Backups', '1 day'::interval); set_data_collector_time_policy -------------------------------SET (1 row) This command disables the 1-day restraint for the Backups component: => SELECT set_data_collector_time_policy('Backups', '-1'); HP Vertica Analytic Database (7.0.x) Page 754 of 1539 SQL Reference Manual SQL Functions set_data_collector_time_policy -------------------------------SET (1 row) This command sets a 30-minute time capacity for all Data Collector tables in a single command: => SELECT set_data_collector_time_policy('30 minutes'::interval); set_data_collector_time_policy -------------------------------SET (1 row) To view current retention policy settings for each Data Collector table, call the GET_DATA_ COLLECTION_POLICY() function. In the next example, the time restraint is included. => SELECT get_data_collector_policy('RequestsIssued'); get_data_collector_policy ----------------------------------------------------------------------------2000KB kept in memory, 50000KB kept on disk. 2 years 3 days 15:08 hours kept on disk. (1 row) If the time policy setting is disabled, the output of GET_DATA_COLLECTION_POLICY() returns "Time based retention disabled." 2000KB kept in memory, 50000KB kept on disk. Time based retention disabled. See Also l GET_DATA_COLLECTOR_POLICY l SET_DATA_COLLECTOR_POLICY l DATA_COLLECTOR HP Vertica Analytic Database (7.0.x) Page 755 of 1539 SQL Reference Manual SQL Functions Database Designer Functions The Database Designer functions allow you to access Database Designer functionality outside the Administration Tools. Permissions To run the Database Designer functions, you must be one of the following: l Superuser l Have been granted the DBDUSER role and executed the SET ROLE DBDUSER command. Once you have been granted the DBDUSER role, the role is in effect until it is revoked. Important: When you grant the DBDUSER role, make sure to associate a resource pool with that user to manage resources during Database Designer runs. Multiple users can run Database Designer concurrently without interfering with each other or using up all the cluster resources. 
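For example, a superuser might prepare a Database Designer user along the following lines. This is only a sketch: design_pool and dbd_user are hypothetical names, and the pool size is a placeholder to adjust for your workload:
=> CREATE RESOURCE POOL design_pool MEMORYSIZE '1G';  -- pool reserved for Database Designer runs
=> GRANT DBDUSER TO dbd_user;                         -- grant the predefined DBDUSER role
=> ALTER USER dbd_user RESOURCE POOL design_pool;     -- associate the pool with the user
The user then issues SET ROLE DBDUSER; in their own session before calling the Database Designer functions described below.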
When a user runs Database Designer, either using the Administration Tools or programmatically, its execution is mostly contained by the user's resource pool, but may spill over into some system resource pools for less-intensive tasks. You can run Database Designer functions in vsql: Setup Functions This function directs Database Designer to create a new design: l DESIGNER_CREATE_DESIGN Configuration Functions The following functions allow you to specify properties of a particular design: l DESIGNER_DESIGN_PROJECTION_ENCODINGS l DESIGNER_SET_DESIGN_KSAFETY l DESIGNER_SET_OPTIMIZATION_OBJECTIVE l DESIGNER_SET_DESIGN_TYPE l DESIGNER_SET_PROPOSED_UNSEGMENTED_PROJECTIONS l DESIGNER_SET_ANALYZE_CORRELATIONS_MODE HP Vertica Analytic Database (7.0.x) Page 756 of 1539 SQL Reference Manual SQL Functions Input Functions The following functions allow you to add tables and queries to your Database Designer design: l DESIGNER_ADD_DESIGN_QUERIES l DESIGNER_ADD_DESIGN_QUERIES_FROM RESULTS l DESIGNER_ADD_DESIGN_QUERY l DESIGNER_ADD_DESIGN_TABLES Invocation Functions These functions populate the Database Designer workspace and create design and deployment scripts. You can also analyze statistics, deploy the design automatically, and drop the workspace after the deployment: l DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY l DESIGNER_WAIT_FOR_DESIGN Output Functions The following functions display information about projections and scripts that the Database Designer created: l DESIGNER_OUTPUT_ALL_DESIGN_PROJECTIONS l DESIGNER_OUTPUT_DEPLOYMENT_SCRIPT Cleanup Functions The following functions cancel any running Database Designer operation or drop a Database Designer design and all its contents: l DESIGNER_CANCEL_POPULATE_DESIGN l DESIGNER_DROP_DESIGN l DESIGNER_DROP_ALL_DESIGNS DESIGNER_ADD_DESIGN_QUERIES Reads and parses all queries in the specified file. Adds accepted queries to the design and sets the weight of each accepted query to 1. HP Vertica Analytic Database (7.0.x) Page 757 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax DESIGNER_ADD_DESIGN_QUERIES ( 'design_name', 'file_name', 'informative_output' ) Parameters design_name Name of the design to which to add the queries, type VARCHAR. file_name Absolute path to the queries file, type VARCHAR. informative_output Specifies verbose output, type BOOLEAN. Default: 'false'. If 'true', the output lists: l Number of accepted queries l Number of queries referencing non-design tables l Number of unsupported queries l Number of illegal queries l Maximum number of queries was reached, if applicable. Only valid if the optimization objective is set to INCREMENTAL. Permissions l To run DESIGNER_ADD_DESIGN_QUERIES, you must be a superuser, a user granted the DBDUSER role, or the user who created the design. l You must have READ privilege on the storage location of the queries file. l You must have sufficient privileges to execute all queries in the queries file. Notes l You must run DESIGNER_ADD_DESIGN_TABLES before you run DESIGNER_ADD_ DESIGN_QUERIES. If no tables have been added to the design, HP Vertica does not accept HP Vertica Analytic Database (7.0.x) Page 758 of 1539 SQL Reference Manual SQL Functions the design queries. l l l Database Designer rejects a query and returns an error if the query: n Contains illegal syntax. n References only external tables or system tables. n Is a DELETE or UPDATE query with one or more subqueries. n References a local temporary table or other non-design table. 
n Is an INSERT query that does not include a SELECT clause. n Is unoptimizable by the Database Designer. If the third parameter is 'true', DESIGNER_ADD_DESIGN_QUERIES returns the following information: n Number of accepted queries n Number of queries referencing non-design tables n Number of unsupported queries n Number of illegal queries n If the number of accepted queries exceeds 100, DESIGNER_ADD_DESIGN_QUERIES reports that as well. If you are running an incremental design, any queries after 100 are ignored. DESIGNER_ADD_DESIGN_QUERIES: n Populates the V_MONITOR.DESIGN_QUERIES system table. n Creates the V_MONITOR.OUTPUT_EVENT_HISTORY system table. Examples The following example adds the queries file vmart_queries.sql to the VMART_DESIGN design. This file contains nine queries. Since the third parameter is 'true', DESIGNER_ADD_DESIGN_QUERIES returns the results of adding the queries: => SELECT DESIGNER_ADD_DESIGN_QUERIES ( 'VMART_DESIGN', '/tmp/examples/vmart_queries.sql', 'true' ); ... DESIGNER_ADD_DESIGN_QUERIES ---------------------------------------------------Number of accepted queries =9 Number of queries referencing non-design tables =0 Number of unsupported queries =0 Number of illegal queries =0 (1 row) See Also l DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS l DESIGNER_ADD_DESIGN_QUERY l DESIGNER_ADD_DESIGN_TABLES DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS Parses and executes a user query and retrieves only queries that contain the following columns: l QUERY_TEXT: Text of potential design queries l QUERY_WEIGHT: Corresponding query weight values for the queries in QUERY_TEXT, a value greater than 0 but no greater than 1. If empty, Database Designer sets the weight of that query to 1. DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS parses all the retrieved queries and adds all accepted queries to the design. Behavior Type Immutable Syntax DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS ( 'design_name', 'user_query' ) Parameters design_name Name of the design to which to add the queries, type VARCHAR. user_query A valid SQL query whose results contain columns named QUERY_TEXT and QUERY_WEIGHT, type VARCHAR. Permissions l To run DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS, you must be a superuser or a user granted the DBDUSER role who created the design. l You must have sufficient privileges to execute user_query and to execute each design query retrieved from the results of user_query. Notes l You must run DESIGNER_ADD_DESIGN_TABLES before you run DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS. If no tables have been added to the design, HP Vertica does not accept the design queries. l An unlimited number of queries can be added to the design using DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS. l Database Designer rejects a query and returns an error if the query: n Contains illegal syntax. n References only external tables. n Is a DELETE or UPDATE query with one or more subqueries. n References a local temporary table or other non-design table. n Is an INSERT query that does not include a SELECT clause. n Is unoptimizable by the Database Designer. Example The following example retrieves the query_text field from the 10 most recent queries in the QUERY_REQUESTS system table and adds the accepted queries to the VMART_DESIGN design.
QUERY_REQUESTS does not contain any weight value for those queries: => SELECT DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS ( 'VMART_DESIGN', 'SELECT request AS query_text FROM query_requests ORDER BY start_timestamp DESC LIMIT 10;'); See Also l DESIGNER_ADD_DESIGN_QUERIES l DESIGNER_ADD_DESIGN_TABLES l DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS DESIGNER_ADD_DESIGN_QUERY Reads and parses the specified query, and if accepted, adds it to the design. Behavior Type Immutable Syntax DESIGNER_ADD_DESIGN_QUERY ( 'design_name', 'design_query', query_weight ) Parameters design_name Name of the Database Designer design to which you want to add the query, type VARCHAR. design_query Executable SQL query, type VARCHAR. query_weight Weight of the query, any positive real number greater than 0 and no greater than 1. Default: 1. You cannot add a weight of 0. The weight of a query indicates its relative importance so that Database Designer can prioritize the query when creating the design. Permissions l To run DESIGNER_ADD_DESIGN_QUERY, you must be a superuser or a user granted the DBDUSER role who created the design. l You must have sufficient privileges to execute the specified query. Notes l You must run DESIGNER_ADD_DESIGN_TABLES before you run DESIGNER_ADD_DESIGN_QUERY. If no tables have been added to the design, HP Vertica does not accept the design queries. l Database Designer rejects a query and returns an error if the query: n Contains illegal syntax. n References only external tables or system tables. n Is a DELETE or UPDATE query with one or more subqueries. n References a local temporary table or other non-design table. n Is an INSERT query that does not include a SELECT clause. n Is unoptimizable by the Database Designer. Examples The following example adds the specified query to the VMART_DESIGN design and assigns that query a weight of 0.5: => SELECT DESIGNER_ADD_DESIGN_QUERY ( 'VMART_DESIGN', 'SELECT customer_name, customer_type FROM vmart_design ORDER BY customer_name ASC;', 0.5 ); See Also l DESIGNER_ADD_DESIGN_QUERIES l DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS l DESIGNER_ADD_DESIGN_TABLES DESIGNER_ADD_DESIGN_TABLES Adds the specified tables to a design. Behavior Type Immutable Syntax DESIGNER_ADD_DESIGN_TABLES ( 'design_name', 'table_pattern', [ 'analyze_statistics' ] ) Parameters design_name Name of the Database Designer design, type VARCHAR. table_pattern Comma-delimited list of tables, type VARCHAR. Each entry must be one of the following: l '*' indicates to add all user tables in the current database. l 'schema.*' indicates to add all tables in the specified schema. l 'schema.table' indicates to add the specified table in the specified schema. l 'table' indicates to add the specified table in the current search path. analyze_statistics (Optional) BOOLEAN that specifies whether or not to analyze statistics for the design tables when adding them to the design. Default is 'false'. Accurate statistics help Database Designer optimize compression and query performance. Updating statistics takes time and resources. If 'true', DESIGNER_ADD_DESIGN_TABLES executes the ANALYZE_STATISTICS function. If ANALYZE_STATISTICS has previously run, set this parameter to 'false'.
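For example, a minimal sketch of the analyze_statistics behavior described above (the table name is taken from the VMart sample schema; substitute your own): if statistics were already collected with ANALYZE_STATISTICS, pass 'false' to avoid recomputing them:

=> SELECT ANALYZE_STATISTICS('store.store_orders_fact');
=> SELECT DESIGNER_ADD_DESIGN_TABLES('VMART_DESIGN', 'store.store_orders_fact', 'false');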
Permissions To run DESIGNER_ADD_DESIGN_TABLES and add tables to your design, you must be a superuser or a user granted the DBDUSER role who: l Created the design. l Has USAGE privilege on the design table schema. l Is the owner of the design table. Notes You must run DESIGNER_ADD_DESIGN_TABLES before you add design queries to the design. If no tables have been added to the design, HP Vertica does not accept the design queries. HP Vertica Analytic Database (7.0.x) Page 764 of 1539 SQL Reference Manual SQL Functions Examples The following example adds all the VMart tables to the VMART_DESIGN design, and analyzes statistics for those tables: => SELECT DESIGNER_ADD_DESIGN_TABLES( 'VMART_DESIGN', 'online_sales.*', 'true' ); DESIGNER_ADD_DESIGN_TABLES ---------------------------9 (1 row) => SELECT DESIGNER_ADD_DESIGN_TABLES( 'VMART_DESIGN', 'public.*', 'true' ); DESIGNER_ADD_DESIGN_TABLES ---------------------------9 (1 row) => SELECT DESIGNER_ADD_DESIGN_TABLES( 'VMART_DESIGN', 'store.*', 'true' ); DESIGNER_ADD_DESIGN_TABLES ---------------------------3 (1 row) See Also l DESIGNER_ADD_DESIGN_QUERIES l DESIGNER_ADD_DESIGN_QUERIES_FROM_RESULTS l DESIGNER_ADD_DESIGN_QUERY DESIGNER_CANCEL_POPULATE_DESIGN Cancels the population or deployment operation for the specified design if it is currently running. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 765 of 1539 SQL Reference Manual SQL Functions Syntax DESIGNER_CANCEL_POPULATE_DESIGN ( 'design_name' ) Parameters design_name Name of a currently running design that you want to cancel, type VARCHAR. Permissions To run DESIGNER_CANCEL_POPULATE_DESIGN on a design, you must be a superuser, or a user granted the DBDUSER role who created the design. Notes l When you cancel a deployment, the Database Designer cancels the projection refresh operation. It does not roll back any projections that it has already deployed and refreshed. l Use DESIGNER_DROP_DESIGN to clean up any leftover files from the design. Examples The following example cancels a currently running design for VMART_DESIGN and then drops the design: => SELECT DESIGNER_CANCEL_POPULATE_DESIGN ('VMART_DESIGN');=> SELECT DESIGNER_DROP_DESIGN ('VMART_DESIGN'); See Also l DESIGNER_DROP_ALL_DESIGNS l DESIGNER_DROP_DESIGN DESIGNER_CREATE_DESIGN Creates a design with the specified name. Note: Be sure to back up the existing design using the EXPORT_CATALOG function before running the Database Designer functions on an existing schema. You must explicitly back up the existing design when using Database Designer programmatically. HP Vertica Analytic Database (7.0.x) Page 766 of 1539 SQL Reference Manual SQL Functions Behavior Type Immutable Syntax DESIGNER_CREATE_DESIGN ( 'design_name' ) Parameters design_name Name of the design you want to create, type VARCHAR. A Database Designer design name can only contain alphanumeric characters and underscore (_) characters. Permissions To run DESIGNER_CREATE_DESIGN, you must be a superuser, or a user granted the DBDUSER role. Notes l Two users may not have designs with the same name at the same time. 
l DESIGNER_CREATE_DESIGN creates the following system tables in the V_MONITOR schema: n DESIGNS n DESIGN_TABLES n DEPLOYMENT_PROJECTIONS n DEPLOYMENT_PROJECTION_STATEMENTS n DESIGN_QUERIES n OUTPUT_DEPLOYMENT_STATUS n OUTPUT_EVENT_HISTORY Examples The following example creates a design named VMART_DESIGN: HP Vertica Analytic Database (7.0.x) Page 767 of 1539 SQL Reference Manual SQL Functions => SELECT DESIGNER_CREATE_DESIGN('VMART_DESIGN'); DESIGNER_CREATE_DESIGN -----------------------0 (1 row) See Also l DESIGNER_DROP_DESIGN l DESIGNER_DROP_ALL_DESIGNS DESIGNER_DESIGN_PROJECTION_ENCODINGS Analyze encoding in the specified projections, create a script to implement encoding recommendations, and deploy the recommendations. Behavior Type Immutable Syntax DESIGNER_DESIGN_PROJECTION_ENCODINGS ( 'projection_list', 'projection_ddl_script_file, 'deploy' ) Parameters projection_list List of projections for which encoding analysis should be performed: l ''—All projections in the design l '
.*'—All projections in the specified schema l ' . '—The named projection in the specified schema l ' '—The named projection in the PUBLIC schema. If you omit the schema name, DESIGNER_DESIGN_ PROJECTION_ENCODINGS uses the PUBLIC schema. HP Vertica Analytic Database (7.0.x) Page 768 of 1539 SQL Reference Manual SQL Functions projection_ddl_script_file deploy Script to deploy the encoding changes, type VARCHAR. Can be one of the following l Full path to script file l ''—Output results to STDOUT in a foreground process Specifies to deploy the encoding changes or not. Default: 'false'. Privileges l To run DESIGNER_DESIGN_PROJECTION_ENCODINGS on a design, you must n Be the OWNER of the projections for which you want to perform encoding analysis. n Have USAGE privilege on the schema that corresponds to all the specified projections. Examples The following example requests that Database Designer analyze the encoding of the projections of the online_sales schema in the VMart example database, save the SQL statements in the script file encodings.sql, but do not deploy the changes: => SELECT DESIGNER_DESIGN_PROJECTION_ENCODINGS ( 'online_sales.*', 'encodings.sql', 'false' ); DESIGNER_DESIGN_PROJECTION_ENCODINGS -------------------------------------(1 row) DESIGNER_DROP_ALL_DESIGNS Removes all Database Designer-related schemas associated with the current user. Use DESIGNER_DROP_ALL_DESIGNS to remove database objects after one or more Database Designer session completes. Behavior Type Immutable Syntax DESIGNER_DROP_ALL_DESIGNS() HP Vertica Analytic Database (7.0.x) Page 769 of 1539 SQL Reference Manual SQL Functions Parameters None. Privileges If the superuser runs DESIGNER_DROP_ALL_DESIGNS, all designs are dropped. If the DBDUSER runs DESIGNER_DROP_ALL_DESIGNS, the function only drops the designs that the DBDUSER created. Example The following example removes all schema and their contents associated with the current user. DESIGNER_DROP_ALL_DESIGNS returns the number of designs dropped: => SELECT DESIGNER_DROP_ALL_DESIGNS(); DESIGNER_DROP_ALL_DESIGNS --------------------------2 (1 row) See Also l DESIGNER_CANCEL_POPULATE_DESIGN l DESIGNER_DROP_DESIGN DESIGNER_DROP_DESIGN Removes the schema associated with the specified design and all its contents. Use DESIGNER_ DROP_DESIGN after a Database Designer design or deployment completes successfully or terminates unexpectedly. Behavior Type Immutable Syntax DESIGNER_DROP_DESIGN ( 'design_name' ) HP Vertica Analytic Database (7.0.x) Page 770 of 1539 SQL Reference Manual SQL Functions Parameters design_name Name of the design you want to drop, type VARCHAR. To drop all designs you have created, use DESIGNER_DROP_ALL_DESIGNS. Permissions You must be a superuser or a user granted the DBDUSER role who created the design, to run DESIGNER_DROP_DESIGN. Notes If a Database Designer session terminates unexpectedly, you cannot recreate that design with the same name unless you run DESIGNER_DROP_DESIGN to clean up any leftover files. Example The following example deletes the Database Designer design VMART_DESIGN, and all its contents: => SELECT DESIGNER_DROP_DESIGN ('VMART_DESIGN'); See Also l DESIGNER_CANCEL_POPULATE_DESIGN l DESIGNER_DROP_ALL_DESIGNS DESIGNER_OUTPUT_ALL_DESIGN_PROJECTIONS Displays the DDL statements that define the design projections to STDOUT. 
Behavior Type Immutable Syntax DESIGNER_OUTPUT_ALL_DESIGN_PROJECTIONS ( 'design_name' ) HP Vertica Analytic Database (7.0.x) Page 771 of 1539 SQL Reference Manual SQL Functions Parameters design_name Name of the design for which to display the DDL statements that create the design projections, type VARCHAR. Permissions You must be a superuser, or a user assigned the DBDUSER role, to run DESIGNER_OUTPUT_ ALL_DESIGN_PROJECTIONS. Examples The following example returns the design projection DDL statements for vmart_design: => SELECT DESIGNER_OUTPUT_ALL_DESIGN_PROJECTIONS('vmart_design'); CREATE PROJECTION customer_dimension_DBD_1_rep_VMART_DESIGN /*createtype(D)*/ ( customer_key ENCODING DELTAVAL, customer_type ENCODING AUTO, customer_name ENCODING AUTO, customer_gender ENCODING REL, title ENCODING AUTO, household_id ENCODING DELTAVAL, customer_address ENCODING AUTO, customer_city ENCODING AUTO, customer_state ENCODING AUTO, customer_region ENCODING AUTO, marital_status ENCODING AUTO, customer_age ENCODING DELTAVAL, number_of_children ENCODING BLOCKDICT_COMP, annual_income ENCODING DELTARANGE_COMP, occupation ENCODING AUTO, largest_bill_amount ENCODING DELTAVAL, store_membership_card ENCODING BLOCKDICT_COMP, customer_since ENCODING DELTAVAL, deal_stage ENCODING AUTO, deal_size ENCODING DELTARANGE_COMP, last_deal_update ENCODING DELTARANGE_COMP ) AS SELECT customer_key, customer_type, customer_name, customer_gender, title, household_id, customer_address, customer_city, customer_state, customer_region, marital_status, HP Vertica Analytic Database (7.0.x) Page 772 of 1539 SQL Reference Manual SQL Functions customer_age, number_of_children, annual_income, occupation, largest_bill_amount, store_membership_card, customer_since, deal_stage, deal_size, last_deal_update FROM public.customer_dimension ORDER BY customer_gender, annual_income UNSEGMENTED ALL NODES; CREATE PROJECTION product_dimension_DBD_2_rep_VMART_DESIGN /*+createtype(D)*/ ( ... See Also l DESIGNER_OUTPUT_DEPLOYMENT_SCRIPT DESIGNER_OUTPUT_DEPLOYMENT_SCRIPT Displays the deployment script for the specified design to STDOUT. The deployment script includes the CREATE PROJECTION commands that are also in the design script that you can display using DESIGNER_OUTPUT_ALL_DESIGN_PROJECTIONS. If you have already deployed the design, this function does not return the script. Behavior Type Immutable Syntax DESIGNER_OUTPUT_DEPLOYMENT_SCRIPT ( 'design_name' ) Parameters design_name Name of the design for which you want to display the deployment script, type VARCHAR. HP Vertica Analytic Database (7.0.x) Page 773 of 1539 SQL Reference Manual SQL Functions Permissions To run DESIGNER_OUTPUT_DEPLOYMENT_SCRIPT, you must be a superuser or a user granted the DBDUSER role who created the design. Examples The following example displays the deployment script for VMART_DESIGN at STDOUT: => SELECT DESIGNER_OUTPUT_DEPLOYMENT_SCRIPT('VMART_DESIGN'); CREATE PROJECTION customer_dimension_DBD_1_rep_VMART_DESIGN /*createtype(D)*/ ... CREATE PROJECTION product_dimension_DBD_2_rep_VMART_DESIGN /*+createtype(D)*/ ... select refresh('public.customer_dimension, public.product_dimension, public.promotion.dimension, public.date_dimension'); select make_ahm_now(); DROP PROJECTION public.customer_dimension_super CASCADE; DROP PROJECTION public.product_dimension_super CASCADE; ... See Also l DESIGNER_OUTPUT_ALL_DESIGN_PROJECTIONS DESIGNER_RESET_DESIGN Clears the specified design but preserves the design's configuration such as parameters, tables, and queries. 
Running this function allows you to make additional changes to the design with or without deployment. DESIGNER_RESET_DESIGN discards all the run-specific information of the previous Database Designer build or deployment of the specified design but keeps its configuration. You can make changes to the design as needed, for example, by changing parameters or adding additional tables and/or queries, before rerunning the design. Behavior Type Immutable Syntax DESIGNER_RESET_DESIGN ( 'design_name' ) HP Vertica Analytic Database (7.0.x) Page 774 of 1539 SQL Reference Manual SQL Functions Parameters design_name Name of the design you want to reset, type VARCHAR. Permissions You must be a superuser or a user granted the DBDUSER role who created the design, to run DESIGNER_RESET_DESIGN. Example The following example resets the Database Designer design VMART_DESIGN: => SELECT DESIGNER_RESET_DESIGN ('VMART_DESIGN'); DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY Populates the design and creates the design and deployment scripts. If specified, DESIGNER_ RUN_POPULATE_DESIGN_AND_DEPLOY also analyzes statistics, deploys the design, and drops the workspace after the deployment. Note: Make sure to back up the existing design using the EXPORT_CATALOG function before running the Database Designer functions on an existing schema. You must explicitly back up the existing design when using Database Designer programmatically. Behavior Type Immutable Syntax DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY ( 'design_name', 'output_design_file', 'output_deployment_file', [ 'analyze_statistics', ] [ 'deploy', ] [ 'drop_design_workspace', ] [ 'continue_after_error', ] ) HP Vertica Analytic Database (7.0.x) Page 775 of 1539 SQL Reference Manual SQL Functions Parameters design_name Name of the design that you want to populate and deploy, type VARCHAR. output_design_file Absolute path for saving the file that contains the DDL statements that create the design projections, type VARCHAR. output_deployment_file Absolute path for saving the file that contains the deployment script, type VARCHAR. analyze_statistics (Optional) BOOLEAN that specifies whether or not to collect or refresh statistics for the tables before populating the design. Default is 'false'. Accurate statistics help Database Designer optimize compression and query performance. Updating statistics takes time and resources. If 'true', executes ANALYZE_STATISTICS. If ANALYZE_STATISTICS has run recently, set this parameter to 'false'. deploy (Optional) BOOLEAN that specifies whether or not to deploy the Database Designer design using the deployment script created by this function. Default: 'true'. drop_design_workspace (Optional) BOOLEAN that specifies whether or not to drop the design workspace after the design has been deployed. Default: 'true'. continue_after_error (Optional) BOOLEAN that specifies whether DESIGNER_RUN_ POPULATE_DESIGN_AND_DEPLOY should continue running if an error occurs. Default: 'false'. Permissions To run DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY, you must l Be a superuser, or a user granted the DBDUSER role who created the design. l Have WRITE privilege on the storage locations of the design and deployment scripts. If you do not have permission, DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY creates and deploys the design (if the deploy parameter is 'true'), but it cannot save the design and deployment scripts. 
Notes l Prior to calling DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY, you must: n Create a design, a logical schema with tables. n Associate tables with the design. n Load queries to the design. n Set the design properties (K-safety level, mode, and policy). l DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY does not create a backup copy of the current design before deploying the new design. Examples The following example creates projections for and deploys the VMART_DESIGN design, and analyzes statistics about the design tables. If an error occurs during execution, DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY terminates. => SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY ( 'VMART_DESIGN', '/tmp/examples/vmart_design_files/vmart_design_DDL', '/tmp/examples/vmart_design_files/vmart_design_deployment_scripts', 'true', 'false', 'false' ); DESIGNER_SET_ANALYZE_CORRELATIONS_MODE Specifies how Database Designer should handle column correlations in a design. Depending on what mode you set, Database Designer analyzes or reanalyzes existing column correlations and considers them when creating a database design. Behavior Type Immutable Syntax DESIGNER_SET_ANALYZE_CORRELATIONS_MODE ( 'design_name', analyze_correlations_mode ) Parameters design_name Name of the design for which to specify how Database Designer handles correlated columns, type VARCHAR. analyze_correlations_mode Specifies the mode for analyzing correlations, type INTEGER. Default: 0. l 0—(Default) When creating a design, ignore any column correlations in the specified tables. l 1—Consider the existing correlations in the tables when creating the design. If you set the mode to 1, and there are no existing correlations, Database Designer does not consider correlations. l 2—Analyze column correlations on tables where the correlation analysis was not previously performed. When creating the design, consider all column correlations (new and existing). l 3—Analyze all column correlations in the tables and consider them when creating the design. Even if correlations exist for a table, reanalyze the table for correlations. Permissions l To run DESIGNER_SET_ANALYZE_CORRELATIONS_MODE on a design, you must be a superuser, or a user assigned the DBDUSER role with USAGE privilege on the design schema. Notes l Analyzing column correlations typically needs to be done only once. l HP Vertica recommends that you analyze correlations when the row count is at least DBDCorrelationSampleRowCount, which defaults to 4000. l To enable correlation analysis, you need to run DESIGNER_SET_ANALYZE_CORRELATIONS_MODE for each design. l Setting the correlation analysis mode does not have any effect on whether or not Database Designer analyzes statistics when creating a design. Example The following example specifies that Database Designer analyze all correlated columns and consider them when creating a design: => SELECT DESIGNER_SET_ANALYZE_CORRELATIONS_MODE ( 'VMART_DESIGN', 3); DESIGNER_SET_ANALYZE_CORRELATIONS_MODE ---------------------------------------3 (1 row) See Also l ANALYZE_CORRELATIONS DESIGNER_SET_DESIGN_KSAFETY Sets the K-safety value for a comprehensive design and stores the K-safety value in the DESIGNS table.
Behavior Type Immutable Syntax DESIGNER_SET_DESIGN_KSAFETY ( 'design_name' [, ksafety_level ] ) Parameters design_name Name of the design for which you want to set the K-safety value, type VARCHAR. HP Vertica Analytic Database (7.0.x) Page 779 of 1539 SQL Reference Manual SQL Functions ksafety_level Value of K-safety that you want for the specified design, type INTEGER. The value must be a valid K-safety value for your cluster. For example, if you have a three- or four-node cluster, you cannot set the K-safety to 2. If you do not set the design K-safety using this function, the defaults are: l Number of nodes = 1 or 2, K-safety = 0. l Number of nodes => 3, K-safety = 1. If you are a DBDUSER, the system K-safety value is unchanged after creating your design. If you are a DBADMIN user and you set the design K-safety to a value: l Lower than the system K-safety, the system K-safety changes to the lower value after Database Designer deploys the design. l Higher than the system K-safety, Database Designer creates the correct number of buddy projections for the specified value. If you create the design on all tables in the database, or all the tables in your database have the right number of buddy projections, Database Designer changes the system Ksafety to that value if it’s valid on your cluster. If you are not creating a design on all tables in the database and some of the tables do not have enough buddy projections, Database Designer gives a warning that the system K-safety cannot change and recommends which tables need buddy projections in order to raise the K -safety value. Permissions To run DESIGNER_SET_DESIGN_KSAFETY, you must be a superuser, or a user granted the DBDUSER role who created the design. Notes l You cannot change the K-safety value of an incremental design. Incremental designs assume the K-safety value of the database. Examples The following examples set the K-safety level for the VMART_DESIGN design: => SELECT DESIGNER_SET_DESIGN_KSAFETY('VMART_DESIGN', 1); => SELECT DESIGNER_SET_DESIGN_KSAFETY('VMART_DESIGN', 2); HP Vertica Analytic Database (7.0.x) Page 780 of 1539 SQL Reference Manual SQL Functions See Also l DESIGNER_SET_OPTIMIZATION_OBJECTIVE l DESIGNER_SET_DESIGN_TYPE l DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS DESIGNER_SET_DESIGN_TYPE Specifies whether Database Designer should create an initial or replacement design (the COMPREHENSIVE option) or make incremental changes to the existing design using the design queries you loaded into the design (the INCREMENTAL option). DESIGNER_SET_DESIGN_TYPE stores the design mode in the DESIGNS table. If you do not specify a design mode, Database Designer creates a comprehensive design. Behavior Type Immutable Syntax DESIGNER_SET_DESIGN_TYPE ( 'design_name', 'mode_name' ) Parameters design_name Name of the design for which you want to specify the design mode, type VARCHAR. mode_name Name of the mode that Database Designer should use when designing the database, type VARCHAR. Valid values are: l 'COMPREHENSIVE' l 'INCREMENTAL' Permissions You must be a superuser, or a user assigned the DBDUSER role, to run DESIGNER_SET_ DESIGN_TYPE. Notes Incremental designs always inherit the K-safety value of the database. 
HP Vertica Analytic Database (7.0.x) Page 781 of 1539 SQL Reference Manual SQL Functions Examples The following examples show the two design mode options for the VMART_DESIGN design: => SELECT DESIGNER_SET_DESIGN_TYPE( 'VMART_DESIGN', 'COMPREHENSIVE'); DESIGNER_SET_DESIGN_TYPE -------------------------0 (1 row) => SELECT DESIGNER_SET_DESIGN_TYPE( 'VMART_DESIGN', 'INCREMENTAL'); DESIGNER_SET_DESIGN_TYPE -------------------------0 (1 row) See Also l DESIGNER_SET_DESIGN_KSAFETY l DESIGNER_SET_OPTIMIZATION_OBJECTIVE l DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS DESIGNER_SET_OPTIMIZATION_OBJECTIVE Designates what optimization objective Database Designer should use when optimizing your database design: l QUERY—Optimize for query performance, so that the queries run faster. This can result in a larger database storage footprint because additional projections might be created. l LOAD—Optimize for load performance, so that the size of the database is minimized. This can result in slower query performance. l BALANCED—Balance the design between query performance and database size. DESIGNER_SET_OPTIMIZATION_OBJECTIVE stores the optimization objective in the DESIGNS table. Behavior Type Immutable HP Vertica Analytic Database (7.0.x) Page 782 of 1539 SQL Reference Manual SQL Functions Syntax DESIGNER_SET_OPTIMIZATION_OBJECTIVE ( 'design_name', 'policy_name' ) Parameters design_name Name of the design for which you want to specify the optimization policy, type VARCHAR. policy_name Name of the optimization policy for Database Designer to use when designing the database, type VARCHAR. Valid values are: l 'QUERY' l 'LOAD' l 'BALANCED' Permissions To run DESIGNER_SET_OPTIMIZATION_OBJECTIVE, you must be a superuser, or a user granted the DBDUSER role who created the design. Notes The optimization only applies to a comprehensive design; Database Designer ignores this value for incremental designs. Examples The following examples show the three optimization objective options for the VMART_DESIGN design: => SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE( 'VMART_DESIGN', 'LOAD'); DESIGNER_SET_OPTIMIZATION_OBJECTIVE ----------------------------------0 (1 row) => SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE( 'VMART_DESIGN', 'QUERY'); DESIGNER_SET_OPTIMIZATION_OBJECTIVE -----------------------------------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 783 of 1539 SQL Reference Manual SQL Functions => SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE( 'VMART_DESIGN', 'BALANCED'); DESIGNER_SET_OPTIMIZATION_OBJECTIVE -----------------------------------0 (1 row) See Also l DESIGNER_SET_DESIGN_KSAFETY l DESIGNER_SET_DESIGN_TYPE l DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS DESIGNER_SET_PROPOSE_UNSEGMENTED_ PROJECTIONS Specifies that Database Designer can propose unsegmented projections for all design tables and stores the setting in the DESIGNS table. Segmentation splits individual projections into chunks of data of similar size, called segments. One segment is created for and stored on each node. Projection segmentation provides high availability and recovery, and optimizes query execution. For more information about segmentation, see Projection Segmentation. Behavior Type Immutable Syntax DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS ( 'design_name', unsegmented ) Parameters design_name Name of the design for which you want segmented projections, type VARCHAR. unsegmented If 'true', Database Designer can proposed unsegmented projections for design tables. 
If 'false' (the default), Database Designer proposes only segmented projections. Permissions To run DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS, you must be a superuser, or a user granted the DBDUSER role who created the design. Notes DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS has no effect on one-node clusters; all projections are unsegmented. Example The following example specifies that Database Designer propose a database design where all projections are segmented: => SELECT DESIGNER_SET_PROPOSE_UNSEGMENTED_PROJECTIONS( 'VMART_DESIGN', 'false'); See Also l DESIGNER_SET_DESIGN_KSAFETY l DESIGNER_SET_DESIGN_TYPE l DESIGNER_SET_OPTIMIZATION_OBJECTIVE DESIGNER_WAIT_FOR_DESIGN Waits for a currently running design to complete. DESIGNER_WAIT_FOR_DESIGN waits for operations that are populating and deploying the design. Behavior Type Immutable Syntax DESIGNER_WAIT_FOR_DESIGN ( 'design_name' ) Parameters design_name Name of the currently running design that you want to complete, type VARCHAR. Permissions To run DESIGNER_WAIT_FOR_DESIGN on a currently running design, you must be a superuser, or a user granted the DBDUSER role with USAGE privilege on the design schema. Notes If DESIGNER_WAIT_FOR_DESIGN is running in the foreground and still waiting, you can enter Ctrl+C to stop DESIGNER_WAIT_FOR_DESIGN and return control to the user. Examples The following example waits for the currently running design of VMART_DESIGN to complete: => SELECT DESIGNER_WAIT_FOR_DESIGN ('VMART_DESIGN'); See Also l DESIGNER_CANCEL_POPULATE_DESIGN l DESIGNER_DROP_ALL_DESIGNS l DESIGNER_DROP_DESIGN Database Management Functions This section contains the database management functions specific to HP Vertica. CLEAR_RESOURCE_REJECTIONS Clears the content of the RESOURCE_REJECTIONS and DISK_RESOURCE_REJECTIONS system tables. Normally, these tables are cleared only during a node restart. This function lets you clear the tables whenever you need to. For example, you might want to clear the system tables after you have resolved a disk space issue that was causing disk resource rejections. Syntax CLEAR_RESOURCE_REJECTIONS(); Privileges Must be a superuser. Example The following command clears the content of the RESOURCE_REJECTIONS and DISK_RESOURCE_REJECTIONS system tables: => SELECT clear_resource_rejections(); clear_resource_rejections --------------------------- OK (1 row) See Also l DISK_RESOURCE_REJECTIONS l RESOURCE_REJECTIONS DUMP_LOCKTABLE Returns information about deadlocked clients and the resources they are waiting for. Syntax DUMP_LOCKTABLE() Privileges None Notes Use DUMP_LOCKTABLE if HP Vertica becomes unresponsive: 1. Open an additional vsql connection. 2. Execute the query: => SELECT DUMP_LOCKTABLE(); The output is written to vsql. See Monitoring the Log Files. You can also see who is connected using the following command: => SELECT * FROM SESSIONS; Close all sessions using the following command: => SELECT CLOSE_ALL_SESSIONS(); Close a single session using the following command: => SELECT CLOSE_SESSION('session_id'); You get the session_id value from the V_MONITOR.SESSIONS system table.
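For example, a minimal sketch of that sequence (the session_id shown is hypothetical; use a value returned by your own query against V_MONITOR.SESSIONS):

=> SELECT session_id, user_name, current_statement FROM v_monitor.sessions;
=> SELECT CLOSE_SESSION('myhost.example.com-32413:0x51ef');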
See Also l CLOSE_ALL_SESSIONS l CLOSE_SESSION l LOCKS l SESSIONS DUMP_PARTITION_KEYS Dumps the partition keys of all projections in the system. HP Vertica Analytic Database (7.0.x) Page 788 of 1539 SQL Reference Manual SQL Functions Syntax DUMP_PARTITION_KEYS( ) Note: The ROS objects of partitioned tables without partition keys are ignored by the tuple mover and are not merged during automatic tuple mover operations. Privileges None; however function dumps only the tables for which user has SELECT privileges. Example => SELECT DUMP_PARTITION_KEYS( ); Partition keys on node v_vmart_node0001 Projection 'states_b0' Storage [ROS container] No of partition keys: 1 Partition keys: NH Storage [ROS container] No of partition keys: 1 Partition keys: MA Projection 'states_b1' Storage [ROS container] No of partition keys: 1 Partition keys: VT Storage [ROS container] No of partition keys: 1 Partition keys: ME Storage [ROS container] No of partition keys: 1 Partition keys: CT See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l PARTITION_TABLE HP Vertica Analytic Database (7.0.x) Page 789 of 1539 SQL Reference Manual SQL Functions l PARTITIONS l Working with Table Partitions EXPORT_TABLES Generates a SQL script that can be used to recreate a logical schema (schemas, tables, constraints, and views) on a different cluster. Syntax EXPORT_TABLES ( [ 'destination' ] , [ 'scope' ] ) Parameters destination Specifies the path and name of the SQL output file. An empty string (''), which is the default, outputs the script to standard output. The function writes the script to the catalog directory if no destination is specified. If you specify a file that does not exist, the function creates one. If the file preexists, the function silently overwrites its contents. scope Determines the tables to export. Specify the scope as follows: l An empty string (' ')—exports all non-virtual table objects to which the user has access, including table schemas, sequences, and constraints. Exporting all non-virtual objects is the default scope, and what the function exports if you do not specify a scope. l A comma-delimited list of objects, which can include the following: n ' [dbname.][schema.]object '—matches the named objects, which can be schemas, tables, or views, in the schema. You can optionally qualify a schema with a database prefix, and objects with a schema. You cannot pass constraints as individual arguments. n ' [dbname.]object '—matches a named object, which can be a schema, table, or view. You can optionally qualify a schema with a database prefix, and an object with its schema. For a schema, HP Vertica exports all non-virtual objects to which the user has access within the schema. If a schema and table both have the same name, the schema takes precedence. Privileges None; however: HP Vertica Analytic Database (7.0.x) Page 790 of 1539 SQL Reference Manual SQL Functions l Function exports only the objects visible to the user l Only a superuser can export output to file Example The following example exports the store_orders_fact table of the store schema (in the current database) to standard output: => SELECT EXPORT_TABLES(' ','store.store_orders_fact'); EXPORT_TABLES returns an error if: l You explicitly specify an object that does not exist l The current user does not have access to a specified object See Also EXPORT_CATALOG l EXPORT_OBJECTS l l HAS_ROLE Indicates, with a Boolean value, whether a role has been assigned to a user. 
This function is useful for letting you check your own role membership. Behavior Type Stable Syntax 1 HAS_ROLE( [ 'user_name' ,] 'role_name' ); Syntax 2 HAS_ROLE( 'role_name' ); HP Vertica Analytic Database (7.0.x) Page 791 of 1539 SQL Reference Manual SQL Functions Parameters user_name [Optional] The name of a user to look up. Currently, only a superuser can supply the user_name argument. role_name The name of the role you want to verify has been granted. Privileges Users can check their own role membership by calling HAS_ROLE('role_name'), but only a superuser can look up other users' memberships using the optional user_name parameter. Notes You can query V_CATALOG system tables ROLES, GRANTS, and USERS to show any directlyassigned roles; however, these tables do not indicate whether a role is available to a user when roles may be available through other roles (indirectly). Examples User Bob wants to see if he has been granted the commentor role: => SELECT HAS_ROLE('commentor'); Output t for true indicates that Bob has been assigned the commentor role: HAS_ROLE ---------t (1 row) In the following function call, a superuser checks if the logadmin role has been granted to user Bob: => SELECT HAS_ROLE('Bob', 'logadmin'); HAS_ROLE ---------t (1 row) To view the names of all roles users can access, along with any roles that have been assigned to those roles, query the V_CATALOG.ROLES system table. An asterisk in the output means role granted WITH ADMIN OPTION. => SELECT * FROM roles; HP Vertica Analytic Database (7.0.x) Page 792 of 1539 SQL Reference Manual SQL Functions role_id | name | assigned_roles -------------------+-----------------+---------------45035996273704964 | public | 45035996273704966 | dbduser | 45035996273704968 | dbadmin | dbduser* 45035996273704972 | pseudosuperuser | dbadmin* 45035996273704974 | logreader | 45035996273704976 | logwriter | 45035996273704978 | logadmin | logreader, logwriter (7 rows) See Also l GRANTS l ROLES l USERS l Managing Users and Privileges l Viewing a user's Role SET_CONFIG_PARAMETER Use SET_CONFIG_PARAMETER to specify the value of a configuration parameter. Note: HP Vertica is designed to operate with minimal configuration changes. Use this function sparingly and carefully follow any documented guidelines for that parameter. If a node is down when you invoke this function, changes will occur on UP nodes only. You must reissue the function after down nodes recover in order for the changes to take effect on those nodes. Alternatively, use the Administration Tools to copy the files. Redistributing Configuration Files to Nodes. Syntax SET_CONFIG_PARAMETER( 'parameter', value ) Parameters parameter Specifies the name of the parameter value being set. See Configuration Parameters in the Administrator's Guide for a list of supported parameters, their function, and usage examples. HP Vertica Analytic Database (7.0.x) Page 793 of 1539 SQL Reference Manual SQL Functions value Specifies the value of the supplied parameter argument. Syntax for this argument will vary depending upon the parameter and its expected data type. For strings, you must enclose the argument in single quotes; integer arguments can be unquoted. You can also query the V_MONITOR.CONFIGURATION_PARAMETERS system table to get information about configuration parameters currently in use by the system. Examples The following example sets the AnalyzeRowCountInterval parameter to 3600. 
SELECT SET_CONFIG_PARAMETER ('AnalyzeRowCountInterval',3600); The following statement returns all current configuration parameters and information about them, including their current and default values: => SELECT * FROM CONFIGURATION_PARAMETERS; SHUTDOWN Forces a database to shut down, even if there are users connected. Syntax SHUTDOWN ( [ 'false' | 'true' ] ) Parameters false [Default] Returns a message if users are connected. Has the same effect as supplying no parameters. true Performs a moveout operation and forces the database to shut down, disallowing further connections. Privileges Must be a superuser. Notes l Quotes around the true or false arguments are optional. HP Vertica Analytic Database (7.0.x) Page 794 of 1539 SQL Reference Manual SQL Functions l Issuing the shutdown command without arguments or with the default (false) argument returns a message if users are connected, and the shutdown fails. If no users are connected, the database performs a moveout operation and shuts down. l Issuing the SHUTDOWN('true') command forces the database to shut down whether users are connected or not. l You can check the status of the shutdown operation in the vertica.log file: 2010-03-09 16:51:52.625 unknown:0x7fc6d6d2e700 [Init] Shutdown complete. Exiting. l As an alternative to SHUTDOWN(), you can also temporarily set MaxClientSessions to 0 and then use CLOSE_ALL_SESSIONS(). New client connections cannot connect unless they connect using the dbadmin account. See CLOSE_ALL_SESSIONS for details. Examples The following command attempts to shut down the database. Because users are connected, the command fails: => SELECT SHUTDOWN('false'); NOTICE: Cannot shut down while users are connected SHUTDOWN ----------------------------Shutdown: aborting shutdown (1 row) SHUTDOWN() and SHUTDOWN('false') perform the same operation: => SELECT SHUTDOWN(); NOTICE: Cannot shut down while users are connected SHUTDOWN ----------------------------Shutdown: aborting shutdown (1 row) Using the 'true' parameter forces the database to shut down, even though clients might be connected: => SELECT SHUTDOWN('true'); SHUTDOWN ---------------------------Shutdown: moveout complete (1 row) HP Vertica Analytic Database (7.0.x) Page 795 of 1539 SQL Reference Manual SQL Functions See Also l SESSIONS HP Vertica Analytic Database (7.0.x) Page 796 of 1539 SQL Reference Manual SQL Functions Epoch Management Functions This section contains the epoch management functions specific to HP Vertica. ADVANCE_EPOCH Manually closes the current epoch and begins a new epoch. Syntax ADVANCE_EPOCH ( [ integer ] ) Parameters integer Specifies the number of epochs to advance. Privileges Must be a superuser. Notes This function is primarily maintained for backward compatibility with earlier versions of HP Vertica. Example The following command increments the epoch number by 1: => SELECT ADVANCE_EPOCH(1); See Also l ALTER PROJECTION RENAME GET_AHM_EPOCH Returns the number of the epoch in which the Ancient History Mark is located. Data deleted up to and including the AHM epoch can be purged from physical storage. Syntax GET_AHM_EPOCH() HP Vertica Analytic Database (7.0.x) Page 797 of 1539 SQL Reference Manual SQL Functions Note: The AHM epoch is 0 (zero) by default (purge is disabled). Privileges None Examples SELECT GET_AHM_EPOCH(); GET_AHM_EPOCH ---------------------Current AHM epoch: 0 (1 row) GET_AHM_TIME Returns a TIMESTAMP value representing the Ancient History Mark. 
Data deleted up to and including the AHM epoch can be purged from physical storage. Syntax GET_AHM_TIME() Privileges None Examples SELECT GET_AHM_TIME(); GET_AHM_TIME ------------------------------------------------Current AHM Time: 2010-05-13 12:48:10.532332-04 (1 row) See Also l SET DATESTYLE l TIMESTAMP HP Vertica Analytic Database (7.0.x) Page 798 of 1539 SQL Reference Manual SQL Functions GET_CURRENT_EPOCH The epoch into which data (COPY, INSERT, UPDATE, and DELETE operations) is currently being written. The current epoch advances automatically every three minutes. Returns the number of the current epoch. Syntax GET_CURRENT_EPOCH() Privileges None Examples SELECT GET_CURRENT_EPOCH(); GET_CURRENT_EPOCH ------------------683 (1 row) GET_LAST_GOOD_EPOCH A term used in manual recovery, LGE (Last Good Epoch) refers to the most recent epoch that can be recovered. Returns the number of the last good epoch. Syntax GET_LAST_GOOD_EPOCH() Privileges None Examples SELECT GET_LAST_GOOD_EPOCH(); GET_LAST_GOOD_EPOCH --------------------- HP Vertica Analytic Database (7.0.x) Page 799 of 1539 SQL Reference Manual SQL Functions 682 (1 row) MAKE_AHM_NOW Sets the Ancient History Mark (AHM) to the greatest allowable value, and lets you drop any projections that existed before the issue occurred. Caution: This function is intended for use by Administrators only. Syntax MAKE_AHM_NOW ( [ true ] ) Parameters true [Optional] Allows AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch. Use with care. Privileges Must be a superuser. Notes l l The MAKE_AHM_NOW function performs the following operations: n Advances the epoch. n Performs a moveout operation on all projections. n Sets the AHM to LGE — at least to the current epoch at the time MAKE_AHM_NOW() was issued. All history is lost and you cannot perform historical queries prior to the current epoch. Example => SELECT MAKE_AHM_NOW(); MAKE_AHM_NOW ------------------------------ HP Vertica Analytic Database (7.0.x) Page 800 of 1539 SQL Reference Manual SQL Functions AHM set (New AHM Epoch: 683) (1 row) The following command allows the AHM to advance, even though node 2 is down: => SELECT WARNING: WARNING: WARNING: MAKE_AHM_NOW(true); Received no response from v_vmartdb_node0002 in get cluster LGE Received no response from v_vmartdb_node0002 in get cluster LGE Received no response from v_vmartdb_node0002 in set AHM MAKE_AHM_NOW -----------------------------AHM set (New AHM Epoch: 684) (1 row) See Also l DROP PROJECTION l MARK_DESIGN_KSAFE l SET_AHM_EPOCH l SET_AHM_TIME SET_AHM_EPOCH Sets the Ancient History Mark (AHM) to the specified epoch. This function allows deleted data up to and including the AHM epoch to be purged from physical storage. SET_AHM_EPOCH is normally used for testing purposes. Consider SET_AHM_TIME instead, which is easier to use. Syntax SET_AHM_EPOCH ( epoch, [ true ] ) Parameters epoch Specifies one of the following: l The number of the epoch in which to set the AHM l Zero (0) (the default) disables PURGE HP Vertica Analytic Database (7.0.x) Page 801 of 1539 SQL Reference Manual SQL Functions true Optionally allows the AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch. Use with care. Privileges Must be a superuser. 
Notes If you use SET_AHM_EPOCH , the number of the specified epoch must be: l Greater than the current AHM epoch l Less than the current epoch l Less than or equal to the cluster last good epoch (the minimum of the last good epochs of the individual nodes in the cluster) l Less than or equal to the cluster refresh epoch (the minimum of the refresh epochs of the individual nodes in the cluster) Use the SYSTEM table to see current values of various epochs related to the AHM; for example: => SELECT * from SYSTEM; -[ RECORD 1 ]------------+--------------------------current_timestamp | 2009-08-11 17:09:54.651413 current_epoch | 1512 ahm_epoch | 961 last_good_epoch | 1510 refresh_epoch | -1 designed_fault_tolerance | 1 node_count | 4 node_down_count | 0 current_fault_tolerance | 1 catalog_revision_number | 1590 wos_used_bytes | 0 wos_row_count | 0 ros_used_bytes | 41490783 ros_row_count | 1298104 total_used_bytes | 41490783 total_row_count | 1298104 All nodes must be up. You cannot use SET_AHM_EPOCH when any node in the cluster is down, except by using the optional true parameter. When a node is down and you issue SELECT MAKE_AHM_NOW(), the following error is printed to the vertica.log: HP Vertica Analytic Database (7.0.x) Page 802 of 1539 SQL Reference Manual SQL Functions Some nodes were excluded from setAHM. If their LGE is before the AHM they will perform fu ll recovery. Examples The following command sets the AHM to a specified epoch of 12: => SELECT SET_AHM_EPOCH(12); The following command sets the AHM to a specified epoch of 2 and allows the AHM to advance despite a failed node: => SELECT SET_AHM_EPOCH(2, true); See Also l MAKE_AHM_NOW l SET_AHM_TIME l SYSTEM SET_AHM_TIME Sets the Ancient History Mark (AHM) to the epoch corresponding to the specified time on the initiator node. This function allows historical data up to and including the AHM epoch to be purged from physical storage. Syntax SET_AHM_TIME ( time , [ true ] ) Parameters time Is a TIMESTAMP value that is automatically converted to the appropriate epoch number. true [Optional] Allows the AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch. Privileges Must be a superuser. HP Vertica Analytic Database (7.0.x) Page 803 of 1539 SQL Reference Manual SQL Functions Notes l SET_AHM_TIME returns a TIMESTAMP WITH TIME ZONE value representing the end point of the AHM epoch. l You cannot change the AHM when any node in the cluster is down, except by using the optional true parameter. l When a node is down and you issue SELECT MAKE_AHM_NOW(), the following error is printed to the vertica.log: Some nodes were excluded from setAHM. If their LGE is before the AHM they will perform full recovery. Examples Epochs depend on a configured epoch advancement interval. If an epoch includes a three-minute range of time, the purge operation is accurate only to within minus three minutes of the specified timestamp: => SELECT SET_AHM_TIME('2008-02-27 18:13'); set_ahm_time -----------------------------------AHM set to '2008-02-27 18:11:50-05' (1 row) Note: The –05 part of the output string is a time zone value, an offset in hours from UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, or GMT). In the previous example, the actual AHM epoch ends at 18:11:50, roughly one minute before the specified timestamp. This is because SET_AHM_TIME selects the epoch that ends at or before the specified timestamp. 
It does not select the epoch that ends after the specified timestamp because that would purge data deleted as much as three minutes after the AHM. For example, using only hours and minutes, suppose that epoch 9000 runs from 08:50 to 11:50 and epoch 9001 runs from 11:50 to 15:50. SET_AHM_TIME('11:51') chooses epoch 9000 because it ends roughly one minute before the specified timestamp. In the next example, if given an environment variable set as date =`date`; the following command fails if a node is down: => SELECT SET_AHM_TIME('$date'); In order to force the AHM to advance, issue the following command instead: => SELECT SET_AHM_TIME('$date', true); HP Vertica Analytic Database (7.0.x) Page 804 of 1539 SQL Reference Manual SQL Functions See Also l MAKE_AHM_NOW l SET_AHM_EPOCH l SET DATESTYLE l TIMESTAMP Flex Table Functions This section contains helper functions for use in working with flex tables. Note: While the functions are available to all users, they are applicable only to flex table, their associated flex_table_keys table and flex_table_view views. By computing keys and creating views from flex table data, the functions facilitate SELECT queries. One function restores the original keys table and view that were made when you first created the flex table. For more information, see the Flex Tables Guide. COMPUTE_FLEXTABLE_KEYS Computes the virtual columns (keys and values) from the map data of a flex table and repopulates the associated _keys table. The keys table has the following columns: l key_name l frequency l data_type_guess This function sorts the keys table by frequency and key_name. Use this function to compute keys without creating an associated table view. To build a view as well, use COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW. Usage compute_flextable_keys('flex_table') Arguments flex_table The name of the flex table. HP Vertica Analytic Database (7.0.x) Page 805 of 1539 SQL Reference Manual SQL Functions Examples During execution, this function determines a data type for each virtual column, casting the values it computes to VARCHAR, LONG VARCHAR, or LONG VARBINARY, depending on the length of the key, and whether the key includes nested maps. The following examples illustrate this function and the results of populating the _keys table, once you've created a flex table (darkdata1) and loaded data: kdb=> create flex table darkdata1(); CREATE TABLE kdb=> copy darkdata1 from '/test/flextable/DATA/tweets_12.json' parser fjsonparser(); Rows Loaded ------------12 (1 row) kdb=> select compute_flextable_keys('darkdata1'); compute_flextable_keys -------------------------------------------------Please see public.darkdata1_keys for updated keys (1 row) kdb=> select * from darkdata1_keys; key_name | frequency | data_type_guess ----------------------------------------------------------+-----------+--------------------contributors | 8 | varchar(20) coordinates | 8 | varchar(20) created_at | 8 | varchar(60) entities.hashtags | 8 | long varbinary(18 6) entities.urls | 8 | long varbinary(3 2) entities.user_mentions | 8 | long varbinary(67 4) . . . retweeted_status.user.time_zone | 1 | varchar(20) retweeted_status.user.url | 1 | varchar(68) retweeted_status.user.utc_offset | 1 | varchar(20) retweeted_status.user.verified | 1 | varchar(20) (125 rows) The flex keys table has these columns: Column Description key_name The name of the virtual column (key). frequency The number of times the virtual column occurs in the map. 
HP Vertica Analytic Database (7.0.x) Page 806 of 1539 SQL Reference Manual SQL Functions Column Description data_ type_ guess The data type for each virtual column, cast to VARCHAR, LONG VARCHAR or LONG VARBINARY, depending on the length of the key, and whether the key includes one or more nested maps. In the _keys table output, the data_type_guess column values are also followed by a value in parentheses, such as varchar(20). The value indicates the padded width of the key column, as calculated by the longest field, multiplied by the FlexTableDataTypeGuessMultiplier configuration parameter value. For more information, see Setting Flex Table Parameters. See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l MATERIALIZE_FLEXTABLE_COLUMNS l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW COMPUTE_FLEXTABLE_KEYS Computes the virtual columns (keys and values) from the map data of a flex table and repopulates the associated _keys table. The keys table has the following columns: l key_name l frequency l data_type_guess This function sorts the keys table by frequency and key_name. Use this function to compute keys without creating an associated table view. To build a view as well, use COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW. Usage compute_flextable_keys('flex_table') Arguments flex_table The name of the flex table. HP Vertica Analytic Database (7.0.x) Page 807 of 1539 SQL Reference Manual SQL Functions Examples During execution, this function determines a data type for each virtual column, casting the values it computes to VARCHAR, LONG VARCHAR, or LONG VARBINARY, depending on the length of the key, and whether the key includes nested maps. The following examples illustrate this function and the results of populating the _keys table, once you've created a flex table (darkdata1) and loaded data: kdb=> create flex table darkdata1(); CREATE TABLE kdb=> copy darkdata1 from '/test/flextable/DATA/tweets_12.json' parser fjsonparser(); Rows Loaded ------------12 (1 row) kdb=> select compute_flextable_keys('darkdata1'); compute_flextable_keys -------------------------------------------------Please see public.darkdata1_keys for updated keys (1 row) kdb=> select * from darkdata1_keys; key_name | frequency | data_type_guess ----------------------------------------------------------+-----------+--------------------contributors | 8 | varchar(20) coordinates | 8 | varchar(20) created_at | 8 | varchar(60) entities.hashtags | 8 | long varbinary(18 6) entities.urls | 8 | long varbinary(3 2) entities.user_mentions | 8 | long varbinary(67 4) . . . retweeted_status.user.time_zone | 1 | varchar(20) retweeted_status.user.url | 1 | varchar(68) retweeted_status.user.utc_offset | 1 | varchar(20) retweeted_status.user.verified | 1 | varchar(20) (125 rows) The flex keys table has these columns: Column Description key_name The name of the virtual column (key). frequency The number of times the virtual column occurs in the map. HP Vertica Analytic Database (7.0.x) Page 808 of 1539 SQL Reference Manual SQL Functions Column Description data_ type_ guess The data type for each virtual column, cast to VARCHAR, LONG VARCHAR or LONG VARBINARY, depending on the length of the key, and whether the key includes one or more nested maps. In the _keys table output, the data_type_guess column values are also followed by a value in parentheses, such as varchar(20). 
The value indicates the padded width of the key column, as calculated by the longest field, multiplied by the FlexTableDataTypeGuessMultiplier configuration parameter value. For more information, see Setting Flex Table Parameters.
See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l MATERIALIZE_FLEXTABLE_COLUMNS l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW
COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW
Combines the functionality of BUILD_FLEXTABLE_VIEW and COMPUTE_FLEXTABLE_KEYS to compute virtual columns (keys) from the map data of a flex table and to construct a view. If you don't need to perform both operations together, use one of the single-operation functions.
Usage compute_flextable_keys_and_build_view('flex_table')
Arguments flex_table The name of a flex table.
Examples
The following example calls the function for the darkdata flex table.
kdb=> select compute_flextable_keys_and_build_view('darkdata');
                 compute_flextable_keys_and_build_view
-----------------------------------------------------------------------
 Please see public.darkdata_keys for updated keys
 The view public.darkdata_view is ready for querying
(1 row)
See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS l MATERIALIZE_FLEXTABLE_COLUMNS l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW
MATERIALIZE_FLEXTABLE_COLUMNS
Materializes virtual columns that are listed as key_names in the flextable_keys table. You can optionally indicate the number of columns to materialize, and use a keys table other than the default. If you do not specify the number of columns, the function materializes up to 50 virtual column key names. Calling this function requires that you first compute flex table keys using either COMPUTE_FLEXTABLE_KEYS or COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW.
Note: Materializing any virtual column into a real column with this function affects data storage limits. Each materialized column counts against the data storage limit of your HP Vertica Enterprise Edition (EE) license. This increase is reflected when HP Vertica next performs a license compliance audit. To manually check your EE license compliance, call the audit() function, described in the SQL Reference Manual.
Usage materialize_flextable_columns('flex_table' [, n-columns [, keys_table_name] ])
Arguments
flex_table The name of the flex table with columns to materialize. Specifying only the flex table name attempts to materialize up to 50 columns of key names in the default flex_table_keys table, skipping any columns already materialized. To materialize a specific number of columns, use the optional parameter n-columns, described next.
n-columns [Optional] The number of columns to materialize. The function attempts to materialize the number of columns from the flex_table_keys table, skipping any columns already materialized. HP Vertica tables support a total of 1600 columns, which is the greatest value you can specify for n-columns. The function orders the materialized results by frequency (descending) and then key_name when materializing the first n columns.
keys_table_name [Optional] The name of a flex_keys_table from which to materialize columns. The function attempts to materialize the number of columns (value of n-columns) from keys_table_name, skipping any columns already materialized.
The function orders the materialized results by frequency (descending) and then key_name when materializing the first n columns.
Examples
The following example loads a sample file of tweets (tweets_10000.json) into the flex table twitter_r. After loading data and computing keys for the sample flex table, the example calls materialize_flextable_columns to materialize the first four columns:
dbt=> copy twitter_r from '/home/release/KData/tweets_10000.json' parser fjsonparser();
 Rows Loaded
-------------
       10000
(1 row)
dbt=> select compute_flextable_keys ('twitter_r');
              compute_flextable_keys
---------------------------------------------------
 Please see public.twitter_r_keys for updated keys
(1 row)
dbt=> select materialize_flextable_columns('twitter_r', 4);
                        materialize_flextable_columns
-------------------------------------------------------------------------------
 The following columns were added to the table public.twitter_r:
        contributors
        entities.hashtags
        entities.urls
 For more details, run the following query:
 SELECT * FROM v_catalog.materialize_flextable_columns_results WHERE table_schema = 'public' and table_name = 'twitter_r';
(1 row)
The last message in the example recommends querying the materialize_flextable_columns_results system table for the results of materializing the columns. Following is an example of running that query:
dbt=> SELECT * FROM v_catalog.materialize_flextable_columns_results WHERE table_schema = 'public' and table_name = 'twitter_r';
     table_id      | table_schema | table_name |         creation_time         |     key_name      | status |                         message
-------------------+--------------+------------+-------------------------------+-------------------+--------+---------------------------------------------------------
 45035996273733172 | public       | twitter_r  | 2013-11-20 17:00:27.945484-05 | contributors      | ADDED  | Added successfully
 45035996273733172 | public       | twitter_r  | 2013-11-20 17:00:27.94551-05  | entities.hashtags | ADDED  | Added successfully
 45035996273733172 | public       | twitter_r  | 2013-11-20 17:00:27.945519-05 | entities.urls     | ADDED  | Added successfully
 45035996273733172 | public       | twitter_r  | 2013-11-20 17:00:27.945532-05 | created_at        | EXISTS | Column of same name already exists in table definition
(4 rows)
See the MATERIALIZE_FLEXTABLE_COLUMNS_RESULTS system table in the SQL Reference Manual.
See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW
RESTORE_FLEXTABLE_DEFAULT_KEYS_TABLE_AND_VIEW
Restores the _keys table and the _view, linking them with their associated flex table if either is dropped. This function notes whether it restores one or both.
Usage restore_flextable_default_keys_table_and_view('flex_table')
Arguments flex_table The name of the flex table.
Examples
This example invokes the function with an existing flex table, restoring both the _keys table and _view:
kdb=> select restore_flextable_default_keys_table_and_view('darkdata');
                   restore_flextable_default_keys_table_and_view
----------------------------------------------------------------------------------
 The keys table public.darkdata_keys was restored successfully.
 The view public.darkdata_view was restored successfully.
(1 row) This example shows the function restoring darkdata_view, but noting that darkdata_keys does not need restoring: kdb=> select restore_flextable_default_keys_table_and_view('darkdata'); restore_flextable_default_keys_table_and_view -----------------------------------------------------------------------------------------------The keys table public.darkdata_keys already exists and is linked to darkdata. The view public.darkdata_view was restored successfully. (1 row) The _keys table has no content after it is restored: kdb=> select * from darkdata_keys; key_name | frequency | data_type_guess ----------+-----------+----------------(0 rows) See Also l BUILD_FLEXTABLE_VIEW l COMPUTE_FLEXTABLE_KEYS l COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW l MATERIALIZE_FLEXTABLE_COLUMNS HP Vertica Analytic Database (7.0.x) Page 813 of 1539 SQL Reference Manual SQL Functions License Management Functions This section contains function that monitor HP Vertica license status and compliance. AUDIT Estimates the raw data size of a database, a schema, a projection, or a table as it is counted in an audit of the database size. The AUDIT function estimates the size using the same data sampling method as the audit that HP Vertica performs to determine if a database is compliant with the database size allowances in its license. The results of this function are not considered when HP Vertica determines whether the size of the database complies with the HP Vertica license's data allowance. See How HP Vertica Calculates Database Size in the Administrator's Guide for details. Note: This function can only audit the size of tables, projections, schemas, and databases which the user has permission to access. If a non-superuser attempts to audit the entire database, the audit will only estimate the size of the data that the user is allowed to read. Syntax AUDIT([name] [, granularity] [, error_tolerance [, confidence_level]]) Parameters name Specifies the schema, projection, or table to audit. Enter name as a string, in single quotes (''). If the name string is empty (''), the entire database is audited. HP Vertica Analytic Database (7.0.x) Page 814 of 1539 SQL Reference Manual SQL Functions granularity Indicates the level at which the audit reports its results. The recognized levels are: l 'schema' l 'table' l 'projection' By default, the granularity is the same level as name. For example, if name is a schema, then the size of the entire schema is reported. If you instead specify 'table' as the granularity, AUDIT reports the size of each table in the schema. The granularity must be finer than that of object: specifying 'schema' for an audit of a table has no effect. The results of an audit with a granularity are reported in the V_ CATALOG.USER_AUDITS system table. error_tolerance Specifies the percentage margin of error allowed in the audit estimate. Enter the tolerance value as a decimal number, between 0 and 100. The default value is 5, for a 5% margin of error. Note: The lower this value is, the more resources the audit uses since it will perform more data sampling. Setting this value to 0 results in a full audit of the database, which is very resource intensive, as all of the data in the database is analyzed. Doing a full audit of the database significantly impacts performance and is not recommended on a production database. confidence_level Specifies the statistical confidence level percentage of the estimate. Enter the confidence value as a decimal number, between 0 and 100. 
The default value is 99, indicating a confidence level of 99%. Note: The higher the confidence value, the more resources the function uses since it will perform more data sampling. Setting this value to 1 results in a full audit of the database, which is very resource intensive, as all of the database is analyzed. Doing a full audit of the database significantly impacts performance and is not recommended on a production database. Permissions l SELECT privilege on table l USAGE privilege on schema Note: AUDIT() works only on the tables where the user calling the function has SELECT permissions. HP Vertica Analytic Database (7.0.x) Page 815 of 1539 SQL Reference Manual SQL Functions Notes Due to the iterative sampling used in the auditing process, making the error tolerance a small fraction of a percent (0.00001, for example) can cause the AUDIT function to run for a longer period than a full database audit. Examples To audit the entire database: => SELECT AUDIT(''); AUDIT ---------76376696 (1 row) To audit the database with a 25% error tolerance: => SELECT AUDIT('',25); AUDIT ---------75797126 (1 row) To audit the database with a 25% level of tolerance and a 90% confidence level: => SELECT AUDIT('',25,90); AUDIT ---------76402672 (1 row) To audit just the online_sales schema in the VMart example database: VMart=> SELECT AUDIT('online_sales'); AUDIT ---------35716504 (1 row) To audit the online_sales schema and report the results by table: => SELECT AUDIT('online_sales','table'); AUDIT -----------------------------------------------------------------See table sizes in v_catalog.user_audits for schema online_sales (1 row) HP Vertica Analytic Database (7.0.x) Page 816 of 1539 SQL Reference Manual SQL Functions => \x Expanded display is on. => SELECT * FROM user_audits WHERE object_schema = 'online_sales'; -[ RECORD 1 ]-------------------------+-----------------------------size_bytes | 64960 user_id | 45035996273704962 user_name | dbadmin object_id | 45035996273717636 object_type | TABLE object_schema | online_sales object_name | online_page_dimension audit_start_timestamp | 2011-04-05 09:24:48.224081-04 audit_end_timestamp | 2011-04-05 09:24:48.337551-04 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 64960 confidence_interval_upper_bound_bytes | 64960 sample_count | 0 cell_count | 0 -[ RECORD 2 ]-------------------------+-----------------------------size_bytes | 20197 user_id | 45035996273704962 user_name | dbadmin object_id | 45035996273717640 object_type | TABLE object_schema | online_sales object_name | call_center_dimension audit_start_timestamp | 2011-04-05 09:24:48.340206-04 audit_end_timestamp | 2011-04-05 09:24:48.365915-04 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 20197 confidence_interval_upper_bound_bytes | 20197 sample_count | 0 cell_count | 0 -[ RECORD 3 ]-------------------------+-----------------------------size_bytes | 35614800 user_id | 45035996273704962 user_name | dbadmin object_id | 45035996273717644 object_type | TABLE object_schema | online_sales object_name | online_sales_fact audit_start_timestamp | 2011-04-05 09:24:48.368575-04 audit_end_timestamp | 2011-04-05 09:24:48.379307-04 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | t confidence_interval_lower_bound_bytes | 34692956 confidence_interval_upper_bound_bytes | 36536644 sample_count | 10000 cell_count | 9000000 HP Vertica Analytic Database (7.0.x) 
Page 817 of 1539 SQL Reference Manual SQL Functions AUDIT_FLEX Estimates the ROS size of one or more flexible tables contained in a database, schema, or projection. Use this function for flex tables only. Invoking audit_flex() with a columnar table results in an error. The audit_flex() function measures encoded, compressed data stored in ROS containers for the __raw__ column of one or more flexible tables. The function does not audit other flex table columns that are created as, or promoted to, real columns. Temporary flex tables are not included in the audit. Each time a user calls audit_flex(), HP Vertica stores the results in the V_CATALOG.USER_ AUDITS system table. Syntax AUDIT_FLEX (name) Parameters name Specifies what database entity to audit. Enter the entity name as a string in single quotes (''), as follows: l Empty string ('') — Return the size of the ROS containers for all flexible tables in the database. You cannot enter the database name. l Schema name ('schema_name') — Return the size of all __raw__ columns of flexible tables in schema_name. l A projection name ('proj_name') — Return the ROS size of a projection for a __raw__ column. l A flex table name ('flex_table_name') — Return the ROS size of a flex table's __ raw__ column. Permissions l SELECT privilege on table l USAGE privilege on schema Note: AUDIT_FLEX() works only on the flexible tables, projections, schemas, and databases to which the user has permissions. HP Vertica Analytic Database (7.0.x) Page 818 of 1539 SQL Reference Manual SQL Functions Examples To audit the flex tables in the database: dbs=> select audit_flex(''); audit_flex -----------8567679 (1 row) To audit the flex tables in a specific schema, such as public: dbs=> select audit_flex('public'); audit_flex -----------8567679 (1 row) To audit the flex tables in a specific projection, such as bakery_b0: dbs=> select audit_flex('bakery_b0'); audit_flex -----------8566723 (1 row) To audit a flex table, such as bakery: dbs=> select audit_flex('bakery'); audit_flex -----------8566723 (1 row) To report the results of all audits saved in the USER_AUDITS, the following shows part of an extended display from the system table showing an audit run on a schema called test, and the entire database, dbs: dbs=> \x Expanded display is on. dbs=> select * from user_audits; -[ RECORD 1 ]-------------------------+-----------------------------size_bytes | 0 user_id | 45035996273704962 user_name | release object_id | 45035996273736664 object_type | SCHEMA object_schema | object_name | test HP Vertica Analytic Database (7.0.x) Page 819 of 1539 SQL Reference Manual SQL Functions audit_start_timestamp | 2014-02-04 14:52:15.126592-05 audit_end_timestamp | 2014-02-04 14:52:15.139475-05 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 0 confidence_interval_upper_bound_bytes | 0 sample_count | 0 cell_count | 0 -[ RECORD 2 ]-------------------------+-----------------------------size_bytes | 38051 user_id | 45035996273704962 user_name | release object_id | 45035996273704974 object_type | DATABASE object_schema | object_name | dbs audit_start_timestamp | 2014-02-05 13:44:41.11926-05 audit_end_timestamp | 2014-02-05 13:44:41.227035-05 confidence_level_percent | 99 error_tolerance_percent | 5 used_sampling | f confidence_interval_lower_bound_bytes | 38051 confidence_interval_upper_bound_bytes | 38051 sample_count | 0 cell_count | 0 -[ RECORD 3 ]-------------------------+-----------------------------. . . 
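Because AUDIT_FLEX() measures only the encoded, compressed __raw__ column, while AUDIT() (described earlier in this section) estimates raw data size as it is counted for license purposes, running both functions against the same flex table normally returns different numbers. The following minimal sketch reuses the bakery table from the examples above; the byte counts returned depend entirely on your data:
dbs=> select audit('bakery');       -- licensed raw-data estimate for the table
dbs=> select audit_flex('bakery');  -- ROS size of the table's __raw__ column only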
AUDIT_LICENSE_SIZE Triggers an immediate audit of the database size to determine if it is in compliance with the raw data storage allowance included in your HP Vertica license. The audit is performed in the background, so this function call returns immediately. To see the results of the audit when it is done, use the GET_ COMPLIANCE_STATUS function. Syntax AUDIT_LICENSE_SIZE() Privileges Must be a superuser. Example => SELECT audit_license_size(); HP Vertica Analytic Database (7.0.x) Page 820 of 1539 SQL Reference Manual SQL Functions audit_license_size -------------------Service hurried (1 row) AUDIT_LICENSE_TERM Triggers an immediate audit to determine if the HP Vertica license has expired. The audit happens in the background, so this function returns immediately. To see the result of the audit, use the GET_ COMPLIANCE_STATUS function. Syntax AUDIT_LICENSE_TERM() Privileges Must be a superuser. Example => SELECT AUDIT_LICENSE_TERM(); AUDIT_LICENSE_TERM -------------------Service hurried (1 row) GET_AUDIT_TIME Reports the time when the automatic audit of database size occurs. HP Vertica performs this audit if your HP Vertica license includes a data size allowance. For details of this audit, see Managing Your License Key in the Administrator's Guide. To change the time the audit runs, use the SET_ AUDIT_TIME function. Syntax GET_AUDIT_TIME() Privileges None HP Vertica Analytic Database (7.0.x) Page 821 of 1539 SQL Reference Manual SQL Functions Example => SELECT get_audit_time(); get_audit_time ----------------------------------------------------The audit is scheduled to run at 11:59 PM each day. (1 row) GET_COMPLIANCE_STATUS Displays whether your database is in compliance with your HP Vertica license agreement. This information includes the results of HP Vertica's most recent audit of the database size (if your license has a data allowance as part of its terms), and the license term (if your license has an end date). The information displayed by GET_COMPLIANCE_STATUS includes: l The estimated size of the database (see How HP Vertica Calculates Database Size in the Administrator's Guide for an explanation of the size estimate). l The raw data size allowed by your HP Vertica license. l The percentage of your allowance that your database is currently using. l The date and time of the last audit. l Whether your database complies with the data allowance terms of your license agreement. l The end date of your license. l How many days remain until your license expires. Note: If your license does not have a data allowance or end date, some of the values may not appear in the output for GET_COMPLIANCE_STATUS. If the audit shows your license is not in compliance with your data allowance, you should either delete data to bring the size of the database under the licensed amount, or upgrade your license. If your license term has expired, you should contact HP immediately to renew your license. See Managing Your License Key in the Administrator's Guide for further details. Syntax GET_COMPLIANCE_STATUS() Privileges None HP Vertica Analytic Database (7.0.x) Page 822 of 1539 SQL Reference Manual SQL Functions Example GET_COMPLIANCE_STATUS --------------------------------------------------------------------------------Raw Data Size: 2.00GB +/- 0.003GB License Size : 4.000GB Utilization : 50% Audit Time : 2011-03-09 09:54:09.538704+00 Compliance Status : The database is in compliance with respect to raw data size. 
License End Date: 04/06/2011 Days Remaining: 28.59 (1 row) DISPLAY_LICENSE Returns the terms of your HP Vertica license. The information this function displays is: l The start and end dates for which the license is valid (or "Perpetual" if the license has no expiration). l The number of days you are allowed to use HP Vertica after your license term expires (the grace period) l The amount of data your database can store, if your license includes a data allowance. Syntax DISPLAY_LICENSE() Privileges None Examples => SELECT DISPLAY_LICENSE(); DISPLAY_LICENSE ---------------------------------------------------HP Vertica Systems, Inc. 1/1/2011 12/31/2011 30 50TB (1 row) HP Vertica Analytic Database (7.0.x) Page 823 of 1539 SQL Reference Manual SQL Functions SET_AUDIT_TIME Sets the time that HP Vertica performs automatic database size audit to determine if the size of the database is compliant with the raw data allowance in your HP Vertica license. Use this function if the audits are currently scheduled to occur during your database's peak activity time. This is normally not a concern, since the automatic audit has little impact on database performance. Audits are scheduled by the preceding audit, so changing the audit time does not affect the next scheduled audit. For example, if your next audit is scheduled to take place at 11:59PM and you use SET_AUDIT_TIME to change the audit schedule 3AM, the previously scheduled 11:59PM audit still runs. As that audit finishes, it schedules the next audit to occur at 3AM. If you want to prevent the next scheduled audit from running at its scheduled time, you can change the scheduled time using SET_AUDIT_TIME then manually trigger an audit to run immediately using AUDIT_LICENSE_SIZE. As the manually-triggered audit finishes, it schedules the next audit to occur at the time you set using SET_AUDIT_TIME (effectively overriding the previously scheduled audit). Syntax SET_AUDIT_TIME(time) time A string containing the time in 'HH:MM AM/PM' format (for example, '1:00 AM') when the audit should run daily. Privileges Must be a superuser. Example => SELECT SET_AUDIT_TIME('3:00 AM'); SET_AUDIT_TIME ----------------------------------------------------------------------The scheduled audit time will be set to 3:00 AM after the next audit. (1 row) HP Vertica Analytic Database (7.0.x) Page 824 of 1539 SQL Reference Manual SQL Functions Partition Management Functions This section contains partition management functions specific to HP Vertica. DROP_PARTITION Forces the partition of projections (if needed) and then drops the specified partition. Syntax DROP_PARTITION ( table_name , partition_value [ , ignore_moveout_errors, reorganize_data ]) Parameters table-name Specifies the name of the table. Note: The table_name argument cannot be used as a dimension table in a pre-joined projection and cannot contain projections that are not up to date (have not been refreshed). partition_value The key of the partition to drop. For example: DROP_PARTITION ('trade', 2006); ignore_moveout_errors Optional Boolean, defaults to false. l true—Ignores any WOS moveout errors and forces the operation to continue. Set this parameter to true only if there is no WOS data for the partition. l false (or omit)—Displays any moveout errors and aborts the operation on error. Note: If you set this parameter to true and the WOS includes data for the partition in WOS, partition data in WOS is not dropped. reorganize_data Optional Boolean, defaults to false. 
l true—Reorganizes the data if it is not organized, and then drops the partition. l false—Does not attempt to reorganize the data before dropping the partition. If this parameter is false and the function encounters a ROS without partition keys, an error occurs.
Permissions l Table owner l USAGE privilege on schema that contains the table
Notes and Restrictions
The results of a DROP_PARTITION call go into effect immediately. If you drop a partition using DROP_PARTITION and then try to add data to a partition with the same name, HP Vertica creates a new partition.
If the operation cannot obtain an O Lock on the table(s), HP Vertica attempts to close any internal Tuple Mover (TM) sessions running on the same table(s) so that the operation can proceed. Explicit TM operations that are running in user sessions are not closed. If an explicit TM operation is running on the table, then the operation cannot proceed until the explicit TM operation completes.
In general, if a ROS container has data that belongs to n+1 partitions and you want to drop a specific partition, the DROP_PARTITION operation: 1. Forces the partition of data into two containers: one container holds the data that belongs to the partition that is to be dropped, and another container holds the remaining n partitions. 2. Drops the specified partition.
DROP_PARTITION forces a moveout if there is data in the WOS (WOS is not partition aware). DROP_PARTITION acquires an exclusive lock on the table to prevent DELETE | UPDATE | INSERT | COPY statements from affecting the table, as well as any SELECT statements issued at SERIALIZABLE isolation level. You cannot perform a DROP_PARTITION operation on tables with projections that are not up to date (have not been refreshed). DROP_PARTITION fails if you do not set the optional reorganize_data parameter to true and the function encounters ROS containers that do not have partition keys.
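Because DROP_PARTITION forces a moveout when the partition still has data in the WOS, you may prefer to run the moveout explicitly before dropping. The following minimal sketch assumes the trade table used in the examples that follow, and uses DO_TM_TASK, which is listed under See Also below:
=> SELECT DO_TM_TASK('moveout', 'trade');  -- move WOS data for the table into ROS first
=> SELECT DROP_PARTITION('trade', 2009);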
Examples Using the example schema in Defining Partitions, the following command explicitly drops the 2009 partition key from table trade: SELECT DROP_PARTITION('trade', 2009); DROP_PARTITION ------------------- HP Vertica Analytic Database (7.0.x) Page 826 of 1539 SQL Reference Manual SQL Functions Partition dropped (1 row) Here, the partition key is specified: SELECT DROP_PARTITION('trade', EXTRACT('year' FROM '2009-01-01'::date)); DROP_PARTITION ------------------Partition dropped (1 row) The following example creates a table called dates and partitions the table by year: CREATE TABLE dates (year INTEGER NOT NULL, month VARCHAR(8) NOT NULL) PARTITION BY year * 12 + month; The following statement drops the partition using a constant for Oct 2010 (2010*12 + 10 = 24130): SELECT DROP_PARTITION('dates', '24130'); DROP_PARTITION ------------------Partition dropped (1 row) Alternatively, the expression can be placed in line: SELECT DROP_PARTITION('dates', 2010*12 + 10); The following command first reorganizes the data if it is unpartitioned and then explicitly drops the 2009 partition key from table trade: SELECT DROP_PARTITION('trade', 2009, false, true); DROP_PARTITION ------------------Partition dropped (1 row) See Also l Dropping Partitions l ADVANCE_EPOCH l ALTER PROJECTION RENAME l COLUMN_STORAGE l CREATE TABLE HP Vertica Analytic Database (7.0.x) Page 827 of 1539 SQL Reference Manual SQL Functions l DO_TM_TASK l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l MERGE_PARTITIONS l PARTITION_PROJECTION l PARTITION_TABLE l PROJECTIONS DUMP_PROJECTION_PARTITION_KEYS Dumps the partition keys of the specified projection. Syntax DUMP_PROJECTION_PARTITION_KEYS( 'projection_name' ) Parameters projection_name Specifies the name of the projection. Privileges l SELECT privilege on table l USAGE privileges on schema Example The following example creates a simple table called states and partitions the data by state: => CREATE TABLE states (year INTEGER NOT NULL, state VARCHAR NOT NULL) PARTITION BY state; => CREATE PROJECTION states_p (state, year) AS SELECT * FROM states ORDER BY state, year UNSEGMENTED ALL NODES; Now dump the partition key of the specified projection: HP Vertica Analytic Database (7.0.x) Page 828 of 1539 SQL Reference Manual SQL Functions => SELECT DUMP_PROJECTION_PARTITION_KEYS( 'states_p_node0001' ); Partition keys on node helios_node0001 Projection 'states_p_node0001' No of partition keys: 1 Partition keys on node helios_node0002 ... (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l PARTITION_TABLE l PROJECTIONS l Working with Table Partitions DUMP_TABLE_PARTITION_KEYS Dumps the partition keys of all projections anchored on the specified table. Syntax DUMP_TABLE_PARTITION_KEYS ( 'table_name' ) Parameters table_name Specifies the name of the table. 
Privilege l SELECT privilege on table l USAGE privileges on schema HP Vertica Analytic Database (7.0.x) Page 829 of 1539 SQL Reference Manual SQL Functions Examples The following example creates a simple table called states and partitions the data by state: => CREATE TABLE states (year INTEGER NOT NULL, state VARCHAR NOT NULL) PARTITION BY state; => CREATE PROJECTION states_p (state, year) AS SELECT * FROM states ORDER BY state, year UNSEGMENTED ALL NODES; Now dump the partition keys of all projections anchored on table states: => SELECT DUMP_TABLE_PARTITION_KEYS( 'states' ); Partition keys on helios_node0001 Projection 'states_p_node0004' No of partition keys: 1 Projection 'states_p_node0003' No of partition keys: 1 Projection 'states_p_node0002' No of partition keys: 1 Projection 'states_p_node0001' No of partition keys: 1 Partition keys on helios_node0002 ... (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l PARTITION_TABLE l Working with Table Partitions MERGE_PARTITIONS Merges ROS containers that have data belonging to partitions in a specified partition key range: partitionKeyFromto partitionKeyTo. HP Vertica Analytic Database (7.0.x) Page 830 of 1539 SQL Reference Manual SQL Functions Note: This function is deprecated in HP Vertica 7.0. Syntax MERGE_PARTITIONS ( table_name , partition_key_from , partition_key_to ) Parameters table_name Specifies the name of the table partition_key_from Specifies the start point of the partition partition_key_to Specifies the end point of the partition Privileges l Table owner l USAGE privilege on schema that contains the table Notes l You cannot run MERGE_PARTITIONS() on a table with data that is not reorganized. You must reorganize the data first using ALTER_TABLE table REORGANIZE, or PARTITION_TABLE(table). l The edge values are included in the range, and partition_key_from must be less than or equal to partition_key_to. l Inclusion of partitions in the range is based on the application of less than (<)/greater than (>) operators of the corresponding data type. Note: No restrictions are placed on a partition key's data type. l If partition_key_from is the same as partition_key_to, all ROS containers of the partition key are merged into one ROS. Examples => => => => => SELECT SELECT SELECT SELECT SELECT MERGE_PARTITIONS('T1', MERGE_PARTITIONS('T1', MERGE_PARTITIONS('T1', MERGE_PARTITIONS('T1', MERGE_PARTITIONS('T1', HP Vertica Analytic Database (7.0.x) '200', '400'); '800', '800'); 'CA', 'MA'); 'false', 'true'); '06/06/2008', '06/07/2008'); Page 831 of 1539 SQL Reference Manual SQL Functions => SELECT MERGE_PARTITIONS('T1', '02:01:10', '04:20:40'); => SELECT MERGE_PARTITIONS('T1', '06/06/2008 02:01:10', '06/07/2008 02:01:10'); => SELECT MERGE_PARTITIONS('T1', '8 hours', '1 day 4 hours 20 seconds'); MOVE_PARTITIONS_TO_TABLE Moves partitions from a source table to a target table. The target table must have the same projection column definitions, segmentation, and partition expressions as the source table. If the target table does not exist, the function creates a new table based on the source definition. The function requires both minimum and maximum range values, indicating what partition values to move. Syntax MOVE_PARTITIONS_TO_TABLE ( '[[db-name.]schema.]source_table', 'min_range_value', 'max_range_value', '[[db-name.]schema.]target_table' ) Parameters [[db-name.]schema.]source_table The source table (optionally qualified), from which you want to move partitions. 
min_range_value The minimum value in the partition to move. max_range_value The maximum value of the partition being moved. target_table The table to which the partitions are being moved. Privileges l Table owner l If target table is created as part of moving partitions, the new table has the same owner as the target. If the target table exists, user must have own the target table, and have ability to call this function. Example If you call MOVE_PARTITIONS_TO_TABLE and the destination table does not exist, the function will create the table automatically: VMART=> SELECT MOVE_PARTITIONS_TO_TABLE ( 'prod_trades', '200801', '200801', HP Vertica Analytic Database (7.0.x) Page 832 of 1539 SQL Reference Manual SQL Functions 'partn_backup.trades_200801'); MOVE_PARTITIONS_TO_TABLE --------------------------------------------------------------------------1 distinct partition values moved at epoch 15. Effective move epoch: 14. (1 row) See Also l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l Moving Partitions l Creating a Table Like Another PARTITION_PROJECTION Forces a split of ROS containers of the specified projection. Syntax PARTITION_PROJECTION ( '[[db-name.]schema.]projection_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection_name Specifies the name of the projection. HP Vertica Analytic Database (7.0.x) Page 833 of 1539 SQL Reference Manual SQL Functions Privileges l Table owner l USAGE privilege on schema Notes Partitioning expressions take immutable functions only, in order that the same information be available across all nodes. PARTITION_PROJECTION() is similar to PARTITION_TABLE(), except that PARTITION_ PROJECTION works only on the specified projection, instead of the table. Users must have USAGE privilege on schema that contains the table. PARTITION_PROJECTION() purges data while partitioning ROS containers if deletes were applied before the AHM epoch. Example The following command forces a split of ROS containers on the states_p_node01 projection: => SELECT PARTITION_PROJECTION ('states_p_node01'); partition_projection -----------------------Projection partitioned (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_TABLE l Working with Table Partitions HP Vertica Analytic Database (7.0.x) Page 834 of 1539 SQL Reference Manual SQL Functions PARTITION_TABLE Forces the system to break up any ROS containers that contain multiple distinct values of the partitioning expression. Only ROS containers with more than one distinct value participate in the split. Syntax PARTITION_TABLE ( '[[db-name.]schema.]table_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. 
Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name Specifies the name of the table. Privileges l Table owner l USAGE privilege on schema Notes PARTITION_TABLE is similar to PARTITION_PROJECTION, except that PARTITION_TABLE works on the specified table. Users must have USAGE privilege on schema that contains the table. Partitioning functions take immutable functions only, in order that the same information be available across all nodes. Example The following example creates a simple table called states and partitions data by state. => CREATE TABLE states (year INTEGER NOT NULL, HP Vertica Analytic Database (7.0.x) Page 835 of 1539 SQL Reference Manual SQL Functions state VARCHAR NOT NULL) PARTITION BY state; => CREATE PROJECTION states_p (state, year) AS SELECT * FROM states ORDER BY state, year UNSEGMENTED ALL NODES; Now call the PARTITION_TABLE function to partition table states: => SELECT PARTITION_TABLE('states'); PARTITION_TABLE ------------------------------------------------------partition operation for projection 'states_p_node0004' partition operation for projection 'states_p_node0003' partition operation for projection 'states_p_node0002' partition operation for projection 'states_p_node0001' (1 row) See Also l DO_TM_TASK l DROP_PARTITION l DUMP_PARTITION_KEYS l DUMP_PROJECTION_PARTITION_KEYS l DUMP_TABLE_PARTITION_KEYS l PARTITION_PROJECTION l Working with Table Partitions PURGE_PARTITION Purges a table partition of deleted rows. Similar to PURGE() and PURGE_PROJECTION(), this function removes deleted data from physical storage so you can reuse the disk space. PURGE_PARTITION() removes data from the AHM epoch and earlier only. Syntax PURGE_PARTITION ( '[[db_name.]schema_name.]table_name', partition_key ) HP Vertica Analytic Database (7.0.x) Page 836 of 1539 SQL Reference Manual SQL Functions Parameters [[db_name.]schema_name.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name The name of the partitioned table partition_key The key of the partition to be purged of deleted rows Privileges l Table owner l USAGE privilege on schema Example The following example lists the count of deleted rows for each partition in a table, then calls PURGE_ PARTITION() to purge the deleted rows from the data. => SELECT partition_key,table_schema,projection_name,sum(deleted_row_count) AS deleted_row_count FROM partitions GROUP BY partition_key,table_schema,projection_name ORDER BY partition_key; partition_key | table_schema | projection_name | deleted_row_count ---------------+--------------+-----------------+------------------0 | public | t_super | 2 1 | public | t_super | 2 2 | public | t_super | 2 3 | public | t_super | 2 4 | public | t_super | 2 5 | public | t_super | 2 6 | public | t_super | 2 7 | public | t_super | 2 8 | public | t_super | 2 9 | public | t_super | 1 (10 rows) => SELECT PURGE_PARTITION('t',5); -- Purge partition with key 5. 
purge_partition -----------------------------------------------------------------------Task: merge partitions HP Vertica Analytic Database (7.0.x) Page 837 of 1539 SQL Reference Manual SQL Functions (Table: public.t) (Projection: public.t_super) (1 row) => SELECT partition_key,table_schema,projection_name,sum(deleted_row_count) AS deleted_row_count FROM partitions GROUP BY partition_key,table_schema,projection_name ORDER BY partition_key; partition_key | table_schema | projection_name | deleted_row_count ---------------+--------------+-----------------+------------------0 | public | t_super | 2 1 | public | t_super | 2 2 | public | t_super | 2 3 | public | t_super | 2 4 | public | t_super | 2 5 | public | t_super | 0 6 | public | t_super | 2 7 | public | t_super | 2 8 | public | t_super | 2 9 | public | t_super | 1 (10 rows) See Also l MERGE_PARTITIONS l PURGE l PURGE_PROJECTION l PURGE_TABLE l STORAGE_CONTAINERS HP Vertica Analytic Database (7.0.x) Page 838 of 1539 SQL Reference Manual SQL Functions Profiling Functions This section contains profiling functions specific to HP Vertica. CLEAR_PROFILING HP Vertica stores profiled data is in memory, so depending on how much data you collect, profiling could be memory intensive. You can use this function to clear profiled data from memory. Syntax CLEAR_PROFILING( 'profiling-type' ) Parameters profiling-type The type of profiling data you want to clear. Can be one of: l session—clears profiling for basic session parameters and lock time out data l query—clears profiling for general information about queries that ran, such as the query strings used and the duration of queries l ee—clears profiling for information about the execution run of each query Example The following statement clears profiled data for queries: => SELECT CLEAR_PROFILING('query'); See Also l DISABLE_PROFILING l ENABLE_PROFILING l Profiling Database Performance DISABLE_PROFILING Disables profiling for the profiling type you specify. HP Vertica Analytic Database (7.0.x) Page 839 of 1539 SQL Reference Manual SQL Functions Syntax DISABLE_PROFILING( 'profiling-type' ) Parameters profiling-type The type of profiling data you want to disable. Can be one of: l session—disables profiling for basic session parameters and lock time out data l query—disables profiling for general information about queries that ran, such as the query strings used and the duration of queries l ee—disables profiling for information about the execution run of each query Example The following statement disables profiling on query execution runs: => SELECT DISABLE_PROFILING('ee'); DISABLE_PROFILING ----------------------EE Profiling Disabled (1 row) See Also l CLEAR_PROFILING l ENABLE_PROFILING l Profiling Database Performance ENABLE_PROFILING Enables profiling for the profiling type you specify. Note: HP Vertica stores profiled data is in memory, so depending on how much data you collect, profiling could be memory intensive. Syntax ENABLE_PROFILING( 'profiling-type' ) HP Vertica Analytic Database (7.0.x) Page 840 of 1539 SQL Reference Manual SQL Functions Parameters profiling-type The type of profiling data you want to enable. 
Can be one of: l session—enables profiling for basic session parameters and lock time out data l query—enables profiling for general information about queries that ran, such as the query strings used and the duration of queries l ee—enables profiling for information about the execution run of each query Example The following statement enables profiling on query execution runs: => SELECT ENABLE_PROFILING('ee'); ENABLE_PROFILING ---------------------EE Profiling Enabled (1 row) See Also l CLEAR_PROFILING l DISABLE_PROFILING l Profiling Database Performance HP Vertica Analytic Database (7.0.x) Page 841 of 1539 SQL Reference Manual SQL Functions Projection Management Functions This section contains projection management functions specific to HP Vertica. See also the following SQL system tables: l V_CATALOG.PROJECTIONS l V_CATALOG.PROJECTION_COLUMNS l V_MONITOR.PROJECTION_REFRESHES l V_MONITOR.PROJECTION_STORAGE EVALUATE_DELETE_PERFORMANCE Evaluates projections for potential DELETE performance issues. If there are issues found, a warning message is displayed. For steps you can take to resolve delete and update performance issues, see Optimizing Deletes and Updates for Performance in the Administrator's Guide. This function uses data sampling to determine whether there are any issues with a projection. Therefore, it does not generate false-positives warnings, but it can miss some cases where there are performance issues. Note: Optimizing for delete performance is the same as optimizing for update performance. So, you can use this function to help optimize a projection for updates as well as deletes. Syntax EVALUATE_DELETE_PERFORMANCE ( 'target' ) Parameters target The name of a projection or table. If you supply the name of a projection, only that projection is evaluated for DELETE performance issues. If you supply the name of a table, then all of the projections anchored to the table will be evaluated for issues. If you do not provide a projection or table name, EVALUATE_DELETE_ PERFORMANCE examines all of the projections that you can access for DELETE performance issues. Depending on the size you your database, this may take a long time. Privileges None HP Vertica Analytic Database (7.0.x) Page 842 of 1539 SQL Reference Manual SQL Functions Notes When evaluating multiple projections, EVALUATE_DELETE_PERFORMANCE reports up to ten projections that have issues, and refers you to a table that contains the full list of issues it has found. Example The following example demonstrates how you can use EVALUATE_DELETE_PERFORMANCE to evaluate your projections for slow DELETE performance. => create table example (A int, B int,C int); CREATE TABLE => create projection one_sort (A,B,C) as (select A,B,C from example) order by A; CREATE PROJECTION => create projection two_sort (A,B,C) as (select A,B,C from example) order by A,B; CREATE PROJECTION => select evaluate_delete_performance('one_sort'); evaluate_delete_performance --------------------------------------------------No projection delete performance concerns found. (1 row) => select evaluate_delete_performance('two_sort'); evaluate_delete_performance --------------------------------------------------No projection delete performance concerns found. (1 row) The previous example showed that there was no structural issue with the projection that would cause poor DELETE performance. However, the data contained within the projection can create potential delete issues if the sorted columns do not uniquely identify a row or small number of rows. 
In the following example, Perl is used to populate the table with data using a nested series of loops. The inner loop populates column C, the middle loop populates column B, and the outer loop populates column A. The result is column A contains only three distinct values (0, 1, and 2), while column B slowly varies between 20 and 0 and column C changes in each row. EVALUATE_ DELETE_PERFORMANCE is run against the projections again to see if the data within the projections causes any potential DELETE performance issues. => \! perl -e 'for ($i=0; $i<3; $i++) { for ($j=0; $j<21; $j++) { for ($k=0; $k<19; $k++) { printf "%d,%d,%d\n", $i,$j,$k;}}}' | /opt/vertica/bin/vsql -c "copy example from stdin delimiter ',' direct;" Password: => select * from example; A | B | C ---+----+---0 | 20 | 18 0 | 20 | 17 0 | 20 | 16 0 | 20 | 15 0 | 20 | 14 HP Vertica Analytic Database (7.0.x) Page 843 of 1539 SQL Reference Manual SQL Functions 0 | 20 | 13 0 | 20 | 12 0 | 20 | 11 0 | 20 | 10 0 | 20 | 9 0 | 20 | 8 0 | 20 | 7 0 | 20 | 6 0 | 20 | 5 0 | 20 | 4 0 | 20 | 3 0 | 20 | 2 0 | 20 | 1 0 | 20 | 0 0 | 19 | 18 1157 rows omitted 2 | 1 | 0 2 | 0 | 18 2 | 0 | 17 2 | 0 | 16 2 | 0 | 15 2 | 0 | 14 2 | 0 | 13 2 | 0 | 12 2 | 0 | 11 2 | 0 | 10 2 | 0 | 9 2 | 0 | 8 2 | 0 | 7 2 | 0 | 6 2 | 0 | 5 2 | 0 | 4 2 | 0 | 3 2 | 0 | 2 2 | 0 | 1 2 | 0 | 0 => SELECT COUNT (*) FROM example; COUNT ------1197 (1 row) => SELECT COUNT (DISTINCT A) FROM example; COUNT ------3 (1 row) => select evaluate_delete_performance('one_sort'); evaluate_delete_performance --------------------------------------------------Projection exhibits delete performance concerns. (1 row) release=> select evaluate_delete_performance('two_sort'); evaluate_delete_performance --------------------------------------------------No projection delete performance concerns found. (1 row) HP Vertica Analytic Database (7.0.x) Page 844 of 1539 SQL Reference Manual SQL Functions The one_sort projection has potential delete issues since it only sorts on column A which has few distinct values. This means that each value in the sort column corresponds to many rows in the projection, which negatively impacts DELETE performance. Since the two_sort projection is sorted on columns A and B, each combination of values in the two sort columns identifies just a few rows, allowing deletes to be performed faster. Not supplying a projection name results in all of the projections you can access being evaluated for DELETE performance issues. => select evaluate_delete_performance(); evaluate_delete_performance --------------------------------------------------------------------------The following projection exhibits delete performance concerns: "public"."one_sort" See v_catalog.projection_delete_concerns for more details. (1 row) GET_PROJECTION_STATUS Returns information relevant to the status of a projection. Syntax GET_PROJECTION_STATUS ( '[[db-name.]schema-name.]projection' ); Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection Is the name of the projection for which to display status. 
When using more than one schema, specify the schema that contains the projection, as noted above. Privileges None Description GET_PROJECTION_STATUS returns information relevant to the status of a projection: HP Vertica Analytic Database (7.0.x) Page 845 of 1539 SQL Reference Manual SQL Functions l The current K-safety status of the database l The number of nodes in the database l Whether the projection is segmented l The number and names of buddy projections l Whether the projection is safe l Whether the projection is up-to-date l Whether statistics have been computed for the projection Notes l You can use GET_PROJECTION_STATUS to monitor the progress of a projection data refresh. See ALTER PROJECTION. l To view a list of the nodes in a database, use the View Database Command in the Administration Tools. Examples => SELECT GET_PROJECTION_STATUS('public.customer_dimension_site01'); GET_PROJECTION_STATUS ---------------------------------------------------------------------------------------------Current system K is 1. # of Nodes: 4. public.customer_dimension_site01 [Segmented: No] [Seg Cols: ] [K: 3] [public.customer_dim ension_site04, public.customer_dimension_site03, public.customer_dimension_site02] [Safe: Yes] [UptoDate: Yes][Stats: Yes] See Also l ALTER PROJECTION RENAME l GET_PROJECTIONS, GET_TABLE_PROJECTIONS GET_PROJECTIONS, GET_TABLE_PROJECTIONS Note: This function was formerly named GET_TABLE_PROJECTIONS(). HP Vertica still supports the former function name. Returns information relevant to the status of a table: HP Vertica Analytic Database (7.0.x) Page 846 of 1539 SQL Reference Manual SQL Functions l The current K-safety status of the database l The number of sites (nodes) in the database l The number of projections for which the specified table is the anchor table l For each projection: n The projection's buddy projections n Whether the projection is segmented n Whether the projection is safe n Whether the projection is up-to-date Syntax GET_PROJECTIONS ( '[[db-name.]schema-name.]table' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table Is the name of the table for which to list projections. When using more than one schema, specify the schema that contains the table. Privileges None Notes l You can use GET_PROJECTIONS to monitor the progress of a projection data refresh. See ALTER PROJECTION. l To view a list of the nodes in a database, use the View Database Command in the Administration Tools. HP Vertica Analytic Database (7.0.x) Page 847 of 1539 SQL Reference Manual SQL Functions Examples The following example gets information about the store_dimension table in the VMart schema: => SELECT GET_PROJECTIONS('store.store_dimension'); -------------------------------------------------------------------------------------Current system K is 1. # of Nodes: 4. Table store.store_dimension has 4 projections. 
Projection Name: [Segmented] [Seg Cols] [# of Buddies] [Buddy Projections] [Safe] [UptoDa te] ---------------------------------------------------------store.store_dimension_node0004 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0003, store.store_dimension_node0002, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes][Stats: Yes] store.store_dimension_node0003 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0004, store.store_dimension_node0002, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes][Stats: Yes] store.store_dimension_node0002 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0004, store.store_dimension_node0003, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes][Stats: Yes] store.store_dimension_node0001 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimensio n_node0004, store.store_dimension_node0003, store.store_dimension_node0002] [Safe: Yes] [UptoDate: Yes][Stats: Yes] (1 row) See Also l ALTER PROJECTION RENAME l GET_PROJECTION_STATUS REFRESH Performs a synchronous, optionally-targeted refresh of a specified table's projections. Information about a refresh operation—whether successful or unsuccessful—is maintained in the PROJECTION_REFRESHES system table until either the CLEAR_PROJECTION_ REFRESHES() function is executed or the storage quota for the table is exceeded. The PROJECTION_REFRESHES.IS_EXECUTING column returns a boolean value that indicates whether the refresh is currently running (t) or occurred in the past (f). Syntax REFRESH ( '[[db-name.]schema.]table_name [ , ... ]' ) HP Vertica Analytic Database (7.0.x) Page 848 of 1539 SQL Reference Manual SQL Functions Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name Specifies the name of a specific table containing the projections to be refreshed. The REFRESH() function attempts to refresh all the tables provided as arguments in parallel. Such calls will be part of the Database Designer deployment (and deployment script). When using more than one schema, specify the schema that contains the table, as noted above. Returns Column Name Description Projection Name The name of the projection that is targeted for refresh. Anchor Table The name of the projection's associated anchor table. Status The status of the projection: l Queued—Indicates that a projection is queued for refresh. l Refreshing—Indicates that a refresh for a projection is in process. l Refreshed—Indicates that a refresh for a projection has successfully completed. l Failed—Indicates that a refresh for a projection did not successfully complete. HP Vertica Analytic Database (7.0.x) Page 849 of 1539 SQL Reference Manual SQL Functions Refresh Method The method used to refresh the projection: l Buddy—Uses the contents of a buddy to refresh the projection. This method maintains historical data. This enables the projection to be used for historical queries. l Scratch—Refreshes the projection without using a buddy. This method does not generate historical data. 
This means that the projection cannot participate in historical queries from any point before the projection was refreshed. Error Count The number of times a refresh failed for the projection. Duration (sec) The length of time that the projection refresh ran in seconds. Privileges REFRESH() works only if invoked on tables owned by the calling user. Notes l Unlike START_REFRESH(), which runs in the background, REFRESH() runs in the foreground of the caller's session. l The REFRESH() function refreshes only the projections in the specified table. l If you run REFRESH() without arguments, it refreshes all non up-to-date projections. If the function returns a header string with no results, then no projections needed refreshing. Examples The following example refreshes the projections in tables t1 and t2: => SELECT REFRESH('t1, t2'); REFRESH ---------------------------------------------------------------------------------------Refresh completed with the following outcomes: Projection Name: [Anchor Table] [Status] [Refresh Method] [Error Count] [Duration (sec)] ---------------------------------------------------------------------------------------"public"."t1_p": [t1] [refreshed] [scratch] [0] [0]"public"."t2_p": [t2] [refreshed] [scr atch] [0] [0] This next example shows that only the projection on table t was refreshed: => SELECT REFRESH('allow, public.deny, t'); HP Vertica Analytic Database (7.0.x) Page 850 of 1539 SQL Reference Manual SQL Functions REFRESH ---------------------------------------------------------------------------------------Refresh completed with the following outcomes: Projection Name: [Anchor Table] [Status] [Refresh Method] [Error Count] [Duration (sec)] ---------------------------------------------------------------------------------------"n/a"."n/a": [n/a] [failed: insufficient permissions on table "allow"] [] [1] [0] "n/a"."n/a": [n/a] [failed: insufficient permissions on table "public.deny"] [] [1] [0] "public"."t_p1": [t] [refreshed] [scratch] [0] [0] See Also l CLEAR_PROJECTION_REFRESHES l PROJECTION_REFRESHES l START_REFRESH l Clearing PROJECTION_REFRESHES History START_REFRESH Transfers data to projections that are not able to participate in query execution due to missing or out-of-date data. Syntax START_REFRESH() Notes l When a design is deployed through the Database Designer, it is automatically refreshed. See Deploying a Design in the Administrator's Guide. l All nodes must be up in order to start a refresh. l START_REFRESH() has no effect if a refresh is already running. l A refresh is run asynchronously. l Shutting down the database ends the refresh. l To view the progress of the refresh, see the PROJECTION_REFRESHES and PROJECTIONS system tables. l If a projection is updated from scratch, the data stored in the projection represents the table columns as of the epoch in which the refresh commits. As a result, the query optimizer might not HP Vertica Analytic Database (7.0.x) Page 851 of 1539 SQL Reference Manual SQL Functions choose the new projection for AT EPOCH queries that request historical data at epochs older than the refresh epoch of the projection. Projections refreshed from buddies retain history and can be used to answer historical queries. Privileges None Example The following command starts the refresh operation: => SELECT START_REFRESH(); start_refresh ---------------------------------------Starting refresh background process. 
See Also l CLEAR_PROJECTION_REFRESHES l MARK_DESIGN_KSAFE l PROJECTION_REFRESHES l PROJECTIONS l Clearing PROJECTION_REFRESHES History HP Vertica Analytic Database (7.0.x) Page 852 of 1539 SQL Reference Manual SQL Functions Purge Functions This section contains purge functions specific to HP Vertica. PURGE Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained. Purges all projections in the physical schema. PURGE does not delete temporary tables. Syntax PURGE() Privileges l Table owner l USAGE privilege on schema Notes l PURGE() was formerly named PURGE_ALL_PROJECTIONS. HP Vertica supports both function calls. Caution: PURGE could temporarily take up significant disk space while the data is being purged. See Also l MERGE_PARTITIONS l PARTITION_TABLE l PURGE_PROJECTION l PURGE_TABLE l STORAGE_CONTAINERS l Purging Deleted Data HP Vertica Analytic Database (7.0.x) Page 853 of 1539 SQL Reference Manual SQL Functions PURGE_PARTITION Purges a table partition of deleted rows. Similar to PURGE() and PURGE_PROJECTION(), this function removes deleted data from physical storage so you can reuse the disk space. PURGE_PARTITION() removes data from the AHM epoch and earlier only. Syntax PURGE_PARTITION ( '[[db_name.]schema_name.]table_name', partition_key ) Parameters [[db_name.]schema_name.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name The name of the partitioned table partition_key The key of the partition to be purged of deleted rows Privileges l Table owner l USAGE privilege on schema Example The following example lists the count of deleted rows for each partition in a table, then calls PURGE_ PARTITION() to purge the deleted rows from the data. => SELECT partition_key,table_schema,projection_name,sum(deleted_row_count) AS deleted_row_count FROM partitions GROUP BY partition_key,table_schema,projection_name ORDER BY partition_key; partition_key | table_schema | projection_name | deleted_row_count ---------------+--------------+-----------------+------------------0 | public | t_super | 2 1 | public | t_super | 2 HP Vertica Analytic Database (7.0.x) Page 854 of 1539 SQL Reference Manual SQL Functions 2 | public | t_super | 2 3 | public | t_super | 2 4 | public | t_super | 2 5 | public | t_super | 2 6 | public | t_super | 2 7 | public | t_super | 2 8 | public | t_super | 2 9 | public | t_super | 1 (10 rows) => SELECT PURGE_PARTITION('t',5); -- Purge partition with key 5. 
purge_partition -----------------------------------------------------------------------Task: merge partitions (Table: public.t) (Projection: public.t_super) (1 row) => SELECT partition_key,table_schema,projection_name,sum(deleted_row_count) AS deleted_row_count FROM partitions GROUP BY partition_key,table_schema,projection_name ORDER BY partition_key; partition_key | table_schema | projection_name | deleted_row_count ---------------+--------------+-----------------+------------------0 | public | t_super | 2 1 | public | t_super | 2 2 | public | t_super | 2 3 | public | t_super | 2 4 | public | t_super | 2 5 | public | t_super | 0 6 | public | t_super | 2 7 | public | t_super | 2 8 | public | t_super | 2 9 | public | t_super | 1 (10 rows) See Also l MERGE_PARTITIONS l PURGE l PURGE_PROJECTION l PURGE_TABLE l STORAGE_CONTAINERS PURGE_PROJECTION Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained. HP Vertica Analytic Database (7.0.x) Page 855 of 1539 SQL Reference Manual SQL Functions Purges the specified projection. Caution: PURGE_PROJECTION could temporarily take up significant disk space while purging the data. Syntax PURGE_PROJECTION ( '[[db-name.]schema.]projection_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection_name Identifies the projection name. When using more than one schema, specify the schema that contains the projection, as noted above. Privileges l Table owner l USAGE privilege on schema Notes See PURGE for notes about the outcome of purge operations. See Also l PURGE_TABLE l STORAGE_CONTAINERS l Purging Deleted Data PURGE_TABLE Note: This function was formerly named PURGE_TABLE_PROJECTIONS(). HP Vertica still HP Vertica Analytic Database (7.0.x) Page 856 of 1539 SQL Reference Manual SQL Functions supports the former function name. Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained. Purges all projections of the specified table. You cannot use this function to purge temporary tables. Syntax PURGE_TABLE ( '[[db-name.]schema.]table_name' ) Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name Specifies the table to purge. Privileges l Table owner l USAGE privilege on schema Caution: PURGE_TABLE could temporarily take up significant disk space while the data is being purged. 
Example

The following example purges all projections for the store_sales_fact table in the store schema of the VMart example database:

=> SELECT PURGE_TABLE('store.store_sales_fact');

See Also
l PURGE
l PURGE_PROJECTION
l STORAGE_CONTAINERS
l Purging Deleted Data

Session Management Functions

This section contains session management functions specific to HP Vertica. See also the SQL system table V_MONITOR.SESSIONS.

CANCEL_REFRESH

Cancels refresh-related internal operations initiated by START_REFRESH().

Syntax

CANCEL_REFRESH()

Privileges

None

Notes
l Refresh tasks run in a background thread in an internal session, so you cannot use INTERRUPT_STATEMENT to cancel those statements. Instead, use CANCEL_REFRESH to cancel statements that are run by refresh-related internal sessions.
l Run CANCEL_REFRESH() on the same node on which START_REFRESH() was initiated.
l CANCEL_REFRESH() cancels the refresh operation running on a node, waits for the cancellation to complete, and returns SUCCESS.
l Only one set of refresh operations runs on a node at any time.

Example

Cancel a refresh operation executing in the background.

=> SELECT START_REFRESH();
START_REFRESH
----------------------------------------
Starting refresh background process.
(1 row)
=> SELECT CANCEL_REFRESH();
CANCEL_REFRESH
----------------------------------------
Stopping background refresh process.
(1 row)

See Also
l INTERRUPT_STATEMENT
l SESSIONS
l START_REFRESH
l PROJECTION_REFRESHES

CLOSE_ALL_SESSIONS

Closes all external sessions except the one issuing the CLOSE_ALL_SESSIONS() function.

Syntax

CLOSE_ALL_SESSIONS()

Privileges

None; however, a non-superuser can only close his or her own session.

Notes

Closing of the sessions is processed asynchronously. It might take some time for the sessions to be closed. Check the SESSIONS table for the status.

Database shutdown is prevented if new sessions connect after the CLOSE_SESSION or CLOSE_ALL_SESSIONS() command is invoked (and before the database is actually shut down). See Controlling Sessions below.

Message

close_all_sessions | Close all sessions command sent. Check SESSIONS for progress.
Examples Two user sessions opened, each on a different node: vmartdb=> SELECT * FROM sessions; -[ RECORD 1 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 HP Vertica Analytic Database (7.0.x) Page 860 of 1539 SQL Reference Manual SQL Functions client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0002 user_name | dbadmin client_hostname | 127.0.0.1:57174 client_pid | 30117 login_timestamp | 2011-01-03 15:33:00.842021-05 session_id | stress05-27944:0xc1a client_label | transaction_start | 2011-01-03 15:34:46.538102 transaction_id | -1 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/mart_Fact.tbl' DELIMITER '|' NULL '\\n';) statement_start | 2011-01-03 15:34:46.538862 statement_id | last_statement_duration_us | 26250 current_statement | COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n'; ssl_state | None authentication_method | Trust -[ RECORD 3 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0003 user_name | dbadmin client_hostname | 127.0.0.1:56367 client_pid | 1191 login_timestamp | 2011-01-03 15:31:44.939302-05 session_id | stress06-25663:0xbec client_label | transaction_start | 2011-01-03 15:34:51.05939 transaction_id | 54043195528458775 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:35:46.436748 statement_id | last_statement_duration_us | 1591403 current_statement | COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT; ssl_state | None authentication_method | Trust Close all sessions: vmartdb=> \xExpanded display is off. vmartdb=> SELECT CLOSE_ALL_SESSIONS(); HP Vertica Analytic Database (7.0.x) Page 861 of 1539 SQL Reference Manual SQL Functions CLOSE_ALL_SESSIONS ------------------------------------------------------------------------Close all sessions command sent. Check v_monitor.sessions for progress. (1 row) Session contents after issuing the CLOSE_ALL_SESSIONS() command: => SELECT * FROM SESSIONS;-[ ----node_name | user_name | client_hostname | client_pid | login_timestamp | session_id | client_label | transaction_start | transaction_id | transaction_description | statement_start | statement_id | last_statement_duration_us | current_statement | ssl_state | authentication_method | RECORD 1 ]--------------+----------------------------------v_vmartdb_node0001 dbadmin 127.0.0.1:52110 4554 2011-01-03 14:05:40.252625-05 stress04-4325:0x14 2011-01-03 14:05:44.325781 45035996273728326 user dbadmin (SELECT * FROM sessions;) 2011-01-03 16:19:56.720071 25 15605 SELECT * FROM SESSIONS; None Trust Controlling Sessions The database administrator must be able to disallow new incoming connections in order to shut down the database. 
On a busy system, database shutdown is prevented if new sessions connect after the CLOSE_SESSION or CLOSE_ALL_SESSIONS() command is invoked—and before the database actually shuts down. One option is for the administrator to issue the SHUTDOWN('true') command, which forces the database to shut down and disallow new connections. See SHUTDOWN in the SQL Reference Manual. Another option is to modify the MaxClientSessions parameter from its original value to 0, in order to prevent new non-dbadmin users from connecting to the database. 1. Determine the original value for the MaxClientSessions parameter by querying the V_ MONITOR.CONFIGURATIONS_PARAMETERS system table: => SELECT CURRENT_VALUE FROM CONFIGURATION_PARAMETERS WHERE parameter_name='MaxClient Sessions'; CURRENT_VALUE --------------50 (1 row) HP Vertica Analytic Database (7.0.x) Page 862 of 1539 SQL Reference Manual SQL Functions 2. Set the MaxClientSessions parameter to 0 to prevent new non-dbadmin connections: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 0); Note: The previous command allows up to five administrators to log in. 3. Issue the CLOSE_ALL_SESSIONS() command to remove existing sessions: => SELECT CLOSE_ALL_SESSIONS(); 4. Query the SESSIONS table: => SELECT * FROM SESSIONS; When the session no longer appears in the SESSIONS table, disconnect and run the Stop Database command. 5. Restart the database. 6. Restore the MaxClientSessions parameter to its original value: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 50); See Also l CLOSE_SESSION l CONFIGURATION_PARAMETERS l SHUTDOWN SESSIONS l l l CLOSE_SESSION Interrupts the specified external session, rolls back the current transaction, if any, and closes the socket. HP Vertica Analytic Database (7.0.x) Page 863 of 1539 SQL Reference Manual SQL Functions Syntax CLOSE_SESSION ( 'sessionid' ) Parameters sessionid A string that specifies the session to close. This identifier is unique within the cluster at any point in time but can be reused when the session closes. Privileges None; however, a non-superuser can only close his or her own session. Notes l Closing of the session is processed asynchronously. It could take some time for the session to be closed. Check the SESSIONS table for the status. l Database shutdown is prevented if new sessions connect after the CLOSE_SESSION() command is invoked (and before the database is actually shut down. See Controlling Sessions below. Messages The following are the messages you could encounter: l For a badly formatted sessionID close_session | Session close command sent. Check SESSIONS for progress.Error: invalid Session ID format l For an incorrect sessionID parameter Error: Invalid session ID or statement key Examples User session opened. RECORD 2 shows the user session running COPY DIRECT statement. 
=> SELECT * FROM sessions; -[ RECORD 1 ]--------------+----------------------------------------------- HP Vertica Analytic Database (7.0.x) Page 864 of 1539 SQL Reference Manual SQL Functions node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (SELECT * FROM sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+----------------------------------------------node_name | v_vmartdb_node0002 user_name | dbadmin client_hostname | 127.0.0.1:57174 client_pid | 30117 login_timestamp | 2011-01-03 15:33:00.842021-05 session_id | stress05-27944:0xc1a client_label | transaction_start | 2011-01-03 15:34:46.538102 transaction_id | -1 transaction_description | user dbadmin (COPY ClickStream_Fact FROM '/data/clickstream/1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:34:46.538862 statement_id | last_statement_duration_us | 26250 current_statement | COPY ClickStream_Fact FROM '/data/clickstream /1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT; ssl_state | None authentication_method | Trust Close user session stress05-27944:0xc1a => \xExpanded display is off. => SELECT CLOSE_SESSION('stress05-27944:0xc1a'); CLOSE_SESSION -------------------------------------------------------------------Session close command sent. Check v_monitor.sessions for progress. (1 row) Query the sessions table again for current status, and you can see that the second session has been closed: => SELECT * FROM SESSIONS; -[ RECORD 1 ]--------------+-------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 HP Vertica Analytic Database (7.0.x) Page 865 of 1539 SQL Reference Manual SQL Functions client_pid login_timestamp session_id client_label transaction_start transaction_id transaction_description statement_start statement_id last_statement_duration_us current_statement ssl_state authentication_method | | | | | | | | | | | | | 4554 2011-01-03 14:05:40.252625-05 stress04-4325:0x14 2011-01-03 14:05:44.325781 45035996273728326 user dbadmin (select * from SESSIONS;) 2011-01-03 16:12:07.841298 20 2099 SELECT * FROM SESSIONS; None Trust Controlling Sessions The database administrator must be able to disallow new incoming connections in order to shut down the database. On a busy system, database shutdown is prevented if new sessions connect after the CLOSE_SESSION or CLOSE_ALL_SESSIONS() command is invoked—and before the database actually shuts down. One option is for the administrator to issue the SHUTDOWN('true') command, which forces the database to shut down and disallow new connections. See SHUTDOWN in the SQL Reference Manual. Another option is to modify the MaxClientSessions parameter from its original value to 0, in order to prevent new non-dbadmin users from connecting to the database. 1. Determine the original value for the MaxClientSessions parameter by querying the V_ MONITOR.CONFIGURATIONS_PARAMETERS system table: => SELECT CURRENT_VALUE FROM CONFIGURATION_PARAMETERS WHERE parameter_name='MaxClient Sessions'; CURRENT_VALUE --------------50 (1 row) 2. 
Set the MaxClientSessions parameter to 0 to prevent new non-dbadmin connections: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 0); Note: The previous command allows up to five administrators to log in. 3. Issue the CLOSE_ALL_SESSIONS() command to remove existing sessions: HP Vertica Analytic Database (7.0.x) Page 866 of 1539 SQL Reference Manual SQL Functions => SELECT CLOSE_ALL_SESSIONS(); 4. Query the SESSIONS table: => SELECT * FROM SESSIONS; When the session no longer appears in the SESSIONS table, disconnect and run the Stop Database command. 5. Restart the database. 6. Restore the MaxClientSessions parameter to its original value: => SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 50); See Also l CLOSE_ALL_SESSIONS l CONFIGURATION_PARAMETERS l SESSIONS SHUTDOWN l l l GET_NUM_ACCEPTED_ROWS Returns the number of rows loaded into the database for the last completed load for the current session. GET_NUM_ACCEPTED_ROWS is a meta-function. Do not use it as a value in an INSERT query. The number of accepted rows is not available for a load that is currently in process. Check the LOAD_STREAMS system table for its status. Also, this meta-function supports only loads from STDIN or a single file on the initiator. You cannot use GET_NUM_ACCEPTED_ROWS for multi-node loads. Syntax GET_NUM_ACCEPTED_ROWS(); HP Vertica Analytic Database (7.0.x) Page 867 of 1539 SQL Reference Manual SQL Functions Privileges None Note: The data regarding accepted rows from the last load during the current session does not persist, and is lost when you initiate a new load. See Also l GET_NUM_REJECTED_ROWS GET_NUM_REJECTED_ROWS Returns the number of rows that were rejected during the last completed load for the current session. GET_NUM_REJECTED_ROWS is a meta-function. Do not use it as a value in an INSERT query. Rejected row information is unavailable for a load that is currently running. The number of rejected rows is not available for a load that is currently in process. Check the LOAD_STREAMS system table for its status. Also, this meta-function supports only loads from STDIN or a single file on the initiator. You cannot use GET_NUM_REJECTED_ROWS for multi-node loads. Syntax GET_NUM_REJECTED_ROWS(); Privileges None Note: The data regarding rejected rows from the last load during the current session does not persist, and is dropped when you initiate a new load. See Also l GET_NUM_ACCEPTED_ROWS INTERRUPT_STATEMENT Interrupts the specified statement (within an external session), rolls back the current transaction, and writes a success or failure message to the log file. HP Vertica Analytic Database (7.0.x) Page 868 of 1539 SQL Reference Manual SQL Functions Syntax INTERRUPT_STATEMENT( 'session_id ', statement_id ) Parameters session_id Specifies the session to interrupt. This identifier is unique within the cluster at any point in time. statement_id Specifies the statement to interrupt Privileges Must be a superuser. Notes l Only statements run by external sessions can be interrupted. l Sessions can be interrupted during statement execution. l If the statement_id is valid, the statement is interruptible. The command is successfully sent and returns a success message. Otherwise the system returns an error. Messages The following list describes messages you might encounter: Message Meaning Statement interrupt sent. Check SESSIONS for progress. This message indicates success. Session could not be successfully interrupted: session not found. 
The session ID argument to the interrupt command does not match a running session. HP Vertica Analytic Database (7.0.x) Page 869 of 1539 SQL Reference Manual SQL Functions Message Meaning Session could not be successfully interrupted: statement not found. The statement ID does not match (or no longer matches) the ID of a running statement (if any). No interruptible statement running The statement is DDL or otherwise non-interruptible. Internal (system) sessions cannot be interrupted. The session is internal, and only statements run by external sessions can be interrupted. Examples Two user sessions are open. RECORD 1 shows user session running SELECT FROM SESSION, and RECORD 2 shows user session running COPY DIRECT: => SELECT * FROM SESSIONS; -[ RECORD 1 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0003 user_name | dbadmin client_hostname | 127.0.0.1:56367 client_pid | 1191 login_timestamp | 2011-01-03 15:31:44.939302-05 session_id | stress06-25663:0xbec client_label | transaction_start | 2011-01-03 15:34:51.05939 transaction_id | 54043195528458775 HP Vertica Analytic Database (7.0.x) Page 870 of 1539 SQL Reference Manual SQL Functions transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:35:46.436748 statement_id | 5 last_statement_duration_us | 1591403 current_statement | COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT; ssl_state | None authentication_method | Trust Interrupt the COPY DIRECT statement running in stress06-25663:0xbec: => \xExpanded display is off. => SELECT INTERRUPT_STATEMENT('stress06-25663:0x1537', 5); interrupt_statement -----------------------------------------------------------------Statement interrupt sent. Check v_monitor.sessions for progress. (1 row) Verify that the interrupted statement is no longer active by looking at the current_statement column in the SESSIONS system table. 
This column becomes blank when the statement has been interrupted: => SELECT * FROM SESSIONS; -[ RECORD 1 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0001 user_name | dbadmin client_hostname | 127.0.0.1:52110 client_pid | 4554 login_timestamp | 2011-01-03 14:05:40.252625-05 session_id | stress04-4325:0x14 client_label | transaction_start | 2011-01-03 14:05:44.325781 transaction_id | 45035996273728326 transaction_description | user dbadmin (select * from sessions;) statement_start | 2011-01-03 15:36:13.896288 statement_id | 10 last_statement_duration_us | 14978 current_statement | select * from sessions; ssl_state | None authentication_method | Trust -[ RECORD 2 ]--------------+---------------------------------------------------node_name | v_vmartdb_node0003 user_name | dbadmin client_hostname | 127.0.0.1:56367 client_pid | 1191 login_timestamp | 2011-01-03 15:31:44.939302-05 session_id | stress06-25663:0xbec client_label | transaction_start | 2011-01-03 15:34:51.05939 transaction_id | 54043195528458775 transaction_description | user dbadmin (COPY Mart_Fact FROM '/data/Mart_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;) statement_start | 2011-01-03 15:35:46.436748 statement_id | 5 HP Vertica Analytic Database (7.0.x) Page 871 of 1539 SQL Reference Manual SQL Functions last_statement_duration_us current_statement ssl_state authentication_method | 1591403 | | None | Trust See Also l SESSIONS l Managing Sessions l Configuration Parameters RELEASE_ALL_JVM_MEMORY Forces all sessions to release the memory consumed by their Java Virtual Machines (JVM). Syntax RELEASE_ALL_JVM_MEMORY(); Permissions Must be a superuser. Example The following example demonstrates viewing the JVM memory use in all open sessions, then calling RELEASE_ALL_JVM_MEMORY() to release the memory: => select user_name,jvm_memory_kb FROM V_MONITOR.SESSIONS; user_name | jvm_memory_kb -----------+--------------dbadmin | 79705 (1 row) => SELECT RELEASE_ALL_JVM_MEMORY(); RELEASE_ALL_JVM_MEMORY ----------------------------------------------------------------------------Close all JVM sessions command sent. Check v_monitor.sessions for progress. (1 row) => SELECT user_name,jvm_memory_kb FROM V_MONITOR.SESSIONS; user_name | jvm_memory_kb -----------+--------------dbadmin | 0 (1 row) HP Vertica Analytic Database (7.0.x) Page 872 of 1539 SQL Reference Manual SQL Functions See Also l RELEASE_JVM_MEMORY RELEASE_JVM_MEMORY Terminates a Java Virtual Machine (JVM), making available the memory the JVM was using. Syntax RELEASE_JVM_MEMORY(); Privileges None. Examples User session opened. RECORD 2 shows the user session running COPY DIRECT statement. => SELECT RELEASE_JVM_MEMORY(); release_jvm_memory ----------------------------------------Java process killed and memory released (1 row) See Also l RELEASE_ALL_JVM_MEMORY HP Vertica Analytic Database (7.0.x) Page 873 of 1539 SQL Reference Manual SQL Functions Statistic Management Functions This section contains statistic management functions specific to HP Vertica. ANALYZE_HISTOGRAM Collects and aggregates data samples and storage information from all nodes that store projections associated with the specified table or column. If the function returns successfully (0), HP Vertica writes the returned statistics to the catalog. The query optimizer uses this collected data to recommend the best possible plan to execute a query. 
Without analyzing table statistics, the query optimizer would assume uniform distribution of data values and equal storage usage for all projections. ANALYZE_HISTOGRAM is a DDL operation that auto-commits the current transaction, if any. The ANALYZE_HISTOGRAM function reads a variable amount of disk contents to aggregate sample data for statistical analysis. Use the function's percent float parameter to specify the total disk space from which HP Vertica collects sample data. The ANALYZE_STATISTICS function returns similar data, but uses a fixed disk space amount (10 percent). Analyzing more than 10 percent disk space takes proportionally longer to process, but produces a higher level of sampling accuracy. ANALYZE_HISTOGRAM is supported on local temporary tables, but not on global temporary tables. Syntax ANALYZE_HISTOGRAM ('') ... | ( '[ [ db-name.]schema.]table [.column-name ]' [, percent ] ) Return Value 0 - For success. If an error occurs, refer to vertica.log for details. Parameters '' Empty string. Collects statistics for all tables. HP Vertica Analytic Database (7.0.x) Page 874 of 1539 SQL Reference Manual SQL Functions [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table Specifies the name of the table and collects statistics for all projections of that table. If you are using more than one schema, specify the schema that contains the projection, as noted in the [[db-name.]schema.] entry. [.column-name] [Optional] Specifies the name of a single column, typically a predicate column. Using this option with a table specification lets you collect statistics for only that column. Note: If you alter a table to add or drop a column, or add a new column to a table and populate its contents with either default or other values, HP Vertica recommends calling this function on the new table column to get the most current statistics. percent [Optional] Specifies what percentage of data to read from disk (not the amount of data to analyze). Specify a float from 1 – 100, such as 33.3. By default, the function reads 10% of the table data from disk. For more information, see Collecting Statistics in the Administrator's Guide. Privileges l Any INSERT/UPDATE/DELETE privilege on table l USAGE privilege on schema that contains the table Use the HP Vertica statistics functions as follows: HP Vertica Analytic Database (7.0.x) Page 875 of 1539 SQL Reference Manual SQL Functions Use this function... ANALYZE_ STATISTICS To obtain... A fixed-size statistical data sampling (10 percent per disk). This function returns results quickly, but is less accurate than using ANALYZE_HISTOGRAM to get a larger sampling of disk data. ANALYZE_ A specified percentage of disk data sampling (from 1–100). 
If you analyze more HISTOGRAM than 10 percent data per disk, this function is more accurate than ANALYZE_ STATISTICS, but requires proportionately longer to return statistics. Analyzing Results To retrieve hints about under-performing queries and the associated root causes, use the ANALYZE_WORKLOAD function. This function runs the Workload Analyzer and returns tuning recommendations, such as "run analyze_statistics on schema.table.column". You or your database administrator should act upon the tuning recommendations. You can also find database tuning recommendations on the Management Console. Canceling ANALYZE_HISTOGRAM You can cancel this function mid-analysis by issuing CTRL-C in a vsql shell or by invoking the INTERRUPT_STATEMENT() function. Notes By default, HP Vertica analyzes more than one column (subject to resource limits) in a single-query execution plan to: l Reduce plan execution latency l Help speed up analysis of relatively small tables that have a large number of columns Examples In this example, the ANALYZE_STATISTICS() function reads 10 percent of the disk data. This is the static default value for this function. The function returns 0 for success: => SELECT ANALYZE_STATISTICS('shipping_dimension.shipping_key'); ANALYZE_STATISTICS -------------------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 876 of 1539 SQL Reference Manual SQL Functions This example uses ANALYZE_HISTOGRAM () without specifying a percentage value. Since this function has a default value of 10 percent, it returns the identical data as the ANALYZE_ STATISTICS() function, and returns 0 for success: => SELECT ANALYZE_HISTOGRAM('shipping_dimension.shipping_key'); ANALYZE_HISTOGRAM ------------------0 (1 row) This example uses ANALYZE_HISTOGRAM (), specifying its percent parameter as 100, indicating it will read the entire disk to gather data. After the function performs a full column scan, it returns 0 for success: => SELECT ANALYZE_HISTOGRAM('shipping_dimension.shipping_key', 100); ANALYZE_HISTOGRAM ------------------0 (1 row) In this command, only 0.1% (1/1000) of the disk is read: => SELECT ANALYZE_HISTOGRAM('shipping_dimension.shipping_key', 0.1); ANALYZE_HISTOGRAM ------------------0 (1 row) See Also l ANALYZE_STATISTICS l ANALYZE_WORKLOAD l DROP_STATISTICS l EXPORT_STATISTICS l IMPORT_STATISTICS INTERRUPT_STATEMENT l l ANALYZE_STATISTICS Collects and aggregates data samples and storage information from all nodes that store projections associated with the specified table or column. HP Vertica Analytic Database (7.0.x) Page 877 of 1539 SQL Reference Manual SQL Functions If the function returns successfully (0), HP Vertica writes the returned statistics to the catalog. The query optimizer uses this collected data to recommend the best possible plan to execute a query. Without analyzing table statistics, the query optimizer would assume uniform distribution of data values and equal storage usage for all projections. ANALYZE_STATISTICS is a DDL operation that auto-commits the current transaction, if any. The ANALYZE_STATISTICS function reads a fixed, 10 percent of disk contents to aggregate sample data for statistical analysis. To obtain a larger (or smaller) data sampling, use the ANALYZE_ HISTOGRAM function, which lets you specify the percent of disk to read. Analyzing more that 10 percent disk space takes proportionally longer to process, but results in a higher level of sampling accuracy. ANALYZE_STATISTICS is supported on local temporary tables, but not on global temporary tables. 
Syntax ANALYZE_STATISTICS [ ('') ... | ( '[ [ db-name.]schema.]table [.column-name ]' ) ] Return value 0 - For success. If an error occurs, refer to vertica.log for details. Parameters '' Empty string. Collects statistics for all tables. [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). HP Vertica Analytic Database (7.0.x) Page 878 of 1539 SQL Reference Manual SQL Functions table Specifies the name of the table and collects statistics for all projections of that table. Note: If you are using more than one schema, specify the schema that contains the projection, as noted as noted in the [[db-name.] schema.] entry. [.column-name] [Optional] Specifies the name of a single column, typically a predicate column. Using this option with a table specification lets you collect statistics for only that column. Note: If you alter a table to add or drop a column, or add a new column to a table and populate its contents with either default or other values, HP Vertica recommends calling this function on the new table column to get the most current statistics. Privileges l Any INSERT/UPDATE/DELETE privilege on table l USAGE privilege on schema that contains the table Use the HP Vertica statistics functions as follows: Use this function... ANALYZE_ STATISTICS To obtain... A fixed-size statistical data sampling (10 percent per disk). This function returns results quickly, but is less accurate than using ANALYZE_HISTOGRAM to get a larger sampling of disk data. ANALYZE_ A specified percentage of disk data sampling (from 1–100). If you analyze more HISTOGRAM than 10 percent data per disk, this function is more accurate than ANALYZE_ STATISTICS, but requires proportionately longer to return statistics. Analyzing results To retrieve hints about under-performing queries and the associated root causes, use the ANALYZE_WORKLOAD function. This function runs the Workload Analyzer and returns tuning recommendations, such as "run analyze_statistics on schema.table.column". You or your database administrator should act upon the tuning recommendations. You can also find database tuning recommendations on the Management Console. HP Vertica Analytic Database (7.0.x) Page 879 of 1539 SQL Reference Manual SQL Functions Canceling this function You can cancel statistics analysis by issuing CTRL+C in a vsql shell or by invoking the INTERRUPT_STATEMENT() function. Notes l Always run ANALYZE_STATISTICS on a table or column rather than a projection. l By default, HP Vertica analyzes more than one column (subject to resource limits) in a singlequery execution plan to: l n Reduce plan execution latency n Help speed up analysis of relatively small tables that have a large number of columns Pre-join projection statistics are updated on any pre-joined tables. 
Examples

Computes statistics on all projections in the VMart database and returns 0 (success):

=> SELECT ANALYZE_STATISTICS ('');
 analyze_statistics
--------------------
                  0
(1 row)

Computes statistics on a single table (shipping_dimension) and returns 0 (success):

=> SELECT ANALYZE_STATISTICS ('shipping_dimension');
 analyze_statistics
--------------------
                  0
(1 row)

Computes statistics on a single column (shipping_key) across all projections for the shipping_dimension table and returns 0 (success):

=> SELECT ANALYZE_STATISTICS('shipping_dimension.shipping_key');
 analyze_statistics
--------------------
                  0
(1 row)

For use cases, see Collecting Statistics in the Administrator's Guide.

See Also
l ANALYZE_HISTOGRAM
l ANALYZE_WORKLOAD
l DROP_STATISTICS
l EXPORT_STATISTICS
l IMPORT_STATISTICS
l INTERRUPT_STATEMENT

DROP_STATISTICS

Removes statistics for the specified table and lets you optionally specify the category of statistics to drop.

Syntax

DROP_STATISTICS { ('') | ('[[db-name.]schema-name.]table' [, {'BASE' | 'HISTOGRAMS' | 'ALL'} ])};

Return Value

0 - If successful, DROP_STATISTICS always returns 0. If the command fails, DROP_STATISTICS displays an error message. See vertica.log for message details.

Parameters

'' Empty string. Drops statistics for all projections.

[[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema).

table Drops statistics for all projections within the specified table. When using more than one schema, specify the schema that contains the table with the projections you want to delete, as noted in the syntax.

CATEGORY Specifies the category of statistics to drop for the named [[db-name.]schema-name.]table:
l 'BASE' (default) drops histograms and row counts (min/max column values and histograms).
l 'HISTOGRAMS' drops only the histograms. Row count statistics remain.
l 'ALL' drops all statistics.

Privileges
l INSERT/UPDATE/DELETE privilege on table
l USAGE privilege on schema that contains the table

Notes

Once dropped, statistics can be time-consuming to regenerate.
Examples The following command analyzes all statistics on the VMart schema database: => SELECT ANALYZE_STATISTICS(''); ANALYZE_STATISTICS -------------------0 (1 row) This command drops base statistics for table store_sales_fact in the store schema: => SELECT DROP_STATISTICS('store.store_sales_fact', 'BASE'); DROP_STATISTICS ----------------0 (1 row) Note that this command works the same as the previous command: => SELECT DROP_STATISTICS('store.store_sales_fact'); DROP_STATISTICS ----------------0 (1 row) This command also drops statistics for all table projections: HP Vertica Analytic Database (7.0.x) Page 882 of 1539 SQL Reference Manual SQL Functions => SELECT DROP_STATISTICS (''); DROP_STATISTICS ----------------0 (1 row) For use cases, see Collecting Statistics in the Administrator's Guide See Also l ANALYZE_STATISTICS l EXPORT_STATISTICS l IMPORT_STATISTICS EXPORT_STATISTICS Generates an XML file that contains statistics for the database. You can optionally export statistics on a single database object (table, projection, or table column). Before you export statistics for the database, run ANALYZE_STATISTICS() to automatically collect the most up to date statistics information. Note: Use the second argument only if statistics in the database do not match the statistics of data. Syntax EXPORT_STATISTICS [ ( 'destination' ) ... | ( '[ [ db-name.]schema.]table [.column-name ]' ) ] Parameters destination Specifies the path and name of the XML output file. An empty string returns the script to the screen. HP Vertica Analytic Database (7.0.x) Page 883 of 1539 SQL Reference Manual SQL Functions [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table Specifies the name of the table and exports statistics for all projections of that table. Note: If you are using more than one schema, specify the schema that contains the projection, as noted as noted in the [[db-name.] schema.] entry. [.column-name] [Optional] Specifies the name of a single column, typically a predicate column. Using this option with a table specification lets you export statistics for only that column. Privileges Must be a superuser. Examples The following command exports statistics on the VMart example database to a file: vmart=> SELECT EXPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/vmart_stats.xml'); EXPORT_STATISTICS ----------------------------------Statistics exported successfully (1 row) The next statement exports statistics on a single column (price) from a table called food: => SELECT EXPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/price.xml', 'food.pric e'); HP Vertica Analytic Database (7.0.x) Page 884 of 1539 SQL Reference Manual SQL Functions See Also l ANALYZE_STATISTICS l DROP_STATISTICS l IMPORT_STATISTICS l Collecting Database Statistics IMPORT_STATISTICS Imports statistics from the XML file generated by the EXPORT_STATISTICS command. 
Syntax IMPORT_STATISTICS ( 'destination' ) Parameters destination Specifies the path and name of the XML input file (which is the output of EXPORT_ STATISTICS function). Privileges Must be a superuser. Notes l Imported statistics override existing statistics for all projections on the specified table. l For use cases, see Collecting Statistics in the Administrator's Guide Example Import the statistics for the VMart database that EXPORT_STATISTICS saved. -> SELECT IMPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/vmart_stats.xml'); IMPORT_STATISTICS ---------------------------------------------------------------------------Importing statistics for projection date_dimension_super column date_key failure (stats d id not contain row counts) HP Vertica Analytic Database (7.0.x) Page 885 of 1539 SQL Reference Manual SQL Functions Importing statistics for projection date_dimension_super column date failure (stats did n ot contain row counts) Importing statistics for projection date_dimension_super column full_date_description fai lure (stats did not contain row counts) ... (1 row) VMart=> See Also l ANALYZE_STATISTICS l DROP_STATISTICS l EXPORT_STATISTICS HP Vertica Analytic Database (7.0.x) Page 886 of 1539 SQL Reference Manual SQL Functions Storage Management Functions This section contains storage management functions specific to HP Vertica. ADD_LOCATION Adds a storage location to the cluster. Use this function to add a new location, optionally with a location label. You can also add a location specifically for user access, and then grant one or more users access to the location. Syntax ADD_LOCATION ( 'path' [, 'node' , 'usage', 'location_label' ] ) Parameters path [Required] Specifies where the storage location is mounted. Path must be an empty directory with write permissions for user, group, or all. node [Optional] Indicates the cluster node on which a storage location resides. If you omit this parameter, the function adds the location to only the initiator node. Specifying the node parameter as an empty string ('') adds a storage location to all cluster nodes in a single transaction. Note: If you specify a node, you must also add a usage parameter. usage [Optional] Specifies what the storage location will be used for: l DATA: Stores only data files. Use this option for labeled storage locations. l TEMP: Stores only temporary files, created during loads or queries. l DATA,TEMP: Stores both types of files in the location. l USER: Allows non-dbadmin users access to the storage location for data files (not temp files), once they are granted privileges. DO NOT create a storage location for later use in a storage policy. Storage locations with policies must be for DATA usage. Also, note that this keyword is orthogonal to DATA and TEMP, and does not specify a particular usage, other than being accessible to non-dbadmin users with assigned privileges. You cannot alter a storage location to or from USER usage. NOTE: You can use this parameter only in conjunction with the node option. If you omit the usage parameter, the default is DATA,TEMP. HP Vertica Analytic Database (7.0.x) Page 887 of 1539 SQL Reference Manual SQL Functions location_label [Optional] Specifies a location label as a string, for example, SSD. Labeling a storage location lets you use the location label to create storage policies and as part of a multi-tenanted storage scheme. Privileges Must be a superuser. Storage Location Subdirectories You cannot create a storage location in a subdirectory of an existing location. 
For example, if you create a storage location at one location, you cannot add a second storage location in a subdirectory of the first: dbt=> select add_location ('/myvertica/Test/KMM','','DATA','SSD'); add_location -----------------------------------------/myvertica/Test/KMM added. (1 row) dbt=> select add_location ('/myvertica/Test/KMM/SSD','','DATA','SSD'); ERROR 5615: Location [/myvertica/Test/KMM/SSD] conflicts with existing location [/myvert ica/Test/KMM] on node v_node0001 ERROR 5615: Location [/myvertica/Test/KMM/SSD] conflicts with existing location [/myvert ica/Test/KMM] on node v_node0002 ERROR 5615: Location [/myvertica/Test/KMM/SSD] conflicts with existing location [/myvert ica/Test/KMM] on node v_node0003 Example This example adds a location that stores data and temporary files on the initiator node: => SELECT ADD_LOCATION('/secondverticaStorageLocation/'); This example adds a location to store data on v_vmartdb_node0004: => SELECT ADD_LOCATION('/secondverticaStorageLocation/' , 'v_vmartdb_node0004' , 'DATA'); This example adds a new DATA storage location with a label, SSD. The label identifies the location when you create storage policies. Specifying the node parameter as an empty string adds the storage location to all cluster nodes in a single transaction: VMART=> select add_location ('home/dbadmin/SSD/schemas', '', 'DATA', 'SSD'); add_location --------------------------------home/dbadmin/SSD/schemas added. (1 row) HP Vertica Analytic Database (7.0.x) Page 888 of 1539 SQL Reference Manual SQL Functions See Also l l ALTER_LOCATION_USE l DROP_LOCATION l RESTORE_LOCATION l RETIRE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) ALTER_LOCATION_USE Alters the type of files that can be stored at the specified storage location. Syntax ALTER_LOCATION_USE ( 'path' , [ 'node' ] , 'usage' ) Parameters path Specifies where the storage location is mounted. node [Optional] The HP Vertica node with the storage location. Specifying the node parameter as an empty string ('') alters the location across all cluster nodes in a single transaction. If you omit this parameter, node defaults to the initiator. usage Is one of the following: l DATA: The storage location stores only data files. This is the supported use for both a USER storage location, and a labeled storage location. l TEMP: The location stores only temporary files that are created during loads or queries. l DATA,TEMP: The location can store both types of files. Privileges Must be a superuser. HP Vertica Analytic Database (7.0.x) Page 889 of 1539 SQL Reference Manual SQL Functions USER Storage Location Restrictions You cannot change a storage location from a USER usage type if you created the location that way, or to a USER type if you did not. You can change a USER storage location to specify DATA (storing TEMP files is not supported). However, doing so does not affect the primary objective of a USER storage location, to be accessible by non-dbadmin users with assigned privileges. Monitoring Storage Locations Disk storage information that the database uses on each node is available in the V_ MONITOR.DISK_STORAGE system table. Example The following example alters the storage location across all cluster nodes to store only data: => SELECT ALTER_LOCATION_USE ('/thirdVerticaStorageLocation/' , '' , 'DATA'); See Also l l ADD_LOCATION l DROP_LOCATION l RESTORE_LOCATION l RETIRE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) ALTER_LOCATION_LABEL Alters the location label. 
Use this function to add, change, or remove a location label. You change a location label only if it is not currently in use as part of a storage policy. You can use this function to remove a location label. However, you cannot remove a location label if the name being removed is used in a storage policy, and the location from which you are removing the label is the last available storage for its associated objects. Note: If you label an existing storage location that already contains data, and then include the labeled location in one or more storage policies, existing data could be moved. If the ATM HP Vertica Analytic Database (7.0.x) Page 890 of 1539 SQL Reference Manual SQL Functions determines data stored on a labeled location does not comply with a storage policy, the ATM moves the data elsewhere. Syntax ALTER_LOCATION_LABEL ( 'path' , 'node' , 'location_label' ) Parameters path Specifies the path of the storage location. node The HP Vertica node for the storage location. If you enter node as an empty string (''), the function performs a cluster-wide label change to all nodes. Any node that is unavailable generates an error. location_label Specifies a storage label as a string, for instance SSD. You can change an existing label assigned to a storage location, or add a new label. Specifying an empty string ('') removes an existing label. Privileges Must be a superuser. Example The following example alters (or adds) the label SSD to the storage location at the given path on all cluster nodes: VMART=> select alter_location_label('/home/dbadmin/SSD/tables','', 'SSD'); alter_location_label --------------------------------------/home/dbadmin/SSD/tables label changed. (1 row) See Also l l CLEAR_OBJECT_STORAGE_POLICY l SET_OBJECT_STORAGE_POLICY HP Vertica Analytic Database (7.0.x) Page 891 of 1539 SQL Reference Manual SQL Functions CLEAR_CACHES Clears the HP Vertica internal cache files. Syntax CLEAR_CACHES ( ) Privileges Must be a superuser. Notes If you want to run benchmark tests for your queries, in addition to clearing the internal HP Vertica cache files, clear the Linux file system cache. The kernel uses unallocated memory as a cache to hold clean disk blocks. If you are running version 2.6.16 or later of Linux and you have root access, you can clear the kernel filesystem cache as follows: 1. Make sure that all data is the cache is written to disk: # sync 2. Writing to the drop_caches file causes the kernel to drop clean caches, dentries, and inodes from memory, causing that memory to become free, as follows: n To clear the page cache: # echo 1 > /proc/sys/vm/drop_caches n To clear the dentries and inodes: # echo 2 > /proc/sys/vm/drop_caches n To clear the page cache, dentries, and inodes: # echo 3 > /proc/sys/vm/drop_caches Example The following example clears the HP Vertica internal cache files: HP Vertica Analytic Database (7.0.x) Page 892 of 1539 SQL Reference Manual SQL Functions => CLEAR_CACHES(); CLEAR_CACHES -------------Cleared (1 row) CLEAR_OBJECT_STORAGE_POLICY Removes an existing storage policy. The specified object will no longer use a default storage location. Any existing data stored currently at the labeled location in the object's storage policy is moved to default storage during the next TM moveout operation. Syntax CLEAR_OBJECT_STORAGE_POLICY ( 'object_name' , [', key_min, key_max ']) Parameters object_name Specifies the database object with a storage policy to clear. key_min, key_max Specifies the table partition key value ranges stored at the labeled location. 
These parameters are applicable only when object_name is a table.

Privileges

Must be a superuser.

Example

This example clears the storage policy for the object lineorder:

release=> select clear_object_storage_policy('lineorder');
   clear_object_storage_policy
-----------------------------------
 Default storage policy cleared.
(1 row)

See Also
l Clearing Storage Policies
l ALTER_LOCATION_LABEL
l SET_OBJECT_STORAGE_POLICY

DROP_LOCATION

Removes the specified storage location.

Syntax

DROP_LOCATION ( 'path' , 'node' )

Parameters

path Specifies where the storage location to drop is mounted.

node Is the HP Vertica node where the location is available.

Privileges

Must be a superuser.

Retiring or Dropping a Storage Location

Dropping a storage location is a permanent operation and cannot be undone. Therefore, HP recommends that you retire a storage location before dropping it. Retiring a storage location lets you verify that you do not need the storage before dropping it. Additionally, you can easily restore a retired storage location if you determine it is still in use.

Storage Locations with Temp and Data Files

Dropping storage locations is limited to storage locations that contain only temp files. If you use a storage location to store data and then alter it to store only temp files, the location can still contain data files. HP Vertica does not let you drop a storage location containing data files. You can manually merge out the data files from the storage location, wait for the ATM to merge out the data files automatically, or drop partitions. Deleting data files does not work.

Example

The following example drops a storage location on node3 that was used to store temp files:

=> SELECT DROP_LOCATION('/secondVerticaStorageLocation/' , 'node3');
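In line with the recommendation above to retire a storage location before dropping it, the following sketch shows the full retire-then-drop sequence, reusing the hypothetical path and node from the previous example. If the location turns out to still be needed, RESTORE_LOCATION can reverse the retire step:

=> SELECT RETIRE_LOCATION('/secondVerticaStorageLocation/' , 'node3');
-- Check V_MONITOR.DISK_STORAGE (see Monitoring Storage Locations) to confirm the
-- location is no longer needed, then drop it permanently:
=> SELECT DROP_LOCATION('/secondVerticaStorageLocation/' , 'node3');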
This time equates to:

IO time = time to read/write 1MB + time to seek = 1/throughput + 1/latency

Throughput is the average throughput of sequential reads and writes, in MB per second. Latency is the average latency of random reads, in seeks per second.

Note: The IO time of a faster storage location is less than that of a slower storage location.

Example

The following example measures the performance of a storage location on v_vmartdb_node0004:

=> SELECT MEASURE_LOCATION_PERFORMANCE('/secondVerticaStorageLocation/' , 'v_vmartdb_node0004');
WARNING: measure_location_performance can take a long time. Please check logs for progress
 measure_location_performance
--------------------------------------------------
 Throughput : 122 MB/sec. Latency : 140 seeks/sec

See Also

l ADD_LOCATION
l ALTER_LOCATION_USE
l RESTORE_LOCATION
l RETIRE_LOCATION
l Measuring Storage Performance

RESTORE_LOCATION

Restores a storage location that was previously retired with RETIRE_LOCATION.

Syntax

RESTORE_LOCATION ( 'path' , 'node' )

Parameters

path Specifies where the retired storage location is mounted.

node Is the HP Vertica node where the retired location is available.

Privileges

Must be a superuser.

Effects of Restoring a Previously Retired Location

After restoring a storage location, HP Vertica re-ranks all of the cluster storage locations and uses the newly restored location to process queries as determined by its rank.

Monitoring Storage Locations

Disk storage information that the database uses on each node is available in the V_MONITOR.DISK_STORAGE system table.

Example

The following example restores a retired storage location on v_vmartdb_node0004:

=> SELECT RESTORE_LOCATION ('/thirdVerticaStorageLocation/' , 'v_vmartdb_node0004');

See Also

l Altering Storage Location Use
l ADD_LOCATION
l ALTER_LOCATION_USE
l DROP_LOCATION
l RETIRE_LOCATION
l GRANT (Storage Location)
l REVOKE (Storage Location)

RETIRE_LOCATION

Makes the specified storage location inactive.

Syntax

RETIRE_LOCATION ( 'path', 'node' )

Parameters

path Specifies where the storage location to retire is mounted.

node Is the HP Vertica node where the location is available.

Privileges

Must be a superuser.

Effects of Retiring a Storage Location

When you use this function, HP Vertica checks that the location is not the only storage for data and temp files. At least one location must exist on each node to store data and temp files, though you can store both types of files in the same location or in separate locations.

Note: You cannot retire a location if it is used in a storage policy and is the last available storage for its associated objects.

When you retire a storage location:

l No new data is stored at the retired location, unless you first restore it with the RESTORE_LOCATION() function.
l If the storage location being retired contains stored data, the data is not moved, so you cannot drop the storage location. Instead, HP Vertica removes the stored data through one or more mergeouts.
l If the storage location being retired was used only for temp files, you can drop the location. See Dropping Storage Locations in the Administrator's Guide and the DROP_LOCATION() function.
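For instance, as an illustrative sketch only (the path and node name are hypothetical), retiring a temp-only location and then dropping it uses the two functions together:

=> SELECT RETIRE_LOCATION('/tempVerticaStorageLocation/' , 'v_vmartdb_node0004');
=> SELECT DROP_LOCATION('/tempVerticaStorageLocation/' , 'v_vmartdb_node0004');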
Monitoring Storage Locations Disk storage information that the database uses on each node is available in the V_ MONITOR.DISK_STORAGE system table. Example The following example retires a storage location: => SELECT RETIRE_LOCATION ('/secondVerticaStorageLocation/' , 'v_vmartdb_node0004'); See Also l Retiring Storage Locations l ADD_LOCATION l ALTER_LOCATION_USE l DROP_LOCATION l RESTORE_LOCATION l GRANT (Storage Location) l REVOKE (Storage Location) SET_LOCATION_PERFORMANCE Sets disk performance for the location specified. HP Vertica Analytic Database (7.0.x) Page 899 of 1539 SQL Reference Manual SQL Functions Syntax SET_LOCATION_PERFORMANCE ( 'path' , 'node' , 'throughput' , 'average_latency' ) Parameters path Specifies where the storage location to set is mounted. node Is the HP Vertica node where the location to be set is available. If this parameter is omitted, node defaults to the initiator. throughput Specifies the throughput for the location, which must be 1 or more. average_latency Specifies the average latency for the location. The average_latency must be 1 or more. Privileges Must be a superuser. Notes To obtain the throughput and average latency for the location, run the MEASURE_LOCATION_ PERFORMANCE() function before you attempt to set the location's performance. Example The following example sets the performance of a storage location on node2 to a throughput of 122 megabytes per second and a latency of 140 seeks per second. => SELECT SET_LOCATION_PERFORMANCE('/secondVerticaStorageLocation/','node2','122','140'); See Also l ADD_LOCATION l MEASURE_LOCATION_PERFORMANCE l Measuring Storage Performance l Setting Storage Performance HP Vertica Analytic Database (7.0.x) Page 900 of 1539 SQL Reference Manual SQL Functions SET_OBJECT_STORAGE_POLICY Creates or changes an object storage policy by associating a database object with a labeled storage location. Note: You cannot create a storage policy on a USER type storage location. Syntax SET_OBJECT_STORAGE_POLICY ( 'object_name', 'location_label' [, 'key_min, key_max'] [, 'enforc e_storage_move' ] ) Parameters object_name Identifies the database object assigned to a labeled storage location. The object_name can resolve to a database, schema, or table. location_label The label of the storage location with which object_name is being associated. key_min, key_max Applicable only when object_name is a table, key_min and key_max specify the table partition key value range to be stored at the location. enforce_storage_move= {true | false} [Optional] Applicable only when setting a storage policy for an object that has data stored at another labeled location. Specify this parameter as true to move all existing storage data to the target location within this function's transaction. Privileges Must be the object owner to set the storage policy, and have access to the storage location. New Storage Policy If an object does not have a storage policy, this function creates a new policy. The labeled location is then used as the default storage location during TM operations, such as moveout and mergeout. Existing Storage Policy If the object already has an active storage policy, calling this function changes the default storage for the object to the new labeled location. Any existing data stored on the previous storage location is marked to move to the new location during the next TM moveout operations, unless you use the enforce_storage_move option. 
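For illustration only, the following sketch associates a hypothetical partitioned table named sales with the SSD label for partition keys 2012 through 2014; it assumes the key range is passed as two separate quoted values, per the key_min and key_max parameters described above:

=> SELECT SET_OBJECT_STORAGE_POLICY('sales', 'SSD', '2012', '2014');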
HP Vertica Analytic Database (7.0.x) Page 901 of 1539 SQL Reference Manual SQL Functions Forcing Existing Data Storage to a New Storage Location You can optionally use this function to move existing data storage to a new location as part of completing the current transaction, by specifying the last parameter as true. To move existing data as part of the next TM moveout, either omit the parameter, or specify its value as false. Note: Specifying the parameter as true performs a cluster-wide operation. If an error occurs on any node, the function displays a warning message, skips the offending node, and continues execution on the remaining nodes. Example This example sets a storage policy for the table states to use the storage labeled SSD as its default location: VMART=> select set_object_storage_policy ('states', 'SSD'); set_object_storage_policy ----------------------------------Default storage policy set. (1 row) See Also l ALTER_LOCATION_LABEL l CLEAR_OBJECT_STORAGE_POLICY l Creating Storage Policies l Moving Data Storage Locations HP Vertica Analytic Database (7.0.x) Page 902 of 1539 SQL Reference Manual SQL Functions Tuple Mover Functions This section contains tuple mover functions specific to HP Vertica. DO_TM_TASK Runs a Tuple Mover operation on one or more projections defined on the specified table. Tip: You do not need to stop the Tuple Mover to run this function. Syntax DO_TM_TASK ( 'task' [ , '[[db-name.]schema.]table' | '[[db-name.]schema.]projection' ] ) Parameters task [[db-name.]schema.] Is one of the following tuple mover operations: l 'moveout' — Moves out all projections on the specified table (if a particular projection is not specified) from WOS to ROS. l 'mergeout' — Consolidates ROS containers and purges deleted records. l 'analyze_row_count' — Automatically collects the number of rows in a projection every 60 seconds and aggregates row counts calculated during loads. [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). HP Vertica Analytic Database (7.0.x) Page 903 of 1539 SQL Reference Manual SQL Functions table Runs a tuple mover operation for all projections within the specified table. When using more than one schema, specify the schema that contains the table with the projections you want to affect, as noted above. projection If projection is not passed as an argument, all projections in the system are used. If projection is specified, DO_TM_TASK looks for a projection of that name and, if found, uses it; if a named projection is not found, the function looks for a table with that name and, if found, moves out all projections on that table. Privileges l Any INSERT/UPDATE/DELETE privilege on table l USAGE privileges on schema Notes DO_TM_TASK() is useful for moving out all projections from a table or database without having to name each projection individually. 
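For instance, as a sketch only (the table name is hypothetical), the mergeout task can be invoked the same way to consolidate ROS containers and purge deleted records for all projections on one table:

=> SELECT DO_TM_TASK('mergeout', 'sales_fact');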
Examples

The following example performs a moveout of all projections for table t1:

=> SELECT DO_TM_TASK('moveout', 't1');

The following example performs a moveout for projection t1_proj:

=> SELECT DO_TM_TASK('moveout', 't1_proj');

See Also

l COLUMN_STORAGE
l DROP_PARTITION
l DUMP_PARTITION_KEYS
l DUMP_PROJECTION_PARTITION_KEYS
l DUMP_TABLE_PARTITION_KEYS
l PARTITION_PROJECTION

Workload Management Functions

This section contains workload management functions specific to HP Vertica.

ANALYZE_WORKLOAD

Runs the Workload Analyzer (WLA), a utility that analyzes system information held in system tables. The Workload Analyzer monitors the performance of SQL queries, workload history, resources, and configurations to identify the root causes of poor query performance. Calling the ANALYZE_WORKLOAD function returns tuning recommendations for all events within the scope and time that you specify. Tuning recommendations are based on a combination of statistics, system and data collector events, and database-table-projection design. WLA's recommendations let database administrators quickly and easily tune query performance without needing sophisticated skills.

See Understanding WLA Triggering Conditions in the Administrator's Guide for the most common triggering conditions and recommendations.

Syntax 1

ANALYZE_WORKLOAD ( 'scope' , 'since_time' );

Syntax 2

ANALYZE_WORKLOAD ( 'scope' , [ true ] );

Parameters

scope Specifies which HP Vertica catalog objects to analyze. Can be one of:
l An empty string ('') returns recommendations for all database objects
l 'table_name' returns all recommendations related to the specified table
l 'schema_name' returns recommendations on all database objects in the specified schema

since_time Limits the recommendations for the events specified in 'scope' to those that occurred after the specified time, up to the current system status. If you omit the since_time parameter, ANALYZE_WORKLOAD returns recommendations on events since the last recorded time that you called this function.
Note: You must explicitly cast strings that you use for the since_time parameter to TIMESTAMP or TIMESTAMPTZ. For example:
SELECT ANALYZE_WORKLOAD('T1', '2010-10-04 11:18:15'::TIMESTAMPTZ);
SELECT ANALYZE_WORKLOAD('T1', TIMESTAMPTZ '2010-10-04 11:18:15');

true [Optional] Tells HP Vertica to record this particular call of ANALYZE_WORKLOAD in the system. The default value is false (do not record). If recorded, subsequent calls to ANALYZE_WORKLOAD analyze only the events that have occurred since this recorded time, ignoring all prior events.

Return Value

Column / Data type / Description

observation_count INTEGER Integer for the total number of events observed for this tuning recommendation. For example, if you see a return value of 1, WLA is making its first tuning recommendation for the event in 'scope'.

first_observation_time TIMESTAMPTZ Timestamp when the event first occurred. If this column returns a null value, the tuning recommendation is from the current status of the system instead of from any prior event.

last_observation_time TIMESTAMPTZ Timestamp when the event last occurred. If this column returns a null value, the tuning recommendation is from the current status of the system instead of from any prior event.

tuning_parameter VARCHAR Objects on which you should perform a tuning action. For example, a return value of:
l public.t informs the DBA to run Database Designer on table t in the public schema
l bsmith notifies a DBA to set a password for user bsmith

tuning_description VARCHAR Textual description of the tuning recommendation from the Workload Analyzer to perform on the tuning_parameter object. Examples of some of the returned values include, but are not limited to:
l Run database designer on table schema.table
l Create replicated projection for table schema.table
l Consider incremental design on query
l Reset configuration parameter with SELECT set_config_parameter('parameter', 'new_value')
l Re-segment projection projection-name on high-cardinality column(s)
l Drop the projection projection-name
l Alter a table's partition expression
l Reorganize data in partitioned table
l Decrease the MoveOutInterval configuration parameter setting

tuning_command VARCHAR Command string if the tuning action is a SQL command. For example, the following example statements recommend that the DBA:
Update statistics on a particular schema's table.column:
SELECT ANALYZE_STATISTICS('public.table.column');
Resolve mismatched configuration parameter 'LockTimeout':
SELECT * FROM CONFIGURATION_PARAMETERS WHERE parameter_name = 'LockTimeout';
Set the password for user bsmith:
ALTER USER (user) IDENTIFIED BY ('new_password');

tuning_cost VARCHAR Cost is based on the type of tuning recommendation and is one of:
l LOW—minimal impact on resources from running the tuning command
l MEDIUM—moderate impact on resources from running the tuning command
l HIGH—maximum impact on resources from running the tuning command
Depending on the size of your database or table, consider running high-cost operations after hours instead of during peak load times.

ANALYZE_WORKLOAD() returns aggregated tuning recommendations, as described in the TUNING_RECOMMENDATIONS table.

Privileges

Must be a superuser.

Examples

See Analyzing Workloads through an API in the Administrator's Guide for examples.

See Also

l TUNING_RECOMMENDATIONS

CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY

Changes the run-time priority of a query that is actively running.

Syntax

CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY(TRANSACTION_ID, 'value')

Parameters

TRANSACTION_ID An identifier for the transaction within the session. TRANSACTION_ID cannot be NULL. You can find the transaction ID in the Sessions table.

'value' The RUNTIMEPRIORITY value. Can be HIGH, MEDIUM, or LOW.

Privileges

No special privileges required. However, non-superusers can change the run-time priority of their own queries only. In addition, non-superusers can never raise the run-time priority of a query to a level higher than that of the resource pool.

Example

=> SELECT CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY(45035996273705748, 'low');

CHANGE_RUNTIME_PRIORITY

Changes the run-time priority of a query that is actively running.
Note that, while this function is still valid, you should instead use CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY to change run-time priority. CHANGE_RUNTIME_PRIORITY will be deprecated in a future release of Vertica. Syntax CHANGE_RUNTIME_PRIORITY(TRANSACTION_ID,STATEMENT_ID, 'value') Parameters TRANSACTION_ID An identifier for the transaction within the session. TRANSACTION_ID cannot be NULL. You can find the transaction ID in the Sessions table. STATEMENT_ID A unique numeric ID assigned by the HP Vertica catalog, which identifies the currently executing statement. You can find the statement ID in the Sessions table. You can specify NULL to change the run-time priority of the currently running query within the transaction. 'value' The RUNTIMEPRIORITY value. Can be HIGH, MEDIUM, or LOW. HP Vertica Analytic Database (7.0.x) Page 910 of 1539 SQL Reference Manual SQL Functions Privileges No special privileges required. However, non-super users can change the run-time priority of their own queries only. In addition, non-superusers can never raise the run-time priority of a query to a level higher than that of the resource pool. Example => SELECT CHANGE_RUNTIME_PRIORITY(45035996273705748, NULL, 'low'); CLEAR_CACHES Clears the HP Vertica internal cache files. Syntax CLEAR_CACHES ( ) Privileges Must be a superuser. Notes If you want to run benchmark tests for your queries, in addition to clearing the internal HP Vertica cache files, clear the Linux file system cache. The kernel uses unallocated memory as a cache to hold clean disk blocks. If you are running version 2.6.16 or later of Linux and you have root access, you can clear the kernel filesystem cache as follows: 1. Make sure that all data is the cache is written to disk: # sync 2. Writing to the drop_caches file causes the kernel to drop clean caches, dentries, and inodes from memory, causing that memory to become free, as follows: n To clear the page cache: # echo 1 > /proc/sys/vm/drop_caches n To clear the dentries and inodes: HP Vertica Analytic Database (7.0.x) Page 911 of 1539 SQL Reference Manual SQL Functions # echo 2 > /proc/sys/vm/drop_caches n To clear the page cache, dentries, and inodes: # echo 3 > /proc/sys/vm/drop_caches Example The following example clears the HP Vertica internal cache files: => CLEAR_CACHES(); CLEAR_CACHES -------------Cleared (1 row) SLEEP Waits a specified number of seconds before executing another statement or command. Syntax SLEEP( seconds ) Parameters seconds The wait time, specified in one or more seconds (0 or higher) expressed as a positive integer. Single quotes are optional; for example, SLEEP(3) is the same as SLEEP ('3'). Notes l This function returns value 0 when successful; otherwise it returns an error message due to syntax errors. l You cannot cancel a sleep operation. l Be cautious when using SLEEP() in an environment with shared resources, such as in combination with transactions that take exclusive locks. Example The following command suspends execution for 100 seconds: HP Vertica Analytic Database (7.0.x) Page 912 of 1539 SQL Reference Manual SQL Functions => SELECT SLEEP(100); sleep ------0 (1 row) HP Vertica Analytic Database (7.0.x) Page 913 of 1539 SQL Reference Manual SQL Functions HP Vertica Analytic Database (7.0.x) Page 914 of 1539 SQL Reference Manual SQL Statements SQL Statements The primary structure of a SQL query is its statement. 
Multiple statements are separated by semicolons; for example:

CREATE TABLE fact ( ..., date_col date NOT NULL, ...);
CREATE TABLE fact(..., state VARCHAR NOT NULL, ...);

ALTER DATABASE

Use ALTER DATABASE to:

l Drop all fault groups and their child fault groups from the specified database.
l Specify the subnet name of a public network to be used for import/export.

Syntax

ALTER DATABASE database-name
... [ DROP ALL FAULT GROUP ]
... [ EXPORT ON { subnet-name | DEFAULT } ]

Parameters

database-name The name of the database you want to alter.

DROP ALL FAULT GROUP Drops all fault groups defined on the specified database. Note that the syntax for DROP ALL FAULT GROUP uses the singular GROUP.

EXPORT ON subnet-name Specifies the subnet of the public network to be used for import/export. When set to DEFAULT, Vertica Analytic Database assumes that export should go to a private network.

Permissions

Must be a superuser to alter the database.

Example

The following command drops all fault groups from the exampledb database:

exampledb=> ALTER DATABASE exampledb DROP ALL FAULT GROUP;
ALTER DATABASE

See Also

Fault Groups in the Administrator's Guide
High Availability With Fault Groups in the Concepts Guide
Using Public and Private Networks in the Administrator's Guide
ALTER NODE in the SQL Reference Manual

ALTER FAULT GROUP

Modifies an existing fault group. For example, use the ALTER FAULT GROUP statement to:

l Add a node to or drop a node from an existing fault group
l Add a child fault group to or drop a child fault group from a parent fault group
l Rename a fault group

Syntax

ALTER FAULT GROUP fault-group-name
... [ ADD NODE node-name ]
... [ DROP NODE node-name ]
... [ ADD FAULT GROUP child-fault-group-name ]
... [ DROP FAULT GROUP child-fault-group-name ]
... [ RENAME TO new-fault-group-name ]

Parameters

fault-group-name The existing fault group name you want to modify.
Tip: For a list of all fault groups defined in the cluster, query the V_CATALOG.FAULT_GROUPS system table.

node-name The node name you want to add to or drop from the existing (parent) fault group.

child-fault-group-name The name of the child fault group you want to add to or remove from an existing parent fault group.

new-fault-group-name The new name for the fault group you want to rename.

Permissions

Must be a superuser to alter a fault group.

Example

This example renames the parent0 fault group to parent100:

exampledb=> ALTER FAULT GROUP parent0 RENAME TO parent100;
ALTER FAULT GROUP

You can verify the change by querying the V_CATALOG.FAULT_GROUPS system table:

exampledb=> SELECT member_name FROM fault_groups;
 member_name
----------------------
 v_exampledb_node0003
 parent100
 mygroup
(3 rows)

See Also

V_CATALOG.FAULT_GROUPS and V_CATALOG.CLUSTER_LAYOUT
Fault Groups in the Administrator's Guide
High Availability With Fault Groups in the Concepts Guide

ALTER FUNCTION

Alters a user-defined SQL function or user-defined function (UDF) by giving it a new name or a different schema name, or by modifying its Fenced Mode setting.

Syntax 1

ALTER FUNCTION
... [[db-name.]schema.]function-name ( [ [ argname ] argtype [, ...] ] )
... RENAME TO new_name
... SET FENCED bool_val

Syntax 2

ALTER FUNCTION
... [[db-name.]schema.]function-name ( [ [ argname ] argtype [, ...] ] )
... SET SCHEMA new_schema
... SET FENCED bool_val

Syntax 3

ALTER FUNCTION
... [[db-name.]schema.]function-name ( [ [ argname ] argtype [, ...] ] )
... SET FENCED bool_val

Parameters

[[db-name.]schema-name.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1).

function-name The name of the user-defined SQL function (function body) to alter. If the function name is schema-qualified (as described above), the function is altered in the specified schema.

argname Specifies the name of the argument.

argtype Specifies the data type for the argument that is passed to the function. Argument types must match HP Vertica type names. See SQL Data Types.

RENAME TO new_name Specifies the new name of the function.

SET SCHEMA new_schema Specifies the new schema name where the function resides.

SET FENCED bool_val A Boolean value that specifies whether Fenced Mode is enabled for this function. Fenced Mode is not available for User Defined Aggregates or User Defined Load.

Permissions

l Only a superuser or owner can alter a function.
l To rename a function (ALTER FUNCTION RENAME TO), the user must have USAGE and CREATE privileges on the schema that contains the function.
l To specify a new schema (ALTER FUNCTION SET SCHEMA), the user must have USAGE privilege on the schema that currently contains the function (old schema) and CREATE privilege on the schema to which the function will be moved (new schema).

Notes

When you alter a function, you must specify the argument type, because there could be several functions that share the same name with different argument types.

Examples

This example creates a SQL function called zerowhennull that accepts an INTEGER argument and returns an INTEGER result.

=> CREATE FUNCTION zerowhennull(x INT) RETURN INT
   AS BEGIN
       RETURN (CASE WHEN (x IS NOT NULL) THEN x ELSE 0 END);
   END;

This next command renames the zerowhennull function to zeronull:

=> ALTER FUNCTION zerowhennull(x INT) RENAME TO zeronull;
ALTER FUNCTION

This command moves the renamed function to a new schema called macros:

=> ALTER FUNCTION zeronull(x INT) SET SCHEMA macros;
ALTER FUNCTION

This command disables Fenced Mode for the Add2Ints function:

=> ALTER FUNCTION Add2Ints(INT, INT) SET FENCED false;
ALTER FUNCTION

See Also

l CREATE FUNCTION (SQL Functions)
l DROP FUNCTION
l GRANT (User Defined Extension)
l REVOKE (User Defined Extension)
l USER_FUNCTIONS
l Using User-Defined SQL Functions

ALTER LIBRARY

Replaces the Linux shared object library file (.so) or R file for an already-defined library with a new file. The new file is automatically distributed throughout the HP Vertica cluster. See Developing and Using User Defined Functions in the Programmer's Guide for details.
All of the functions that reference the library automatically begin using the new library file after it is loaded. Note: The new library must be developed in the same language as the library file being replaced. For example, you cannot use this statement to replace a C++ library file with an R library file. Syntax ALTER LIBRARY [[db-name.]schema.]library_name AS 'library_path'; Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). library_name The name of the library being altered. This library must have already been created using CREATE LIBRARY. library_path The absolute path to the replacement library file. The file must be the same type as the library file used by the current library definition. Permissions Must be a superuser to alter a library. Notes l All of the UDFs that reference the library begin calling the code in the updated library file once it has been distributed to all of the nodes in the HP Vertica cluster. HP Vertica Analytic Database (7.0.x) Page 920 of 1539 SQL Reference Manual SQL Statements l Any nodes that are down or that are added to the cluster later automatically receive a copy of the updated library file when they join the cluster. l HP Vertica does not compare the functions defined in the new library to ensure they match any currently-defined functions in the catalog. If you change the signature of a function in the library (for example, if you change the number and data types accepted by a UDSF defined in the library), calls to that function will likely generate errors. If your new library file changes the definition of a function, you must remove the function using DROP FUNCTION before using ALTER LIBRARY to load the new library. You can then recreate the function using its new signature. ALTER NODE When used with the EXPORT ON clause, specifies the network interface of the public network on individual nodes that will be used for import/export. Caution: ALTER NODE is used internally by HP Vertica. Do not use ALTER NODE for any purpose other than importing/exporting to a public network. Syntax ALTER NODE node-name EXPORT ON {network-interface-name|DEFAULT} Parameters node-name The name of the database you want to alter. EXPORT ON network-interface-n ame Specifies the network interface of the public network on the node that will be used for import/export. Permissions Must be a superuser to alter the database. See Also Using Public and Private Networks in the Administrator's Guide In the SQL Reference Manual: l ALTER DATABASE l CREATE SUBNET HP Vertica Analytic Database (7.0.x) Page 921 of 1539 SQL Reference Manual SQL Statements l CREATE NETWORK INTERFACE l V_MONITOR.NETWORK INTERFACES ALTER NETWORK INTERFACE Lets you rename a network interface. Syntax ALTER NETWORK INTERFACE network-interface-name RENAME TO new-network-interface-name The parameters are defined as follows: network-interface-name The name of the existing network interface. new-network-interface-name The new name for the network interface. Permissions Must be a superuser to alter a network interface. 
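For illustration only (the interface name is hypothetical), renaming a network interface follows the syntax shown above:

=> ALTER NETWORK INTERFACE mynetwork RENAME TO mynetwork_renamed;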
ALTER PROJECTION RENAME Initiates a rename operation on the specified projection. Syntax ALTER PROJECTION [ [db-name.]schema.]projection-name RENAME TO new-projection-name HP Vertica Analytic Database (7.0.x) Page 922 of 1539 SQL Reference Manual SQL Statements Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection-name Specifies the projection to change. You must include the base table name prefix that is added automatically when you create a projection. new-projection-name Specifies the new projection name. Permissions To rename a projection, the user must own the anchor table for which the projection was created and have USAGE and CREATE privileges on the schema that contains the projection. Notes The projection must exist before it can be renamed. See Also l CREATE PROJECTION ALTER PROFILE Changes a profile. Only a database superuser can alter a profile. Syntax ALTER PROFILE name LIMIT ... [PASSWORD_LIFE_TIME {life-limit | DEFAULT | UNLIMITED}] ... [PASSWORD_GRACE_TIME {grace_period | DEFAULT | UNLIMITED}] ... [FAILED_LOGIN_ATTEMPTS {login-limit | DEFAULT | UNLIMITED}] ... [PASSWORD_LOCK_TIME {lock-period | DEFAULT | UNLIMITED}] ... [PASSWORD_REUSE_MAX {reuse-limit | DEFAULT | UNLIMITED}] ... [PASSWORD_REUSE_TIME {reuse-period | DEFAULT | UNLIMITED}] ... [PASSWORD_MAX_LENGTH {max-length | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_LENGTH {min-length | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_LETTERS {min-letters | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_UPPERCASE_LETTERS {min-cap-letters | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_LOWERCASE_LETTERS {min-lower-letters | DEFAULT | UNLIMITED}] HP Vertica Analytic Database (7.0.x) Page 923 of 1539 SQL Reference Manual SQL Statements ... [PASSWORD_MIN_DIGITS {min-digits | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_SYMBOLS {min-symbols | DEFAULT | UNLIMITED}] Note: For all parameters, the special value DEFAULT means the parameter is inherited from the DEFAULT profile. Parameters Meaning of UNLIMITED value Name Description name The name of the profile to create PASSWORD_LIFE_TIME life-limit Integer number of days a Passwords password remains valid. never expire. After the time elapses, the user must change the password (or will be warned that their password has expired if PASSWORD_GRACE_ TIME is set to a value other than zero or UNLIMITED). PASSWORD_GRACE_TIMEgrace-period Integer number of days the users are allowed to login (while being issued a warning message) after their passwords are older than the PASSWORD_ LIFE_TIME. After this period expires, users are forced to change their passwords on login if they have not done so after their password expired. No grace period (the same as zero) FAILED_LOGIN_ATTEMPTSlogin-limit The number of consecutive failed login attempts that result in a user's account being locked. Accounts are never locked, no matter how many failed login attempts are made. 
HP Vertica Analytic Database (7.0.x) N/A Page 924 of 1539 SQL Reference Manual SQL Statements Meaning of UNLIMITED value Name Description PASSWORD_LOCK_TIME lock-period Integer value setting the number of days an account is locked after the user's account was locked by having too many failed login attempts. After the PASSWORD_LOCK_ TIME has expired, the account is automatically unlocked. Accounts locked because of too many failed login attempts are never automatically unlocked. They must be manually unlocked by the database superuser. PASSWORD_REUSE_MAX reuse-limit The number of password changes that need to occur before the current password can be reused. Users are not required to change passwords a certain number of times before reusing an old password. PASSWORD_REUSE_TIMEreuse-period The integer number of days that must pass after a password has been set before the before it can be reused. Password reuse is not limited by time. PASSWORD_MAX_LENGTH max-length The maximum number of characters allowed in a password. Value must be in the range of 8 to 100. Passwords are limited to 100 characters. PASSWORD_MIN_LENGTH min-length The minimum number of characters required in a password. Valid range is 0 to max-length. Equal to maxlength. PASSWORD_MIN_LETTERSmin-of-letters Minimum number of letters (a-z and A-Z) that must be in a password. Valid ranged is 0 to max-length. 0 (no minimum). HP Vertica Analytic Database (7.0.x) Page 925 of 1539 SQL Reference Manual SQL Statements Meaning of UNLIMITED value Name Description PASSWORD_MIN_UPPERCASE_LETTERSmin-cap-letters Minimum number of capital letters (A-Z) that must be in a password. Valid range is is 0 to max-length. 0 (no minimum). PASSWORD_MIN_LOWERCASE_LETTERSmin-lower-letters Minimum number of lowercase letters (a-z) that must be in a password. Valid range is is 0 to maxlength. 0 (no minimum). PASSWORD_MIN_DIGITS min-digits Minimum number of digits (0-9) that must be in a password. Valid range is is 0 to max-length. 0 (no minimum). PASSWORD_MIN_SYMBOLSmin-symbols Minimum number of 0 (no symbols (any printable minimum). non-letter and non-digit character, such as $, #, @, and so on) that must be in a password. Valid range is is 0 to max-length. Permissions Must be a superuser to alter a profile. Note: Only the profile settings for how many failed login attempts trigger account locking and how long accounts are locked have an effect on external password authentication methods such as LDAP or Kerberos. All password complexity, reuse, and lifetime settings have an effect on passwords managed by HP Vertica only. See Also l CREATE PROFILE l DROP PROFILE ALTER PROFILE RENAME Rename an existing profile. HP Vertica Analytic Database (7.0.x) Page 926 of 1539 SQL Reference Manual SQL Statements Syntax ALTER PROFILE name RENAME TO newname; Parameters name The current name of the profile. newname The new name for the profile. Permissions Must be a superuser to alter a profile. See Also l ALTER PROFILE l CREATE PROFILE l DROP PROFILE ALTER RESOURCE POOL Modifies a resource pool. The resource pool must exist before you can issue the ALTER RESOURCE POOL command. Syntax ALTER ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ ] ... 
[ RESOURCE POOL pool-name MEMORYSIZE 'sizeUnits' MAXMEMORYSIZE 'sizeUnits' ] PRIORITY {integer | DEFAULT } ] EXECUTIONPARALLELISM {integer | AUTO | DEFAULT } ] RUNTIMEPRIORITY (HIGH | MEDIUM | LOW | DEFAULT) ] RUNTIMEPRIORITYTHRESHOLD {integer |DEFAULT }] QUEUETIMEOUT {integer | NONE |DEFAULT } ] PLANNEDCONCURRENCY {integer | DEFAULT | AUTO } ] RUNTIMECAP {interval | NONE |DEFAULT } ] MAXCONCURRENCY {integer | NONE | DEFAULT } ] SINGLEINITIATOR { bool | DEFAULT } ] CPUAFFINITYSET { 'cpuIndex' | 'cpuIndex list' | 'integer percentage' | NONE | DEFAULT } CPUAFFINITYMODE { SHARED | EXCLUSIVE | ANY | DEFAULT } ] HP Vertica Analytic Database (7.0.x) Page 927 of 1539 SQL Reference Manual SQL Statements Parameters If you set any of these parameters to DEFAULT, HP Vertica sets the parameter to the value stored in RESOURCE_POOL_DEFAULTS. pool-name Specifies the name of the resource pool to alter. Resource pool names are subject to the same rules as HP Vertica Identifiers. Built-in pool names cannot be used for userdefined pools. MEMORYSIZE 'sizeUnits' [Default 0%] The amount of memory allocated to this pool per node and not across the whole cluster. The default of 0% means that the pool has no memory allocated to it and must exclusively borrow from the GENERAL pool. Units can be one of the following: l Percentage (%) of total memory available to the Resource Manager. (In this case size must be 0-100). l K Kilobytes l M Megabytes l G Gigabytes l T Terabytes See also MAXMEMORYSIZE parameter. HP Vertica Analytic Database (7.0.x) Page 928 of 1539 SQL Reference Manual SQL Statements MAXMEMORYSIZE'sizeUnits' | NONE [Default unlimited] Maximum size the resource pool could grow by borrowing memory from the GENERAL pool. See BuiltIn Pools for a discussion on how resource pools interact with the GENERAL pool. Units can be one of the following: l % percentage of total memory available to the Resource Manager. (In this case, size must be 0-100). This notation has special meaning for the GENERAL pool, described in Notes below. l K—Kilobytes l M—Megabytes l G—Gigabytes l T—Terabytes If MAXMEMORYSIZE NONE is specified, there is no upper limit. Note: The MAXMEMORYSIZE parameter refers to the maximum memory borrowed by this pool per node and not across the whole cluster. The default of unlimited means that the pool can borrow as much memory from GENERAL pool as is available. The MAXMEMORYSIZE of the WOSDATA and SYSDATA pools cannot be changed as long as any of their memory is in use. For example, in order to change the MAXMEMORYSIZE of the WOSDATA pool, you need to disable any trickle loading jobs and wait until the WOS is empty before you can change the MAXMEMORYSIZE. HP Vertica Analytic Database (7.0.x) Page 929 of 1539 SQL Reference Manual SQL Statements EXECUTIONPARALLELISM [Default: AUTO] Limits the number of threads used to process any single query issued in this resource pool. When set to AUTO, HP Vertica sets this value based on the number of cores, available memory, and amount of data in the system. Unless data is limited, or the amount of data is very small, HP Vertica sets this value to the number of cores on the node. Reducing this value increases the throughput of short queries issued in the pool, especially if the queries are executed concurrently. If you choose to set this parameter manually, set it to a value between 1 and the number of cores. 
RUNTIMEPRORITY [Default: MEDIUM] Determines the amount of run-time resources (CPU, I/O bandwidth) the Resource Manager should dedicate to queries already running in the resource pool. Valid values are: l HIGH l MEDIUM l LOW Queries with a HIGH run-time priority are given more CPU and I/O resources than those with a MEDIUM or LOW run-time priority. RUNTIMEPRIORITYTHRESHOLD Specifies a time limit (in seconds) by which a query must finish before the Resource Manager assigns to it the RUNTIMEPRIORITY of the resource pool. All queries begin runnng at a HIGH priority. When a query's duration exceeds this threshold, it is assigned the RUNTIMEPRIORITY of the resource pool. [Default 2] PRIORITY HP Vertica Analytic Database (7.0.x) [Default 0] An integer that represents priority of queries in this pool, when they compete for resources in the GENERAL pool. Higher numbers denote higher priority. Administrator-created resource pools can have a priority of -100 to 100. The built-in resource pools SYSQUERY, RECOVERY, and TM can have a range of -110 to 110. Page 930 of 1539 SQL Reference Manual SQL Statements QUEUETIMEOUT [Default 300 seconds] An integer, in seconds, that represents the maximum amount of time the request is allowed to wait for resources to become available before being rejected. If set to NONE, the request can be queued for an unlimited amount of time. RUNTIMECAP HP Vertica Analytic Database (7.0.x) [Default: NONE] Sets the maximum amount of time any query on the pool can execute. Set RUNTIMECAP using interval, such as '1 minute' or '100 seconds' (see Interval Values for details). This value cannot exceed one year. Setting this value to NONE specifies that there is no time limit on queries running on the pool. If the user or session also has a RUNTIMECAP, the shorter limit applies. Page 931 of 1539 SQL Reference Manual SQL Statements PLANNEDCONCURRENCY [Default: AUTO] When set to AUTO, this value is calculated automatically at query runtime. HP Vertica sets this parameter to the lower of these two calculations: l Number of cores l Memory/2GB When this parameter is set to AUTO, HP Vertica will not choose a value lower than 4. HP Vertica advises changing this value only after evaluating performance over a period of time. Notes: l The PLANNEDCONCURRENCY setting for the GENERAL pool defaults to a too-small value for machines with large numbers of cores. To adjust to a more appropriate value: => ALTER RESOURCE POOL general PLANNEDCONCURRENCY <#cores>; l This is a cluster-wide maximum and not a per-node limit. l For clusters where the number of cores differs on different nodes, AUTO can apply differently on each node. Distributed queries run like the minimal effective planned concurrency. Single node queries run with the planned concurrency of the initiator. l If you created or upgraded your database in 4.0 or 4.1, the PLANNEDCONCURRENCY setting on the GENERAL pool defaults to a too-small value for machines with large numbers of cores. To adjust to a more appropriate value: => ALTER RESOURCE POOL general PLANNEDCONCURRENCY <#cores>; You need to set this parameter only if you created a database before 4.1, patchset 1. See Guidelines for Setting Pool Parameters in the Administrator's Guide SINGLEINITIATOR HP Vertica Analytic Database (7.0.x) [Default false] This parameter is included for backwards compatibility only. Do not change the value. 
Page 932 of 1539 SQL Reference Manual SQL Statements MAXCONCURRENCY [Default unlimited] An integer that represents the maximum number of concurrent execution slots available to the resource pool. If MAXCONCURRENCY NONE is specified, there is no limit. Note: This is a cluster wide maximum and NOT a per-node limit. HP Vertica Analytic Database (7.0.x) Page 933 of 1539 SQL Reference Manual SQL Statements CPUAFFINITYSET [Default none] The set of CPUs on which queries associated with this pool are executed. Can only be used with userdefined resource pools (is_internal = f). Note: If you are changing the CPUAFFINITYSET from the default value (NONE), then you must also specify the CPUAFFINITYMODE at the same time. For example, when creating a new resource pool, or altering an existing resource pool that has a CPUAFFINITYSET that is not defined (or NONE), then specify CPUAFFINITYMODE to a supported mode for the set: CREATE RESOURCE POOL load CPUAFFINITYSET '25%' CPUAFFINITYMODE SHARED For this setting, CPU numbering is defined by the number of CPUs in the system based on a 0 index. You can obtain the number of CPUs on the system with the command: lscpu | grep "^CPU(s)" For example, if the above command returned: CPU(s): 8, then 0–7 are valid CPU indexes for this parameter. Note: HP Vertica is limited to 1024 CPUs per node. Value of this parameter can be one of the following: l 'cpuIndex'—Index of a specific CPU on which to run the queries for this pool. For example '3'. Queries in this pool are not run on any other CPUs besides the CPU defined. l 'cpuIndex list'—List of CPUs on which to run the queries for this pool. CPU indexes can be comma-separated for non-continous indexes, or use the '-' character for continuous CPU indexes. For example, '0,2–4' is a CPU index list containing CPU 0, CPU 2, CPU 3, and CPU 4. Queries in this pool are not run on any other CPUs besides the CPUs defined. Note: The cpuIndex must be unique across all resource pools if using a CPUAFFINITYMODE of exclusive. the cpuIndex does not need to be unqiue l HP Vertica Analytic Database (7.0.x) 'integer percentage'—Percentage of all available CPUs to use for this query. For example '50%' uses up to half of the Page 934 of 1539 SQL Reference Manual SQL Statements available CPUs that are not reserved in other resource pools (with EXCLUSIVE affinity) for queries in this pool. Note that you must define this setting in whole percentages and that HP Vertica rounds the percentages down to account for whole CPU units. For example, a '20%' setting on an 8 CPU system is rounded down to 12.5% actual CPU percentage. (1/8 = 12.5% 2/8 - 25%). In a 16 CPU system, the setting is rounded down to 18.75% (3/16 = 18.75%, 4/16 = 25%). l NONE—Set no CPU affinity for this resource pool. The queries associated with this pool are executed on any CPU. l DEFAULT—Same as NONE. Important: CPU affinity settings apply to all nodes in the cluster. CPU counts must be identical on all nodes in the cluster. HP Vertica Analytic Database (7.0.x) Page 935 of 1539 SQL Reference Manual SQL Statements CPUAFFINITYMODE [Default any] The mode in which CPU affinity operates for this resource pool. Can be one of: l SHARED—Queries run in this pool are constrained to run only on CPUs defined in CPUAFFINITYSET, but other HP Vertica resource pools can also run queries that utilize the same CPUs. l EXCLUSIVE—The CPUs defined in CPUAFFINITYSET are exclusively assigned to this resource pool. Other HP Vertica resource pools are unable to use the CPUs. 
In the case that CPUAFFINITYSET is set as a percentage, then that percentage of CPU resources available to HP Vertica is assigned solely for this resource pool. l ANY—Queries can be run on any CPU. If CPUAFFINITYSET is set to a non-default value, and you then set CPUAFFINITYMODE to ANY, then the CPUAFFINITYSET is removed (set to NONE) by HP Vertica, since the ANY mode is only valid for the NONE set. l DEFAULT—Same as ANY. Note: Important! CPU affinity settings apply to all nodes in the cluster. CPU counts must be identical on all nodes in the cluster. Permissions Must be a superuser on the resource pool for the following parameters: l MAXMEMORYSIZE l PRIORITY l QUEUETIMEOUT l CPUAFFINITYSET l CPUAFFINITYMODE The following parameters require UPDATE privileges: l PLANNEDCONCURRENCY l SINGLEINITIATOR HP Vertica Analytic Database (7.0.x) Page 936 of 1539 SQL Reference Manual SQL Statements MAXCONCURRENCY l Notes l New resource pools can be created or altered without shutting down the system. The only exception is that changes to GENERAL.MAXMEMORYSIZE take effect only on a node restart. When a new pool is created (or its size altered), MEMORYSIZE amount of memory is taken out of the GENERAL pool. If the GENERAL pool does not currently have sufficient memory to create the pool due to existing queries being processed, a request is made to the system to create a pool as soon as resources become available. The pool is in operation as soon as the specified amount of memory becomes available. You can monitor whether the ALTER has been completed in the V_ MONITOR.RESOURCE_POOL_STATUS system table. l If the GENERAL.MAXMEMORYSIZE parameter is modified while a node is down, and that node is restarted, the restarted node sees the new setting whereas other nodes continue to see the old setting until they are restarted. HP Vertica recommends that you do not change this parameter unless absolutely necessary. l Under normal operation, MEMORYSIZE is required to be less than MAXMEMORYSIZE and an error is returned during CREATE/ALTER operations if this size limit is violated. However, under some circumstances where the node specification changes by addition/removal of memory, or if the database is moved to a different cluster, this invariant could be violated. In this case, MAXMEMORYSIZE is reduced to MEMORYSIZE. l If two pools have the same PRIORITY, their requests are allowed to borrow from the GENERAL pool in order of arrival. l CPUAFFINITYSET and CPUAFFINITYMODE cane only be used with user-created resource pools. See Guidelines for Setting Pool Parameters in the Administrator's Guide for details about setting these parameters. See Also l CREATE RESOURCE POOL l CREATE USER l DROP RESOURCE POOL l RESOURCE_POOL_STATUS l SET SESSION RESOURCE_POOL SET SESSION MEMORYCAP l l HP Vertica Analytic Database (7.0.x) Page 937 of 1539 SQL Reference Manual SQL Statements ALTER ROLE RENAME Rename an existing role. Syntax ALTER ROLE name RENAME [TO] new_name; Parameters name The current name of the role that you want to rename. new_name The new name for the role. Permissions Must be a superuser to rename a role. Example => ALTER ROLE applicationadministrator RENAME TO appadmin; ALTER ROLE See Also l CREATE ROLE l DROP ROLE ALTER SCHEMA Renames one or more existing schemas. Syntax ALTER SCHEMA [db-name.]schema-name [ , ... ] HP Vertica Analytic Database (7.0.x) ... RENAME TO new-schema-name [ , ... ] Page 938 of 1539 SQL Reference Manual SQL Statements Parameters [db-name.] [Optional] Specifies the current database name. 
Using a database name prefix is optional, and does not affect the command in any way. You must be connected to the specified database. schema-name Specifies the name of one or more schemas to rename. RENAME TO Specifies one or more new schema names. The lists of schemas to rename and the new schema names are parsed from left to right and matched accordingly using one-to-one correspondence. When renaming schemas, be sure to follow these standards: l The number of schemas to rename must match the number of new schema names supplied. l The new schema names must not already exist. The RENAME TO parameter is applied atomically. Either all the schemas are renamed or none of the schemas are renamed. If, for example, the number of schemas to rename does not match the number of new names supplied, none of the schemas are renamed. Note: Renaming a schema that is referenced by a view will cause the view to fail unless another schema is created to replace it. Privileges Schema owner or user requires CREATE privilege on the database Notes Renaming schemas does not affect existing pre-join projections because pre-join projections refer to schemas by the schemas' unique numeric IDs (OIDs), and the OIDs for schemas are not changed by ALTER SCHEMA. Tip Renaming schemas is useful for swapping schemas without actually moving data. To facilitate the swap, enter a non-existent, temporary placeholder schema. The following example uses the temporary schema temps to facilitate swapping schema S1 with schema S2. In this example, S1 is renamed to temps. Then S2 is renamed to S1. Finally, temps is renamed to S2. ALTER SCHEMA S1, S2, temps RENAME TO temps, S1, S2; HP Vertica Analytic Database (7.0.x) Page 939 of 1539 SQL Reference Manual SQL Statements Examples The following example renames schema S1 to S3 and schema S2 to S4: => ALTER SCHEMA S1, S2 RENAME TO S3, S4; See Also l CREATE SCHEMA l DROP SCHEMA ALTER SEQUENCE Changes the attributes of an existing sequence. All changes take effect in the next database session. Any parameters not set during an ALTER SEQUENCE statement retain their prior settings. You must be a sequence owner or a superuser to use this statement. Note: You can rename an existing sequence, or the schema of a sequence, but neither of these changes can be combined with any other optional parameters. Syntax ALTER ... [ ... [ ... | ... [ ... [ ... [ ... [ ... [ ... [ SEQUENCE [[db-name.]schema.]sequence-name RENAME TO new-name | SET SCHEMA new-schema-name] OWNER TO new-owner-name ] INCREMENT [ BY ] increment-value ] MINVALUE minvalue | NO MINVALUE ] MAXVALUE maxvalue | NO MAXVALUE ] RESTART [ WITH ] restart ] CACHE cache ] CYCLE | NO CYCLE ] HP Vertica Analytic Database (7.0.x) Page 940 of 1539 SQL Reference Manual SQL Statements Parameters [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). sequence-name The name of the sequence to alter. 
The name must be unique among sequences, tables, projections, and views. RENAME TO new-name Renames a sequence within the same schema. To move a sequence, see SET SCHEMA below. OWNER TO new-owner-name Reassigns the current sequence owner to the specified owner. Only the sequence owner or a superuser can change ownership, and reassignment does not transfer grants from the original owner to the new owner (grants made by the original owner are dropped). SET SCHEMA new-schema-name Moves a sequence between schemas. INCREMENT [BY] increment-value Modifies how much to increment or decrement the current sequence to create a new value. A positive value increments an ascending sequence, and a negative value decrements the sequence. MINVALUE minvalue | NO MINVALUE Modifies the minimum value a sequence can generate. If you change this value and the current value exceeds the range, the current value is changed to the minimum value if increment is greater than zero, or to the maximum value if increment is less than zero. HP Vertica Analytic Database (7.0.x) Page 941 of 1539 SQL Reference Manual SQL Statements MAXVALUE maxvalue | NO MAXVALUE Modifies the maximum value for the sequence. If you change this value and the current value exceeds the range, the current value is changed to the minimum value if increment is greater than zero, or to the maximum value if increment is less than zero. RESTART [WITH] restart Changes the current value of the sequence to restart. The subsequent call to NEXTVAL will return the restart value. CACHE [value | NO CACHE] Modifies how many sequence numbers are preallocated and stored in memory for faster access. The default is 250,000 with a minimum value of 1. Specifying a value of 1 indicates that only one value can be generated at a time, since no cache is assigned. Alternatively, you can specify NO CACHE. CYCLE | NO CYCLE Allows you you to switch between CYCLE and NO CYCLE. The CYCLE option allows the sequence to wrap around when the maxvalue or minvalue is reached by an ascending or descending sequence respectively. If the limit is reached, the next number generated is the minvalue or maxvalue, respectively. If NO CYCLE is specified, any calls to NEXTVAL after the sequence has reached its maximum/minimum value, return an error. The default is NO CYCLE. Permissions l To rename a schema, the user must be the sequence owner and have USAGE and CREATE privileges on the schema. l To move a sequence between schemas, the user must be the sequence owner and have USAGE privilege on the schema that currently contains the sequence (old schema) and CREATE privilege on new schema to contain the sequence. Examples The following example modifies an ascending sequence called sequential to restart at 105: ALTER SEQUENCE sequential RESTART WITH 105; The following example moves a sequence from one schema to another: ALTER SEQUENCE public.sequence SET SCHEMA vmart; The following example renames a sequence in the Vmart schema: HP Vertica Analytic Database (7.0.x) Page 942 of 1539 SQL Reference Manual SQL Statements ALTER SEQUENCE vmart.sequence RENAME TO serial; The following example reassigns sequence ownership from the current owner to user Bob: ALTER SEQUENCE sequential OWNER TO Bob; See Also l CREATE SEQUENCE l CURRVAL l DROP SEQUENCE l GRANT (Sequence) NEXTVAL l l l l ALTER SUBNET Renames an existing subnet. Syntax ALTER SUBNET subnet-name RENAME TO 'new-subnet-name' Parameters The parameters are defined as follows: subnet-name The name of the existing subnet. 
new-subnet-name The new name for the subnet. Permissions Must be a superuser to alter a subnet. HP Vertica Analytic Database (7.0.x) Page 943 of 1539 SQL Reference Manual SQL Statements ALTER TABLE Modifies an existing table with a new table definition. Syntax 1 ALTER TABLE [[db-name.]schema.]table-name { ... ADD COLUMN column-definition ( table ) [CASCADE] ... | ADD Table-Constraint ... | ALTER COLUMN column-name | [ SET DEFAULT expression ] | [ DROP DEFAULT ] | [ { SET | DROP } NOT NULL] | [ SET DATA TYPE data-type ] ... | DROP CONSTRAINT constraint-name [ CASCADE | RESTRICT ] ... | [ DROP [ COLUMN ] column-name [ CASCADE | RESTRICT ] ] ... | RENAME [ COLUMN ] column TO new-column ... | SET SCHEMA new-schema-name [ CASCADE | RESTRICT ] ... | PARTITION BY partition-clause [ REORGANIZE ] ... | REORGANIZE ... | REMOVE PARTITIONING ... | OWNER TO new-owner-name } Syntax 2 ALTER TABLE [[db-name.]schema.]table-name [ , ... ]... RENAME [TO] new-table-name [ , ... ] HP Vertica Analytic Database (7.0.x) Page 944 of 1539 SQL Reference Manual SQL Statements Parameters [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column ( mydb.myschema.mytable.colu mn1). table-name Specifies the name of the table to alter. When using more than one schema, specify the schema that contains the table. You can use ALTER TABLE in conjunction with SET SCHEMA to move only one table between schemas at a time. When using ALTER TABLE to rename one or more tables, you can specify a comma-delimited list of table names to rename. HP Vertica Analytic Database (7.0.x) Page 945 of 1539 SQL Reference Manual SQL Statements ADD COLUMN column-definition [CASCADE] Adds a new column to table as defined by column-definition and automatically adds the new column with a unique projection column name to the superprojection for that table. If you use the optional CASCADE keyword, HP Vertica also adds the new table column to all pre-join projections where the table is specified. When you use the CASCADE keyword, if you specify a default value for the column that is not a constant, HP Verticadoes not add the column to the pre-join projections. column-definition is any valid SQL function that does not contain volatile functions. For example, a constant or a function of other columns in the same table. ADD COLUMN operations take an O lock on the table until the operation completes, in order to prevent DELETE, UPDATE, INSERT, and COPY statements from affecting the table. If you use the CASCADE keyword, HP Vertica also takes O locks on all the anchor tables of any pre-join projections associated with that table.One consequence of the O lock is that SELECT statements issued at SERIALIZABLE isolation level are blocked until the operation completes. You can add a column when nodes are down. For more information, see Altering Tables in the Administrator's Guide. 
HP Vertica Analytic Database (7.0.x) Page 946 of 1539 SQL Reference Manual SQL Statements ADD table-constraint Adds a Table-Constraint to a table that does not have any associated projections. Adding a table constraint has no effect on views that reference the table. See About Constraints in the Administrator's Guide. ALTER COLUMN column-name [SET DEFAULT expression] [DROP DEFAULT] [{SET | DROP} NOT NULL] HP Vertica Analytic Database (7.0.x) Alters an existing table column to change, drop, or establish a DEFAULT expression for the column, or set or drop a NOT NULL constraint. (You can also use DROP DEFAULT to remove a default expression.) You can specify a volatile function or a user-defined function as the default expression for a column as part of an ALTER COLUMN SET DEFAULT statement. See Types of UDFs in the Programmer's Guide. Page 947 of 1539 SQL Reference Manual SQL Statements SET DATA TYPE data-type Changes the column's data type to any type whose conversion does not require storage reorganization. The following types are the conversions that HP Vertica supports: l Binary types—expansion and contraction (cannot convert between BINARY and VARBINARY types). l Character types—all conversions allowed, even between CHAR and VARCHAR l Exact numeric types– INTEGER, INT, BIGINT, TINYINT, INT8, SMALLINT, and all NUMERIC values of scale <=18 and precision 0 are interchangeable. For NUMERIC data types, you cannot alter precision, but you can change the scale in the ranges (0-18), (19-37), and so on. Restrictions You also cannot alter a column that is used in the CREATE PROJECTION .. SEGMENTED BY clause. To resize a segmented column, you must either create new superprojections and omit the column in the segmentation clause, or you can create a new table and projections with the column size that specifies the new size. The following type conversions are not allowed: l HP Vertica Analytic Database (7.0.x) Boolean to other types Page 948 of 1539 SQL Reference Manual SQL Statements DROP CONSTRAINT name [ CASCADE | RESTRICT ] l DATE/TIME type conversion l Approximate numeric type conversions l Conversions between BINARY and VARBINARY Drops the specified tableconstraint from the table. Use the CASCADE keyword to drop a constraint upon which something else depends. For example, a FOREIGN KEY constraint depends on a UNIQUE or PRIMARY KEY constraint on the referenced columns. Use the RESTRICT keyword to drop the constraint only from the given table. Dropping a table constraint has no effect on views that reference the table. HP Vertica Analytic Database (7.0.x) Page 949 of 1539 SQL Reference Manual SQL Statements DROP COLUMN column-name [ CASCADE | RESTRICT ] Drops both the specified column from the table and the ROS containers that correspond to the dropped column. Because drop operations physically purge object storage and catalog definitions (table history) from the table, AT EPOCH (historical) queries return nothing for the dropped column. Restrictions l At the table level, you cannot drop or alter a primary key column or a column participating in the table's partitioning clause. l At the projection level, you cannot drop the first column in a projection's sort order or columns that participate in the segmentation expression of a projection. l All nodes must be up for the drop operation to succeed. Using CASCADE to force a drop You can use the CASCADE keyword to drop a column if that column: HP Vertica Analytic Database (7.0.x) l Has a constraint of any kind on it. 
l Participates in the projection's sort order. l Participates in a pre-join projection or participates in the projection's segmentation expression. Note that when a pre-join projection contains a column to be dropped with Page 950 of 1539 SQL Reference Manual SQL Statements CASCADE, HP Vertica tries to drop the projection. In all cases, CASCADE tries to drop the projection(s) and will roll back if K-safety is compromised. See the Dropping a table column in the Administrator's Guide for additional details about CASCADE behavior and examples. Use the RESTRICT keyword to drop the column only from the given table. RENAME [TO] Renames one or more tables. In either case, the keyword changes the name of the table or tables to the specified name or names. For more information, see Altering Tables in the Administrator's Guide. Renaming a table requires USAGE and CREATE privilege on the schema that contains the table. RENAME [ COLUMN ] Renames the specified column within the table. If a column that is referenced by a view is renamed, the column does not appear in the result set of the view even if the view uses the wild card (*) to represent all columns in the table. Recreate the view to incorporate the column's new name. HP Vertica Analytic Database (7.0.x) Page 951 of 1539 SQL Reference Manual SQL Statements SET SCHEMA new-schema-name [ RESTRICT | CASCADE ] Moves a table to the specified schema. You must have USAGE privilege on the old schema and CREATE privilege on new schema. SET SCHEMA supports moving only one table between schemas at a time. You cannot move temporary tables between schemas. For more information, see Altering Tables in the Administrator's Guide. HP Vertica Analytic Database (7.0.x) Page 952 of 1539 SQL Reference Manual SQL Statements PARTITION BY partition-clause [ REORGANIZE ] Partitions or re-partitions a table according to the partition-clause that you define. Existing partition keys are immediately dropped when you run the command. You can use the PARTITION BY and REORGANIZE keywords separately or together. However, you cannot use these keywords with any other clauses. Partition-clause expressions are limited in the following ways: l Your partition-clause must calculate a single non-null value for each row. You can reference multiple columns, but each row must return a single value. l You can specify leaf expressions, functions, and operators in the partition clause expression. l All leaf expressions in the partition clause must be either constants or columns of the table. l Aggregate functions and queries are not permitted in the partition-clause expression. l SQL functions used in the partition-clause expression must be immutable. Partitioning or re-partitioning tables requires USAGE privilege on the schema that contains the table. See Partitioning, repartitioning, and reorganizing tables in the HP Vertica Analytic Database (7.0.x) Page 953 of 1539 SQL Reference Manual SQL Statements Administrator's Guide for details and best practices on repartitioning and reorganizing data, as well as how to monitor REORGANIZE operations. Do not alter table partitioning when nodes are down. Doing so prevents those nodes down from assisting in database recovery. REMOVE PARTITIONING Immediately removes partitioning on a table. The ROS containers are not immediately altered, but are later cleaned by the Tuple Mover. OWNER TO new-owner-name Changes the table owner. 
Only the table owner or a superuser can change ownership, and reassignment does not transfer grants from the original owner to the new owner (grants made by the original owner are dropped). Changing the table owner transfers ownership of the associated IDENTITY/AUTO_ INCREMENT sequences (defined in CREATE TABLE column-constraint syntax) but not other REFERENCES sequences. See Changing a table owner and Changing a sequence owner in the Administrator's Guide. Permissions You must be a table owner or a superuser and have USAGE privileges on schema that contains the table in order to: HP Vertica Analytic Database (7.0.x) Page 954 of 1539 SQL Reference Manual SQL Statements l Add, drop, rename, or alter column l Add or drop a constraint l Partition or re-partition the table To rename a table, you must have USAGE and CREATE privilege on the schema that contains the table. Moving a table to a new schema requires: l USAGE privilege on the old schema l CREATE privilege on new schema Table Behavior After Alteration After you modify a column, any new data that you load will conform to the modified table definition. If you restore the database to an epoch other than the current epoch, the restore operation will overwrite the changes with the prior table schema. For example, if you change a column's data type from CHAR(8) to CHAR(16) in epoch 10 and you restore the database from epoch 5, the column will be CHAR(8) again. Changing a Data Type for a Column Specified in a SEGMENTED BY Clause If you create a table and do not create a superprojection for it, HP Vertica automatically creates a superprojection when you first load data into the table. By default, superprojections are segmented by all columns to ensure that all of the data is available for queries. If you try to alter a column used in the superprojection's segmentation clause, HP Vertica returns an error message like in the following example: => CREATE TABLE colmod (c1 VARCHAR(13), c2 VARCHAR (8), c3 INT); CREATE TABLE => CREATE PROJECTION colmod_c1seg AS SELECT c1 FROM colmod SEGMENTED BY HASH(c1) ALL NODES; WARNING 4116: No super projections created for table public.colmod. HINT: Default super projections will be automatically created with the next DML CREATE PROJECTION => ALTER TABLE colmod ALTER COLUMN c1 SET DATA TYPE VARCHAR(30); ROLLBACK 2353: Cannot alter type of column "c1" since it is referenced in the segmentation expression of projection "colmod_c1seg" To resize a segmented column, you must either create new superprojections and omit the column in the segmentation clause or create a new table (with new column size) and projections. HP Vertica Analytic Database (7.0.x) Page 955 of 1539 SQL Reference Manual SQL Statements Locked Tables If the operation cannot obtain an O Lock on the table(s), HP Vertica attempts to close any internal Tuple Mover (TM) sessions running on the same table(s) so that the operation can proceed. Explicit TM operations that are running in user sessions are not closed. If an explicit TM operation is running on the table, then the operation cannot proceed until the explicit TM operation completes. 
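As a rough sketch of the new-table workaround described above for resizing a column that appears in a segmentation clause (reusing the colmod example; the replacement table name is arbitrary and this is not the only possible approach):
=> CREATE TABLE colmod_new (c1 VARCHAR(30), c2 VARCHAR(8), c3 INT);
=> INSERT INTO colmod_new SELECT c1, c2, c3 FROM colmod;
=> DROP TABLE colmod CASCADE;
=> ALTER TABLE colmod_new RENAME TO colmod;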
Examples The following example drops the default expression specified for the Discontinued_flag column: => ALTER TABLE Retail.Product_Dimension ALTER COLUMN Discontinued_flag DROP DEFAULT; The following example renames a column in the Retail.Product_Dimension table from Product_ description to Item_description: => ALTER TABLE Retail.Product_Dimension RENAME COLUMN Product_description TO Item_description; The following example moves table T1 from schema S1 to schema S2. SET SCHEMA defaults to CASCADE, so all the projections that are anchored on table T1 are automatically moved to schema S2 regardless of the schema in which they reside: => ALTER TABLE S1.T1 SET SCHEMA S2; The following example adds partitioning to the Sales table based on state and reorganizes the data into partitions: => ALTER TABLE Sales PARTITION BY state REORGANIZE; Adding and Changing Constraints on Columns Using ALTER TABLE The following example uses ALTER TABLE to add a column (b) with not NULL and default 5 constraints to a table (test6): CREATE TABLE test6 (a INT); ALTER TABLE test6 ADD COLUMN b INT DEFAULT 5 NOT NULL; Use ALTER TABLE with the ALTER COLUMN and SET NOT NULL clauses to add the constraint on column a in table test6: ALTER TABLE test6 ALTER COLUMN a SET NOT NULL; HP Vertica Analytic Database (7.0.x) Page 956 of 1539 SQL Reference Manual SQL Statements Adding and Dropping NOT NULL Column Constraints Use the SET NOT NULL or DROP NOT NULL clause to add or remove a not NULL column constraint. Use these clauses to ensure that the column has the proper constraints when you have added or removed a primary key constraint on a column, or any time you want to add or remove the not NULL constraint. Note: A PRIMARY KEY constraint includes a not NULL constraint, but if you drop the PRIMARY KEY constraint on a column, the not NULL constraint remains on that column. Examples ALTER TABLE T1 ALTER COLUMN x SET NOT NULL; ALTER TABLE T1 ALTER COLUMN x DROP NOT NULL; For more information, see Altering Table Definitions. Adding New Columns to Tables with CASCADE The following example shows how to use the CASCADE keyword when adding a new column to an existing table. Using CASCADE ensures that the new column is added to the superprojection and to all pre-join projections that include that table. Create two tables: => CREATE TABLE t1 (x INT PRIMARY KEY NOT NULL, y INT); => CREATE TABLE t2 (x INT PRIMARY KEY NOT NULL, t1_x INT REFERENCES t1(x) NOT NULL, z VARCHAR(8)); After you load data into them, HP Vertica creates a superprojection for each table. The superprojections contains all the columns in their respective tables. For this example, name them super_t1 and super_t2. Create two pre-join projections that join tables t1 and t2. => CREATE PROJECTION t_pj1 AS SELECT t1.x, t1.y, t2.x, t2.t1_x, t2.z FROM t1 JOIN t2 ON t1.x = t2.t1_x UNSEGMENTED ALL NODES; => CREATE PROJECTION t_pj2 AS SELECT t1.x, t2.x FROM t1 JOIN t2 ON t1.x = t2.t1_x UNSEGMENTED ALL NODES; Add a new column w1 to table t1 using the CASCADE keyword. HP Vertica adds the column to: l Superprojection super_t1 l Pre-join projection t_pj1 HP Vertica Analytic Database (7.0.x) Page 957 of 1539 SQL Reference Manual SQL Statements l Pre-join projection t_pj2 => ALTER TABLE t ADD COLUMN w INT DEFAULT 5 NOT NULL CASCADE; Add a new column w2to table t1, and specify a nonconstant default value. HP Vertica adds the new column to the superprojection super_t1. Because the default value is not a constant, HP Vertica does not add the new column to the pre-join projections. 
=> ALTER TABLE t ADD COLUMN w1 INT DEFAULT default (t1.y+1) NOT NULL CASCADE; WARNING: Column "c_new" in table "t" with non-constant default will not be added to prejoin(s) t_pj1, t_pj2. See Also l "Working with Tables" on page 1 Table-Constraint Adds a constraint to the metadata of a table. See Adding Constraints in the Administrator's Guide. Syntax [ CONSTRAINT constraint_name ... { PRIMARY KEY ( column [ ... | FOREIGN KEY ( column [ REFERENCES table [( column [ ] , ... ] ) , ... ] ) , ... ] )] ... | UNIQUE ( column [ , ... ] )... } Parameters CONSTRAINT constraint-name Assigns a name to the constraint. HP recommends that you name all constraints. PRIMARY KEY ( column [ , ... ] ) Adds a referential integrity constraint defining one or more NOT NULL columns as the primary key. FOREIGN KEY ( column [ , ... ] ) Adds a referential integrity constraint defining one or more columns as a foreign key. REFERENCES table [( column [ , ... ] )] HP Vertica Analytic Database (7.0.x) Specifies the table to which the FOREIGN KEY constraint applies. If you omit the optional column definition of the referenced table, the default is the primary key of table. Page 958 of 1539 SQL Reference Manual SQL Statements UNIQUE ( column [ , ... ] ) Specifies that the data contained in a column or a group of columns is unique with respect to all the rows in the table. Permissions Table owner or user WITH GRANT OPTION is grantor. l REFERENCES privilege on table to create foreign key constraints that reference this table l USAGE privilege on schema that contains the table Specifying Primary and Foreign Keys You must define PRIMARY KEY and FOREIGN KEY constraints in all tables that participate in inner joins. You can specify a foreign key table constraint either explicitly (with the FOREIGN KEY parameter), or implicitly using the REFERENCES parameter to reference the table with the primary key. You do not have to explicitly specify the columns in the referenced table, for example: CREATE TABLE fact(c1 INTEGER PRIMARY KEY); CREATE TABLE dim (c1 INTEGER REFERENCES fact); Adding Constraints to Views Adding a constraint to a table that is referenced in a view does not affect the view. Examples The VMart sample database, described in the Getting Started Guide, contains a table Product_ Dimension in which products have descriptions and categories. For example, the description "Seafood Product 1" exists only in the "Seafood" category. You can define several similar correlations between columns in the Product Dimension table. ALTER USER Changes a database user account. Only a superuser can alter another user's database account. Making changes to a database user account with the ALTER USER function does not affect current sessions. Database Account Changes Users Can Make Users can change their own user accounts with these options: HP Vertica Analytic Database (7.0.x) Page 959 of 1539 SQL Reference Manual SQL Statements l IDENTIFIED BY. . . l RESOURCE POOL . . . l SEARCH_PATH . . . Users can change their own passwords using the IDENTIFIED BY option and supplying the current password with the REPLACE clause. Users can set the default RESOURCE POOL to any pool to which they have been granted usage privileges. Syntax ALTER ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ ... [ USER name ACCOUNT { LOCK | UNLOCK } ] DEFAULT ROLE {role [, ...] 
| NONE} ] IDENTIFIED BY 'password' [ REPLACE 'old-password' ] ] MEMORYCAP { 'memory-limit' | NONE } ] PASSWORD EXPIRE ] PROFILE { profile-name | DEFAULT } ] RESOURCE POOL pool-name ] RUNTIMECAP { 'time-limit' | NONE } ] TEMPSPACECAP { 'space-limit' | NONE } ] SEARCH_PATH { schema[,schema2,...] | DEFAULT } ] Parameters name Specifies the name of the user to alter. You must double quote names that contain special characters. ACCOUNT LOCK | UNLOCK Locks or unlocks the named user's access to the database. Users cannot log in if their account is locked. A superuser can manually lock and unlock accounts using ALTER USER syntax or automate account locking by setting a maximum number of failed login attempts through the CREATE PROFILE statement. DEFAULT ROLE {role [, ...] | NONE} One or more roles that should be active when the user's session starts. The user must have already been granted access to the roles (see GRANT (Role)). The role or roles specified in this command replace any existing default roles. Use the NONE keyword to eliminate all default roles for the user. HP Vertica Analytic Database (7.0.x) Page 960 of 1539 SQL Reference Manual SQL Statements IDENTIFIED BY 'password' [ REPLACE 'old_passw ord' ] Sets a user's password to password. Supplying an empty string for password removes the user's password. The use of this clause differs between superusers and non-superusers. A non-superuser can alter only his or her own password, and must supply the existing password using the REPLACE parameter. Superusers can change any user's password and do not need to supply the REPLACE parameter. See Password Guidelines and Creating a Database Name and Password for password policies. PASSWORD EXPIRE Expires the user's password. HP Vertica will force the user to change passwords during his or her next login. Note: PASSWORD EXPIRE has no effect when using external password authentication methods such as LDAP or Kerberos. PROFILE profile-name | DEFAULT Sets the user's profile to profile-name. Using the value DEFAULT sets the user's profile to the default profile. MEMORYCAP 'memory-limit' | NONE Limits the amount of memory that the user's requests can use. This value is a number representing the amount of space, followed by a unit (for example, '10G'). The unit can be one of the following: l % percentage of total memory available to the Resource Manager. (In this case value of the size size must be 0-100) l K—Kilobytes l M—Megabytes l G—Gigabytes l T—Terabytes Setting this value to NONE means the user has no limits on memory use. HP Vertica Analytic Database (7.0.x) Page 961 of 1539 SQL Reference Manual SQL Statements RESOURCE POOL pool-name Sets the name of the default resource pool for the user. Attempting to alter a database user account to associate the account with a particular resource pool will result in an error if the user has not already been granted access to the resource pool. particular resource pool on which they have not been granted access results in an error (even for a superuser). RUNTIMECAP 'time-limit' | NONE Sets the maximum amount of time any of the user's queries can execute. time-limit is an interval, such as '1 minute' or '100 seconds' (see Interval Values for details). This value cannot exceed one year. Setting this value to NONE means there is no time limit on the user's queries. If RUNTIMECAP is also set for the resource pool or the session, HP Vertica always uses the shortest limit. 
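For example (the user name is hypothetical), the following statements cap a single user's query run time at 10 minutes and then remove the cap again:
=> ALTER USER analyst1 RUNTIMECAP '10 minutes';
=> ALTER USER analyst1 RUNTIMECAP NONE;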
TEMPSPACECAP 'space-limit' | NONE Limits the amount of temporary file storage the user's requests can use. This parameter's value has the same format as the MEMORYCAP value. SEARCH_PATH schema[,schema2,...] | DEFAULT Sets the user's default search path that tells HP Vertica which schemas to search for unqualified references to tables and UDFs. See Setting Search Paths in the Administrator's Guide for an explanation of the schema search path. The DEFAULT keyword sets the search path to: "$user", public, v_catalog, v_monitor, v_in ternal Permissions Must be a superuser to alter a user. See Also CREATE USER l DROP USER l l HP Vertica Analytic Database (7.0.x) Page 962 of 1539 SQL Reference Manual SQL Statements l ALTER VIEW Renames a view. Syntax ALTER VIEW [[db-name.]schema.] current-view-name ... RENAME TO new-view-name Parameters viewname Specifies the name of the view you want to rename. RENAME TOnew-view-name Specifies the new name of the view. The view name must be unique. Do not use the same name as any table, view, or projection within the database. Notes Views are read only. You cannot perform insert, update, delete, or copy operations on a view. Permissions To create a view, the user must be a superuser or have CREATE privileges on the schema in which the view is renamed. Example The following command renames view1 to view2: => CREATE VIEW view1 AS SELECT * FROM t; CREATE VIEW => ALTER VIEW view1 RENAME TO view2; ALTER VIEW BEGIN Starts a transaction block. Syntax BEGIN [ WORK | TRANSACTION ] [ isolation_level ] [ transaction_mode ] where isolation_level is one of: HP Vertica Analytic Database (7.0.x) Page 963 of 1539 SQL Reference Manual SQL Statements ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED } and where transaction_mode is one of: READ { ONLY | WRITE } Parameters WORK | TRANSACTION Have no effect; they are optional keywords for readability. ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED } A transaction retains its isolation level until it completes, even if the session's transaction isolation level changes mid-transaction. HP Vertica internal processes (such as the Tuple Mover and refresh operations) and DDL operations are always run at SERIALIZABLE isolation level to ensure consistency. Isolation level determines what data the transaction can access when other transactions are running concurrently. The isolation level cannot be changed after the first query (SELECT) or DML statement (INSERT, DELETE, UPDATE) has run. isolation_level can one of the following values: l SERIALIZABLE—Sets the strictest level of SQL transaction isolation. This level emulates transactions serially, rather than concurrently. It holds locks and blocks write operations until the transaction completes. Not recommended for normal query operations. l REPEATABLE READ—Automatically converted to SERIALIZABLE by HP Vertica. l READ COMMITTED (Default)—Allows concurrent transactions. Use READ COMMITTED isolation or Snapshot isolation for normal query operations, but be aware that there is a subtle difference between them. (See section below this table.) l READ UNCOMMITTED—Automatically converted to READ COMMITTED by HP Vertica. HP Vertica Analytic Database (7.0.x) Page 964 of 1539 SQL Reference Manual SQL Statements READ { ONLY | WRITE } Transaction mode can be one of the following: l READ WRITE—(default)The transaction is read/write. l READ ONLY—The transaction is read-only. 
Setting the transaction session mode to read-only disallows the following SQL commands, but does not prevent all disk write operations: l INSERT, UPDATE, DELETE, and COPY if the table they would write to is not a temporary table l All CREATE, ALTER, and DROP commands l GRANT, REVOKE, and EXPLAIN if the command it would run is among those listed. Permissions No special permissions required. Notes START TRANSACTION performs the same function as BEGIN. See Also l Transactions l Creating and Rolling Back Transactions l COMMIT l END l ROLLBACK HP Vertica Analytic Database (7.0.x) Page 965 of 1539 SQL Reference Manual SQL Statements COMMENT ON Statements The following functions allow you to create comments associated with HP Vertica database objects: l COMMENT ON COLUMN l COMMENT ON CONSTRAINT l COMMENT ON FUNCTION l COMMENT ON LIBRARY l COMMENT ON NODE l COMMENT ON PROJECTION l COMMENT ON SCHEMA l COMMENT ON SEQUENCE l COMMENT ON TABLE l COMMENT ON TRANSFORM FUNCTION l COMMENT ON VIEW COMMENT ON COLUMN Adds, revises, or removes a projection column comment. You can only add comments to projection columns, not to table columns. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON COLUMN [[db-name.]schema.]proj_name.column_name IS {'comment' | NULL} HP Vertica Analytic Database (7.0.x) Page 966 of 1539 SQL Reference Manual SQL Statements Parameters [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). proj_name.column_name Specifies the name of the projection and column with which to associate the comment. comment Specifies the comment text to add. Enclose the text of the comment within single-quotes. If a comment already exists for this column, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. 
Example The following example adds a comment to the customer_name column in the customer_dimension projection: HP Vertica Analytic Database (7.0.x) Page 967 of 1539 SQL Reference Manual SQL Statements => COMMENT ON COLUMN customer_dimension_vmart_node01.customer_name IS 'Last name only'; The following examples remove a comment from the customer_name column in the customer_ dimension projection in two ways, using the NULL option, or specifying a blank string: => COMMENT ON COLUMN customer_dimension_vmart_node01.customer_name IS NULL; => COMMENT ON COLUMN customer_dimension_vmart_node01.customer_name IS ''; See Also l COMMENTS COMMENT ON CONSTRAINT Adds, revises, or removes a comment on a constraint. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON CONSTRAINT constraint_name ON [ [db-name.]schema.]table_name IS ... {'comment' | NU LL }; Parameters constraint_name The name of the constraint associated with the comment. [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). table_name Specifies the name of the table constraint with which to associate a comment. HP Vertica Analytic Database (7.0.x) Page 968 of 1539 SQL Reference Manual SQL Statements comment Specifies the comment text to add. Enclose the text of the comment within single-quotes. If a comment already exists for this constraint, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Example The following example adds a comment to the constraint_x constraint on the promotion_ dimension table: => COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS 'Primary key'; The following examples remove a comment from the constraint_x constraint on the promotion_ dimension table: => COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS NULL; => COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS ''; See Also l COMMENTS COMMENT ON FUNCTION Adds, revises, or removes a comment on a function. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. HP Vertica Analytic Database (7.0.x) Page 969 of 1539 SQL Reference Manual SQL Statements Syntax COMMENT ON FUNCTION [[db-name.]schema.]function_name function_arg IS { 'comment' | NULL }; Parameters [[db-name.]schema.] [Optional] Specifies the schema name. 
Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). function_name Specifies the name of the function with which to associate the comment. function_arg Indicates the function arguments. comment Specifies the comment text to add. Enclose the comment text within single-quotes. If a comment already exists for this function, the comment you enter overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Notes l A superuser can view and add comments to all objects. l A user must own an object to be able to add or edit comments for the object. l A user must have viewing privileges on an object to view its comments. l If you drop an object, all comments associated with the object are dropped as well. HP Vertica Analytic Database (7.0.x) Page 970 of 1539 SQL Reference Manual SQL Statements Examples The following example adds a comment to the macros.zerowhennull (x INT) function: => COMMENT ON FUNCTION macros.zerowhennull(x INT) IS 'Returns a 0 if not NULL'; The following examples remove a comment from the macros.zerowhennull (x INT) function in two ways by using the NULL option, or specifying a blank string: => COMMENT ON FUNCTION macros.zerowhennull(x INT) IS NULL; => COMMENT ON FUNCTION macros.zerowhennull(x INT) IS ''; See Also l COMMENTS COMMENT ON LIBRARY Adds, revises, or removes a comment on a library . Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON LIBRARY [ [db-name.]schema.]library_name IS {'comment' | NULL} Parameters [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). library_name The name of the library associated with the comment. HP Vertica Analytic Database (7.0.x) Page 971 of 1539 SQL Reference Manual SQL Statements comment Specifies the comment text to add. Enclose the text of the comment within single-quotes. If a comment already exists for this library, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. 
If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Examples The following example adds a comment to the library MyFunctions: => COMMENT ON LIBRARY MyFunctions IS 'In development'; The following examples remove a comment from the library MyFunctions: => COMMENT ON LIBRARY MyFunctions IS NULL; => COMMENT ON LIBRARY MyFunctions IS ''; See Also l COMMENTS COMMENT ON NODE Adds, revises, or removes a comment on a node. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. HP Vertica Analytic Database (7.0.x) Page 972 of 1539 SQL Reference Manual SQL Statements Syntax COMMENT ON NODE node_name IS { 'comment' | NULL } Parameters node_name The name of the node associated with the comment. comment Specifies the comment text to add. Enclose the text of the comment within singlequotes. If a comment already exists for this node, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Examples The following example adds a comment for the initiator node: => COMMENT ON NODE initiator IS 'Initiator node'; The following examples removes a comment from the initiator node. => COMMENT ON NODE initiator IS NULL; => COMMENT ON NODE initiator IS ''; See Also l COMMENTS HP Vertica Analytic Database (7.0.x) Page 973 of 1539 SQL Reference Manual SQL Statements COMMENT ON PROJECTION Adds, revises, or removes a comment on a projection. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON PROJECTION [ [db-name.]schema.]proj_name IS { 'comment' | NULL } Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). projection_name The name of the projection associated with the comment. comment Specifies the text of the comment to add. Enclose the text of the comment within single-quotes. If a comment already exists for this projection, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. 
You can enclose a blank value within single quotes to remove an existing comment. Null Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. HP Vertica Analytic Database (7.0.x) Page 974 of 1539 SQL Reference Manual SQL Statements Examples The following example adds a comment to the customer_dimension_vmart_node01 projection: => COMMENT ON PROJECTION customer_dimension_vmart_node01 IS 'Test data'; The following examples remove a comment from the customer_dimension_vmart_node01 projection: => COMMENT ON PROJECTION customer_dimension_vmart_node01 IS NULL; => COMMENT ON PROJECTION customer_dimension_vmart_node01 IS ''; See Also l COMMENTS COMMENT ON SCHEMA Adds, revises, or removes a comment on a schema. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON SCHEMA [db-name.]schema_name IS {'comment' | NULL} Parameters [db-name.] [Optional] Specifies the database name. You must be connected to the database you specify. You cannot make changes to objects in other databases. schema_name Indicates the schema associated with the comment. comment Text of the comment you want to add. Enclose the text of the comment in singlequotes. If a comment already exists for this schema, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Allows you to remove an existing comment. HP Vertica Analytic Database (7.0.x) Page 975 of 1539 SQL Reference Manual SQL Statements Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Examples The following example adds a comment to the public schema: => COMMENT ON SCHEMA public IS 'All users can access this schema'; The following examples remove a comment from the public schema. => COMMENT ON SCHEMA public IS NULL; => COMMENT ON SCHEMA public IS ''; See Also l COMMENTS COMMENT ON SEQUENCE Adds, revises, or removes a comment on a sequence. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON SEQUENCE [[db-name.]schema.]sequence_name IS { 'comment' | NULL } HP Vertica Analytic Database (7.0.x) Page 976 of 1539 SQL Reference Manual SQL Statements Parameters [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. 
For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). sequence_name The name of the sequence associated with the comment. comment Specifies the text of the comment to add. Enclose the text of the comment within single-quotes. If a comment already exists for this sequence, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Examples The following example adds a comment to the sequence called prom_seq. HP Vertica Analytic Database (7.0.x) Page 977 of 1539 SQL Reference Manual SQL Statements => COMMENT ON SEQUENCE prom_seq IS 'Promotion codes'; The following examples remove a comment from the prom_seq sequence. => COMMENT ON SEQUENCE prom_seq IS NULL; => COMMENT ON SEQUENCE prom_seq IS ''; See Also l COMMENTS COMMENT ON TABLE Adds, revises, or removes a comment on a table. Each object can have a maximum of one comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON TABLE [ [db-name.]schema.]table_name IS { 'comment' | NULL } Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). table_name Specifies the name of the table with which to associate the comment. comment Specifies the text of the comment to add. Enclose the text of the comment within single-quotes. If a comment already exists for this table, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. Null Removes a previously added comment. HP Vertica Analytic Database (7.0.x) Page 978 of 1539 SQL Reference Manual SQL Statements Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Examples The following example adds a comment to the promotion_dimension table: => COMMENT ON TABLE promotion_dimension IS '2011 Promotions'; The following examples remove a comment from the promotion_dimension table: => COMMENT ON TABLE promotion_dimension IS NULL; => COMMENT ON TABLE promotion_dimension IS ''; See Also l COMMENTS COMMENT ON TRANSFORM FUNCTION Adds, revises, or removes a comment on a user-defined transform function. 
Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the v_catalog.comments system table. Syntax COMMENT ON TRANSFORM FUNCTION [[db-name.]schema.]t_function_name ...([t_function_arg_name t_function_arg_type] [,...]) IS {'comment' | NULL} HP Vertica Analytic Database (7.0.x) Page 979 of 1539 SQL Reference Manual SQL Statements Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). t_function_name Specifies name of the transform function with which to associate the comment. t_function_arg_name t_function_arg_type [Optional] Indicates the names and data types of one or more transform function arguments. If you supply argument names and types, each type must match the type specified in the library used to create the original transform function. comment Specifies the comment text to add. Enclose the text of the comment within single-quotes. If a comment already exists for this transform function, the comment you enter overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. HP Vertica Analytic Database (7.0.x) Page 980 of 1539 SQL Reference Manual SQL Statements Examples The following example adds a comment to the macros.zerowhennull (x INT) UTF function: => COMMENT ON TRANSFORM FUNCTION macros.zerowhennull(x INT) IS 'Returns a 0 if not NULL'; The following example removes a comment from the acros.zerowhennull (x INT) function by using the NULL option: => COMMENT ON TRANSFORM FUNCTION macros.zerowhennull(x INT) IS NULL; COMMENT ON VIEW Adds, revises, or removes a comment on a view. Each object can have a maximum of 1 comment (1 or 0). Comments are stored in the V_CATALOG.COMMENTS system table. Syntax COMMENT ON VIEW [ [db-name.]schema.]view_name IS { 'comment' | NULL } Parameters [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). view_name The name of the view with which to associate the comment. comment Specifies the text of the comment to add. If a comment already exists for this view, the comment you enter here overwrites the previous comment. Comments can be up to 8192 characters in length. If a comment exceeds that limitation, HP Vertica truncates the comment and alerts the user with a message. 
You can enclose a blank value within single quotes to remove an existing comment. NULL Removes an existing comment. HP Vertica Analytic Database (7.0.x) Page 981 of 1539 SQL Reference Manual SQL Statements Permissions l A superuser can view and add comments to all objects. l The object owner can add or edit comments for the object. l A user must have VIEW privileges on an object to view its comments. Notes Dropping an object drops all comments associated with the object. Examples The following example adds a comment to a view called curr_month_ship: => COMMENT ON VIEW curr_month_ship IS 'Shipping data for the current month'; The following example removes a comment from the curr_month_ship view: => COMMENT ON VIEW curr_month_ship IS NULL; See Also l COMMENTS COMMIT Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users. Syntax COMMIT [ WORK | TRANSACTION ] Parameters WORK | TRANSACTION Have no effect; they are optional keywords for readability. Permissions No special permissions required. HP Vertica Analytic Database (7.0.x) Page 982 of 1539 SQL Reference Manual SQL Statements Notes END is a synonym for COMMIT. See Also l l l BEGIN l ROLLBACK l START TRANSACTION CONNECT Connects to another HP Vertica database to enable data import (using the COPY FROM VERTICA statement) or export (using the EXPORT statement). By default, invoking CONNECT occurs over the HP Vertica private network. Creating a connection over a public network requires some configuration. For information about using CONNECT to export data to or import data over a public network, see Export/Import from a Public Network. When importing from or exporting to an HP Vertica database, you can connect only to a database that uses trusted- (username-only) or password-based authentication, as described in Implementing Security. Neither LDAP nor SSL authentication is supported. Syntax CONNECT TO VERTICA database USER username PASSWORD 'password' ON 'host',port Parameters database The connection target database name. username The username to use when connecting to the other database. password A string containing the password to use to connect to the other database. host A string containing the host name of one of the nodes in the other database. port The port number of the other database as an integer. Permissions No special permissions required. HP Vertica Analytic Database (7.0.x) Page 983 of 1539 SQL Reference Manual SQL Statements Connection Details Once you successfully establish a connection to another database, the connection remains open for the current session. To disconnect a connection, use the DISCONNECT statement. You can have only one connection to another database at a time, though you can create connections to multiple different databases in the same session. If the target database does not have a password, and you specify a password in the CONNECT statement, the connection succeeds, but does not give any indication that you supplied an incorrect password. Example => CONNECT TO VERTICA ExampleDB USER dbadmin PASSWORD 'Password123' ON 'VerticaHost01',54 33; CONNECT See Also l COPY FROM VERTICA l DISCONNECT l EXPORT TO VERTICA COPY Bulk loads data into an HP Vertica database. You can initiate loading one or more files or pipes on a cluster host or on a client system (using the COPY LOCAL option). 
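As a quick illustration before the full syntax (the table name and file path here are hypothetical), a basic delimited load from a file on the initiator node might look like this:
=> COPY store.store_sales FROM '/data/store_sales.dat' DELIMITER '|' DIRECT;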
Permissions You must connect to the HP Vertica database as a superuser, or, as a non-superuser, have a USER-accessible storage location, and applicable READ or WRITE privileges granted to the storage location from which files are read or written to. COPY LOCAL users must have INSERT privileges to copy data from the STDIN pipe, as well as USAGE privileges on the schema. The following permissions are required to COPY FROM STDIN: l INSERT privilege on table l USAGE privilege on schema HP Vertica Analytic Database (7.0.x) Page 984 of 1539 SQL Reference Manual SQL Statements Syntax COPY [ [db-name.]schema-name.]table ... [ ( { column-as-expression | column } ...... [ FILLER datatype ] ...... [ FORMAT 'format' ] ...... [ ENCLOSED BY 'char' ] ...... [ ESCAPE AS 'char' | NO ESCAPE ] ...... [ NULL [ AS ] 'string' ] ...... [ TRIM 'byte' ] ...... [ DELIMITER [ AS ] 'char' ] ... [, ... ] ) ] ... [ COLUMN OPTION ( column ...... [ FORMAT 'format' ] ...... [ ENCLOSED BY 'char' ] ...... [ ESCAPE AS 'char' | NO ESCAPE ] ...... [ NULL [ AS ] 'string' ] ...... [ DELIMITER [ AS ] 'char' ] ... [, ... ] ) ] FROM { STDIN ...... [ BZIP | GZIP | UNCOMPRESSED ] ...| 'pathToData' [ ON nodename | ON ANY NODE ] ...... [ BZIP | GZIP | UNCOMPRESSED ] [, ...] ...| LOCAL STDIN | 'pathToData' ...... [ BZIP | GZIP | UNCOMPRESSED ] [, ...] } ...[ NATIVE | NATIVE VARCHAR | FIXEDWIDTH COLSIZES (integer [, ....]) ] ...[ WITH ] ...[ WITH [ SOURCE source(arg='value')] [ FILTER filter(arg='value') ] [ PARSER parser([arg= 'value']) ] ] ...[ DELIMITER [ AS ] 'char' ] ...[ TRAILING NULLCOLS ] ...[ NULL [ AS ] 'string' ] ...[ ESCAPE AS 'char' | NO ESCAPE ] ...[ ENCLOSED BY 'char' ] ...[ RECORD TERMINATOR 'string' ] ...[ SKIP records ] ...[ SKIP BYTES integer ] ...[ TRIM 'byte' ] ...[ REJECTMAX integer ] ...[ REJECTED DATA {'path' [ ON nodename ] [, ...] | AS TABLE 'reject_table'} ] ...[ EXCEPTIONS 'path' [ ON nodename ] [, ...] ] ...[ ENFORCELENGTH ] ...[ ABORT ON ERROR ] ...[ AUTO | DIRECT | TRICKLE ] ...[ STREAM NAME 'streamName'] ...[ NO COMMIT ] Parameters table HP Vertica Analytic Database (7.0.x) The table containing the data to load into the HP Vertica database. Page 985 of 1539 SQL Reference Manual SQL Statements [[db-name.]schema-name.]table [Optional] Specifies the name of a schema table (not a projection), optionally preceded by a database name. HP Vertica loads the data into all projections that include columns from the schema table. When using more than one schema, specify the schema that contains the table. Note: COPY ignores db-name or schema-name options when used as part of a CREATE EXTERNAL TABLE... or CREATE FLEX EXTERNAL TABLE... statements. column-as-expression Specifies the expression used to compute values for the target column. For example: COPY t(year AS TO_CHAR(k, 'YYYY')) FROM 'myfile.dat' Use this option to transform data when it is loaded into the target database. For more information about using expressions with COPY, see Transforming Data During Loads in the Administrator's Guide. See Ignoring Columns and Fields in the Load File in the Administrator's Guide for information about using fillers. column Restricts the load to one or more specified columns in the table. If you do not specify any columns, COPY loads all columns by default. Table columns that you do not specify in the column list are assigned their default values. If a column had no defined default value, COPY inserts NULL. 
If you leave the column parameter blank to load all columns in the table, you can use the optional parameter COLUMN OPTION to specify parsing options for specific columns. Note: The data file must contain the same number of columns as the COPY command's column list. For example, in a table T1 with nine columns (C1 through C9), the following command loads the three columns of data in each record to columns C1, C6, and C9, respectively: => COPY T1 (C1, C6, C9); HP Vertica Analytic Database (7.0.x) Page 986 of 1539 SQL Reference Manual SQL Statements FILLER Specifies not to load the column and its fields into the destination table. Use this option to omit columns that you do not want to transfer into a table. This parameter also transforms data from a source column and loads the transformed data to the destination table, rather than loading the original, untransformed source column (parsed column). (See Ignoring Columns and Fields in the Load File in the Administrator's Guide.) FORMAT Specifies the input formats to use when loading date/time and binary columns. These are the valid input formats when loading binary columns: l 'octal' l 'hex' l 'bitstream' See Loading Binary Data to learn more about using these formats. When loading date/time columns, using FORMAT significantly improves load performance. COPY supports the same formats as the TO_DATE function. See the following topics for additional information: l Template Patterns for Date/Time Formatting l Template Pattern Modifiers for Date/Time Formatting If you specify invalid format strings, the COPY operation returns an error. HP Vertica Analytic Database (7.0.x) Page 987 of 1539 SQL Reference Manual SQL Statements pathToData Specifies the absolute path of the file containing the data, which can be from multiple input sources. If path resolves to a storage location, and the user invoking COPY is not a superuser, these are the required privileges: l The storage location must have been created with the USER option (see ADD_LOCATION). l The user must already have been granted READ access to the storage location where the file(s) exist, as described in GRANT (Storage Location) Further, if a non-superuser invokes COPY from a storage location to which she has privileges, HP Vertica also checks any symbolic links (symlinks) the user has to ensure no symlink can access an area to which the user has not been granted privileges. The pathToData can optionally contain wildcards to match more than one file. The file or files must be accessible to the local client or the host on which the COPY statement runs. You can use variables to construct the pathname as described in Using Load Scripts. The supported patterns for wildcards are specified in the Linux Manual Page for Glob (7), and for ADO.net platforms, through the .NET Directory.getFiles Method. nodename [Optional] Specifies the node on which the data to copy resides and the node that should parse the load file. You can use nodename to COPY and parse a load file from a node other than the initiator node of the COPY statement. If you omit nodename, the location of the input file defaults to the initiator node for the COPY statement. Note: You cannot specify nodename with either STDIN or LOCAL, because STDIN is read on the initiator node only and LOCAL indicates a client node. 
HP Vertica Analytic Database (7.0.x) Page 988 of 1539 SQL Reference Manual SQL Statements ON ANY NODE [Optional] Specifies that the source file to load is on all of the nodes, so COPY opens the file and parses it from any node in the cluster. Make sure that the source file is available and accessible on each cluster node. You can use a wildcard or glob (such as *.dat) to load multiple input files, combined with the ON ANY NODE clause. Using a glob results in COPY distributing the list of files to all cluster nodes and spreading the workload. Note: You cannot specify ON ANY NODE with either STDIN or LOCAL, because STDIN is read on the initiator node only and LOCAL indicates a client node. STDIN Reads from the client a standard input instead of a file. STDIN takes one input source only and is read on the initiator node. To load multiple input sources, use pathToData. User must have INSERT privilege on table and USAGE privilege on schema/ LOCAL Specifies that all paths for the COPY statement are on the client system and that all COPY variants are initiated from a client. You can use the LOCAL and STDIN parameters together. See Using COPY and COPY LOCAL in the Administrator's Guide. BZIP | GZIP | UNCOMPRESSED Specifies the input file format. The default value is UNCOMPRESSED, and input files can be of any format. If using wildcards, all qualifying input files must be in the same format. Notes: WITH, AS HP Vertica Analytic Database (7.0.x) l When using concatenated BZIP or GZIP files, be sure that each source file is terminated with a record terminator before concatenating them. l Concatenated BZIP and GZIP files are not supported for NATIVE (binary) and NATIVE VARCHAR formats. Improve readability of the statement. These parameters have no effect on the actions performed by the statement. Page 989 of 1539 SQL Reference Manual SQL Statements [WITH [ SOURCE source(arg='value')] [FILTER filter(arg='value')] [PARSER parser(param='value')]] Directs COPY to optionally use one or more User Defined Load functions. You can specify up to one source, zero or more filters, and up to one parser. To load flexible tables, use the PARSER parameter followed by one of the flex table parsers, fjsonparser, fdelimitedparser, or fcefparser. For more information about the flex table parsers, and using their parameters, see Using Flex Table Parsers in the Flex Tables Guide. NATIVE | NATIVE VARCHAR | FIXEDWIDTH Specifies the parser to use when bulk loading columnar tables. These parameters are not applicable when loading flexible tables. By default, COPY uses the DELIMITER parser for UTF-8 format, delimited text input data. Do not specify DELIMITER. COPY always uses the default parser unless you specify another. For more information about using these options, see Specifying a COPY Parser in the Administrator's Guide. NOTE: COPY LOCAL does not support the NATIVE and NATIVE VARCHAR parsers. COLUMN OPTION Specifies load metadata for one or more columns declared in the table column list. For example, you can specify that a column has its own DELIMITER, ENCLOSED BY, NULL as 'NULL' expression, and so on. You do not have to specify every column name explicitly in the COLUMN OPTION list, but each column you specify must correspond to a column in the table column list. COLSIZES (integer [,...]) Required specification when loading fixed-width data using the FIXEDWIDTH parser. COLSIZES and the list of integers must correspond to the columns listed in the table column list. 
For more information, see Loading Fixed-Width Format Data in the Administrator's Guide. DELIMITER A single ASCII character that separates columns within each record of a file. The default in HP Vertica is a vertical bar (|). You can use any ASCII value in the range E'\000' to E'\177' inclusive. You cannot use the same character for both the DELIMITER and NULL options. For more information, see Loading UTF-8 Format Data in the Administrator's Guide. HP Vertica Analytic Database (7.0.x) Page 990 of 1539 SQL Reference Manual SQL Statements TRAILING NULLCOLS Specifies that if HP Vertica encounters a record with insufficient data to match the columns in the table column list, COPY inserts the missing columns with NULLs. For other information and examples, see Loading Fixed-Width Format Data in the Administrator's Guide. ESCAPE AS Sets the escape character to indicate that the following character should be interpreted literally, rather than as a special character. You can define an escape character using any ASCII value in the range E'\001' to E'\177', inclusive (any ASCII character except NULL: E'\000'). The COPY statement does not interpret the data it reads in as String Literals, and does not follow the same escape rules as other SQL statements (including the COPY parameters). When reading in data, COPY interprets only characters defined by these options as special values: l ESCAPE AS l DELIMITER l ENCLOSED BY l RECORD TERMINATOR NO ESCAPE Eliminates escape character handling. Use this option if you do not need any escape character and you want to prevent characters in your data from being interpreted as escape sequences. ENCLOSED BY Sets the quote character within which to enclose data, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). By default, ENCLOSED BY has no value, meaning data is not enclosed by any sort of quote character. NULL The string representing a null value. The default is an empty string (''). You can specify a null value as any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). You cannot use the same character for both the DELIMITER and NULL options. For more information, see Loading UTF-8 Format Data. HP Vertica Analytic Database (7.0.x) Page 991 of 1539 SQL Reference Manual SQL Statements RECORD TERMINATOR Specifies the literal character string that indicates the end of a data file record. For more information about using this parameter, see Loading UTF-8 Format Data. SKIP records Skips a number (integer) of records in a load file. For example, you can use the SKIP option to omit table header information. SKIP BYTES total Skips the total number (integer) of bytes in a record. This option is only available when loading fixed-width data. TRIM Trims the number of bytes you specify from a column. This option is only available when loading fixed-width data. REJECTMAX Specifies a maximum number of logical records to be rejected before a load fails. For more details about using this option, see Tracking Load Exceptions and Rejections Status in the Administrator's Guide. HP Vertica Analytic Database (7.0.x) Page 992 of 1539 SQL Reference Manual SQL Statements REJECTED DATA { 'path' [ ON nodename ] [, ...] | AS TABLE reject_table } Specifies the file name or absolute path of the file in which to write rejected rows. The rejected data consists of each row that failed to load due to a parsing error. 
Use the REJECTED DATA clause with the EXCEPTIONS clause, because exceptions explain why a row was rejected. Alternatively, use the REJECTED DATA AS TABLE reject_table clause to save rejected rows in a columnar table. Saving rejections to a table also saves the reason for the rejected row. You can then query the table to access rejected data information. For more information, see Saving Load Rejections (REJECTED DATA) in the Administrator's Guide. If path resolves to a storage location, and the user invoking COPY is not a superuser, the following privileges are required: l The storage location must have been created with the USER option (see ADD_LOCATION). l The user must already have been granted READ access to the storage location where the files exist, as described in GRANT (Storage Location) The optional ON nodename clause moves any existing rejected data files on nodename to path on the same node. See Tracking Load Exceptions and Rejections Status in the Administrator's Guide. Note: If you include the NO COMMIT and REJECTED DATA AS TABLE clauses in your COPY statement and the reject_table does not already exist, Vertica Analytic Database saves the rejected-data table as a LOCAL TEMP table and returns a message that a LOCAL TEMP table is being created. HP Vertica Analytic Database (7.0.x) Page 993 of 1539 SQL Reference Manual SQL Statements EXCEPTIONS 'path' [ ON nodename ] [, ...] Specifies the file name or absolute path of the file in which to write exceptions. Exceptions are textual messages describing why each rejected row was rejected. Each exception describes the corresponding record in the file specified by the REJECTED DATA option. If path resolves to a storage location, and the user invoking COPY is not a superuser, the following privileges are required: l The storage location must have been created with the USER option (see ADD_LOCATION). l The user must already have been granted READ access to the storage location where the files exist, as described in GRANT (Storage Location). The optional ON nodename clause moves any existing exceptions files on nodename to the indicated path on the same node. For more details about using this option, see Saving Load Exceptions in the Administrator's Guide. Note: Specifying an exceptions file name is incompatible with using the REJECTED DATA AS TABLE clause, which includes the exception in the table's rejected_reason column. ENFORCELENGTH Determines whether COPY truncates or rejects data rows of type char, varchar, binary, and varbinary if they do not fit the target table. By default, COPY truncates offending rows of these data types, but does not reject them. For more details, see Tracking Load Exceptions and Rejections Status in the Administrator's Guide. ABORT ON ERROR Stops the COPY command if a row is rejected and rolls back the command. No data is loaded. AUTO | DIRECT | TRICKLE Specifies the method COPY uses to load data into the database. The default load method is AUTO, in which COPY loads data into the WOS (Write Optimized Store) in memory. When the WOS is full, the load continues directly into ROS (Read Optimized Store) on disk. For more information, see Choosing a Load Method in the Administrator's Guide. Note: COPY ignores these options when used as part of a CREATE EXTERNAL TABLE statement. HP Vertica Analytic Database (7.0.x) Page 994 of 1539 SQL Reference Manual SQL Statements STREAM NAME [Optional] Supplies a COPY load stream identifier. Using a stream name helps to quickly identify a particular load. 
The STREAM NAME value that you supply in the load statement appears in the stream column of the LOAD_STREAMS system table. By default, HP Vertica names streams by table and file name. For example, if you have two files (f1, f2) in Table A, their stream names are A-f1, A-f2, respectively. To name a stream: => COPY mytable FROM myfile DELIMITER '|' DIRECT STR EAM NAME 'My stream name'; NO COMMIT Prevents the COPY statement from committing its transaction automatically when it finishes copying data. For more information about using this parameter, see Choosing a Load Method in the Administrator's Guide. Notes: COPY ignores this option when used as part of a CREATE EXTERNAL TABLE statement. If you include the NO COMMIT and REJECTED DATA AS TABLE clauses in your COPY statement and the reject_ table does not already exist, Vertica Analytic Database saves the rejected-data table as a LOCAL TEMP table and returns a message that a LOCAL TEMP table is being created. Note: Always use the COPY statement REJECTED DATA and EXCEPTIONS parameters to save load rejections. Using the RETURNREJECTED parameter is supported only for internal use by the JDBC and ODBC drivers. HP Vertica's internal-use options can change without notice. COPY Option Summary The following table summarizes which COPY parameters are available when loading data using the default (DELIMITER), NATIVE (binary), NATIVE VARCHAR, and FIXEDWIDTH parsersü: COPY Option DELIMITER NATIVE (BINARY) COLUMN OPTION ü ü ü ü AUTO ü ü ü ü HP Vertica Analytic Database (7.0.x) NATIVE (VARCHAR) FIXEDWIDTH Page 995 of 1539 SQL Reference Manual SQL Statements COPY Option DELIMITER NATIVE (BINARY) NATIVE (VARCHAR) FIXEDWIDTH DIRECT ü ü ü ü TRICKLE ü ü ü ü ENFORCELENGTH ü ü ü ü EXCEPTIONS ü ü ü ü FILLER ü ü ü ü REJECTED DATA ü ü ü ü ABORT ON ERROR ü ü ü ü STREAM NAME ü ü ü ü SKIP ü ü ü ü ü SKIP BYTES REJECTMAX ü ü ü ü STDIN ü ü ü ü UNCOMPRESSED ü ü ü ü BZIP | GZIP ü ü ü ü CONCATENATED BZIP or GZIP ü NO COMMIT ü ü ü ü FORMAT ü ü ü ü NULL ü ü ü ü DELIMITED ü ENCLOSED BY ü ESCAPE AS ü TRAILING NULLCOLS ü RECORD TERMINATOR ü TRIM ü ü ü Notes When loading data with the COPY statement, COPY considers the following data invalid: HP Vertica Analytic Database (7.0.x) Page 996 of 1539 SQL Reference Manual SQL Statements l Missing columns (an input line has less columns than the recipient table). l Extra columns (an input line has more columns than the recipient table). l Empty columns for an INTEGER or DATE/TIME data type. If a column is empty for either of these types, COPY does not use the default value that was defined by the CREATE TABLE command, unless you do not supply a column option as part of the COPY statement. l Incorrect representation of a data type. For example, trying to load a non-numeric data into an INTEGER column is invalid. When COPY encounters an empty line while loading data, the line is neither inserted nor rejected, but COPY increments the line record number. Consider this behavior when evaluating rejected records. If you return a list of rejected records and COPY encountered an empty row while loading data, the position of rejected records is incremented by one. Examples The following examples load data with the COPY statement using the FORMAT, DELIMITER, NULL and ENCLOSED BY string options, as well as a DIRECT option. 
=> COPY public.customer_dimension (customer_since FORMAT 'YYYY') FROM STDIN DELIMITER ',' NULL AS 'null' ENCLOSED BY '"';
=> COPY a FROM STDIN DELIMITER ',' NULL E'\\\N' DIRECT;
=> COPY store.store_dimension FROM :input_file DELIMITER '|' NULL '' RECORD TERMINATOR E'\f';

Setting vsql Variables
The first two examples load data from STDIN. The last example uses a vsql variable (input_file). You can set a vsql variable as follows:
\set input_file ../myCopyFromLocal/large_table.gzip

Using Compressed Data and Named Pipes
COPY supports named pipes that follow the same naming conventions as file names on the given file system. Permissions are open, write, and close.
This statement creates the named pipe, pipe1, and sets two vsql variables, dir and file:
\! mkfifo pipe1
\set dir `pwd`/
\set file '''':dir'pipe1'''
This statement copies an uncompressed file from the named pipe:
\! cat pf1.dat > pipe1 &
COPY large_tbl FROM :file delimiter '|';
SELECT * FROM large_tbl;
COMMIT;
This statement copies a GZIP file from the named pipe and uncompresses it:
\! gzip pf1.dat
\! cat pf1.dat.gz > pipe1 &
COPY large_tbl FROM :file ON site01 GZIP delimiter '|';
SELECT * FROM large_tbl;
COMMIT;
\! gunzip pf1.dat.gz
This statement copies a BZIP file from the named pipe and then uncompresses it:
\! bzip2 pf1.dat
\! cat pf1.dat.bz2 > pipe1 &
COPY large_tbl FROM :file ON site01 BZIP delimiter '|';
SELECT * FROM large_tbl;
COMMIT;
\! bunzip2 pf1.dat.bz2
This statement creates a Flex table and copies JSON data into it, using the flex table parser, fjsonparser:
CREATE FLEX TABLE darkdata();
CREATE TABLE
COPY darkdata FROM '/myTest/Flexible/DATA/tweets_12.json' parser fjsonparser();
 Rows Loaded
-------------
          12
(1 row)

See Also
l SQL Data Types
l ANALYZE_CONSTRAINTS
l Choosing a Load Method in the Administrator's Guide
l CREATE EXTERNAL TABLE AS COPY
l Directory.getFiles Method
l Bulk Loading Data in the Administrator's Guide
l Loading Fixed-Width Format Data in the Administrator's Guide
l Loading Binary Data in the Administrator's Guide
l Loading Flex Table Data in the Administrator's Guide
l Ignoring Columns and Fields in the Load File in the Administrator's Guide
l Linux Manual Page for Glob (7)
l Tracking Load Exceptions and Rejections Status in the Administrator's Guide
l Transforming Data During Loads in the Administrator's Guide

COPY LOCAL
Using the COPY statement with its LOCAL option lets you load a data file on a client system, rather than on a cluster host. COPY LOCAL supports the STDIN and 'pathToData' parameters, but not the [ON nodename] clause. COPY LOCAL does not support NATIVE or NATIVE VARCHAR formats.
The COPY LOCAL option is platform independent. The statement works in the same way across all supported HP Vertica platforms and drivers. For more details about using COPY LOCAL with supported drivers, see the Programmer's Guide section for your platform.
Note: On Windows clients, the path you supply for the COPY LOCAL file is limited to 216 characters due to limitations in the Windows API.
Invoking COPY LOCAL does not automatically create exceptions and rejections files, even if exceptions occur. You cannot save exceptions and rejections to a table with the REJECTED DATA AS TABLE parameter. For information about saving such files, see Capturing Load Exceptions and Rejections in the Administrator's Guide.
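For illustration, a client-side load might look like the following sketch; the table name and client path are hypothetical:
=> COPY sales_fact FROM LOCAL '/home/dbuser/sales.dat.gz' GZIP DELIMITER '|';
The statement reads the compressed file from the client machine and transfers it to the server, which uncompresses and parses it as part of the load.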
HP Vertica Analytic Database (7.0.x) Page 999 of 1539 SQL Reference Manual SQL Statements Permissions User must have INSERT privilege on the table and USAGE privilege on the schema. How Copy Local Works COPY LOCAL loads data in a platform-neutral way. The COPY LOCAL statement loads all files from a local client system to the HP Vertica host, where the server processes the files. You can copy files in various formats: uncompressed, compressed, fixed-width format, in bzip or gzip format, or specified as a bash glob. Files of a single format (such as all bzip, or gzip) can be comma-separated in the list of input files. You can also use any of the applicable COPY statement options (as long as the data format supports the option). For instance, you can define a specific delimiter character, or how to handle NULLs, and so forth. Note: The Linux glob command returns files that match the pattern you enter, as specified in the Linux Manual Page for Glob (7). For ADO.net platforms, specify patterns and wildcards as described in the .NET Directory.getFiles Method. For examples of using the COPY LOCAL option to load data, see COPY for syntactical descriptions, and the Bulk Loading Data section in the Administrator's Guide. The HP Vertica host uncompresses and processes the files as necessary, regardless of file format or the client platform from which you load the files. Once the server has the copied files, HP Vertica maintains performance by distributing file parsing tasks, such as encoding, compressing, uncompressing, across nodes. Viewing Copy Local Operations in a Query Plan When you use the COPY LOCAL option, the GraphViz Explain plan includes a label for Load-ClientFile, rather than Load-File. Following is a section from a sample Explain plan: ----------------------------------------------PLAN: BASE BULKLOAD PLAN (GraphViz Format) ----------------------------------------------digraph G { graph [rankdir=BT, label = " BASE BULKLOAD PLAN \nAll Nodes Vector: \n\n node[0]=initiator (initiator) Up\n", labelloc=t, labeljust=l ordering=out] . . . 10[label = "Load-Client-File(/tmp/diff) \nOutBlk=[UncTuple]", color = "green", shape = "ellipse"]; COPY FROM VERTICA You can import data from an earlier HP Vertica release, as long as the earlier release is a version of the last major release. For instance, for Version 6.x, you can import data from any version of 5.x, HP Vertica Analytic Database (7.0.x) Page 1000 of 1539 SQL Reference Manual SQL Statements but not from 4.x. Copies data from another HP Vertica database once you have established a connection to the other HP Vertica database with the CONNECT statement. See Importing Data for more setup information. The COPY FROM VERTICA statement works similarly to the COPY statement, but accepts only a subset of COPY parameters. By default, using COPY FROM VERTICA to copy or import data from another database occurs over the HP Vertica private network. Connecting to a public network requires some configuration. For information about using this statement to copy data across a public network, see Importing/Exporting From Public Networks. Syntax COPY [target_schema.]target_table ... [( target_column_name[, target_column_name2,...])] ... FROM VERTICA database.[source_schema.]source_table ... [(source_column_name[, source_column_name2,...])] ... [AUTO | DIRECT | TRICKLE] ... [STREAM NAME 'stream name'] ... [NO COMMIT] Parameters [target_schema.]target_table The table to store the copied data. This table must be in your local database, and must already exist. 
(target_column_name[, target_column_name2,...]) A list of columns in the target table to store the copied data.
Note: You cannot use column fillers as part of the column definition.
database The name of the database that is the source of the copied data. You must have already created a connection to this database in the current session.
[source_schema.]source_table The table in the source database that is the source of the copied data.
(source_column_name[, source_column_name2,...]) A list of columns in the source table to be copied. If this list is supplied, only these columns are copied from the source table.
AUTO | DIRECT | TRICKLE Specifies the method COPY uses to load data into the database. The default load method is AUTO, in which COPY loads data into the WOS (Write Optimized Store) in memory. When the WOS is full, the load continues directly into ROS (Read Optimized Store) on disk. For more information, see Choosing a Load Method in the Administrator's Guide. COPY ignores these options when used as part of a CREATE EXTERNAL TABLE statement.
STREAM NAME [Optional] Supplies a COPY load stream identifier. Using a stream name helps to quickly identify a particular load. The STREAM NAME value that you supply in the load statement appears in the stream column of the LOAD_STREAMS system table. By default, HP Vertica names streams by table and file name. For example, if you have two files (f1, f2) in Table A, their stream names are A-f1, A-f2, respectively. To name a stream:
=> COPY mytable FROM myfile DELIMITER '|' DIRECT STREAM NAME 'My stream name';
NO COMMIT Prevents the COPY statement from committing its transaction automatically when it finishes copying data. For more information about using this parameter, see Choosing a Load Method in the Administrator's Guide.

Permissions
l SELECT privileges on the source table
l USAGE privilege on source table schema
l INSERT privileges for the destination table in target database
l USAGE privilege on destination table schema

Notes
l Importing and exporting data fails if either side of the connection is a single-node cluster installed to localhost, or you do not specify a host name or IP address.
l If you do not supply a list of source and destination columns, COPY FROM VERTICA attempts to match columns in the source table with corresponding columns in the destination table. See the following section for details.

Source and Destination Column Mapping
You can optionally supply lists of either source columns to be copied, columns in the destination table where data should be stored, or both. Specifying the lists lets you select a subset of source table columns to copy to the destination table. Since source and destination lists are not required, results differ depending on which list is present. The following combinations describe the results of supplying one or both lists:
l Omit both lists: Matches all columns in the source table to columns in the destination table. The number of columns in the two tables need not match, but the destination table must not have fewer columns than the source.
l Supply only a source column list: Copies content only from the supplied list of source table columns. Matches columns in the destination table to columns in the source list. The number of columns in the two tables need not match, but the destination table must not have fewer columns than the source.
l Supply only a destination column list: Matches columns in the destination column list to columns in the source. The number of columns in the destination list must match the number of columns in the source table.
l Supply both lists: Matches columns from the source table column list to those in the destination column list. The lists must have the same number of columns.
The COPY FROM VERTICA statement needs to map columns in the source table to columns in the destination table.

Example
This example demonstrates connecting to another database, copying the contents of an entire table from the source database to an identically-defined table in the current database directly into ROS, and then closing the connection.
=> CONNECT TO VERTICA vmart USER dbadmin PASSWORD '' ON 'VertTest01',5433;
CONNECT
=> COPY customer_dimension FROM VERTICA vmart.customer_dimension DIRECT;
 Rows Loaded
-------------
      500000
(1 row)
=> DISCONNECT vmart;
DISCONNECT
This example demonstrates copying several columns from a table in the source database into a table in the local database.
=> CONNECT TO VERTICA vmart USER dbadmin PASSWORD '' ON 'VertTest01',5433;
CONNECT
=> COPY people (name, gender, age) FROM VERTICA
-> vmart.customer_dimension (customer_name, customer_gender,
-> customer_age);
 Rows Loaded
-------------
      500000
(1 row)
=> DISCONNECT vmart;
DISCONNECT
You can copy tables (or columns) containing Identity and Auto-increment values, but the sequence values are not incremented automatically at their destination.

See Also
l CONNECT
l DISCONNECT
l EXPORT TO VERTICA

CREATE EXTERNAL TABLE AS COPY
Creates an external table. This statement is a combination of the CREATE TABLE and COPY statements, supporting a subset of each statement's parameters, as noted below.
You can also use user-defined load extension functions (UDLs) to create external tables. For more information about UDL syntax, see User Defined Load (UDL) and COPY.
Note: HP Vertica does not create a superprojection for an external table when you create it.

Permissions
Must be a database superuser to create external tables, unless the superuser has created a user-accessible storage location to which the COPY refers, as described in ADD_LOCATION. Once external tables exist, you must also be a database superuser to access them through a SELECT statement.
Note: Permission requirements for external tables differ from other tables. To gain full access (including SELECT) to an external table that a user has privileges to create, the database superuser must also grant READ access to the USER-accessible storage location, see GRANT (Storage Location).
[ DELIMITER [ AS ] 'char' ] ... [, ... ] ) ] FROM { ...| 'pathToData' [ ON nodename | ON ANY NODE ] ...... [ BZIP | GZIP | UNCOMPRESSED ] [, ...] } ...[ NATIVE ...| NATIVE VARCHAR ...| FIXEDWIDTH { COLSIZES (integer [, ....]) } ...] ...[ WITH ] ...[ WITH [ SOURCE source(arg='value')] [ FILTER filter(arg='value') ] [ PARSER parser(arg='v alue') ]] ...[ DELIMITER [ AS ] 'char' ] ...[ TRAILING NULLCOLS ] ...[ NULL [ AS ] 'string' ] ...[ ESCAPE AS 'char' | NO ESCAPE ] ...[ ENCLOSED BY 'char' [ AND 'char' ] ] ...[ RECORD TERMINATOR 'string' ] ...[ SKIP integer ] ...[ SKIP BYTES integer ] ...[ TRIM 'byte' ] ...[ REJECTMAX integer ] ...[ EXCEPTIONS 'path' [ ON nodename ] [, ...] ] ...[ REJECTED DATA 'path' [ ON nodename ] [, ...] ] ...[ ENFORCELENGTH ] ...[ ABORT ON ERROR ] Parameters The following parameters from the parent statements are not supported in the CREATE EXTERNAL TABLE AS COPY statement: HP Vertica Analytic Database (7.0.x) Page 1005 of 1539 SQL Reference Manual SQL Statements CREATE TABLE COPY AS AT EPOCH LAST FROM STDIN AT TIME 'timestamp' FROM LOCAL ORDER BY table-column [,...] DIRECT ENCODED BY TRICKLE hash-segmentation-clause NO COMMIT UNSEGMENTED {node | node all} KSAFE [k_num] PARTITION BY partition-clause For all supported parameters, see the CREATE TABLE and COPY statements. Notes Canceling a CREATE EXTERNAL TABLE AS COPY statement can cause unpredictable results. HP recommends that you allow the statement to finish, then use DROP TABLE once the table exists. Examples Examples of external table definitions: CREATE EXTERNAL TABLE ext1 (x integer) AS COPY FROM '/tmp/ext1.dat' DELIMITER ','; CREATE EXTERNAL TABLE ext1 (x integer) AS COPY FROM '/tmp/ext1.dat.bz2' BZIP DELIMITER ', '; CREATE EXTERNAL TABLE ext1 (x integer, y integer) AS COPY (x as '5', y) FROM '/tmp/ext1.d at.bz2' BZIP DELIMITER ','; See Also l Physical Schema l CREATE TABLE l CREATE FLEX TABLE l SELECT l Using External Tables CREATE FAULT GROUP Creates a fault group, which can contain the following: HP Vertica Analytic Database (7.0.x) Page 1006 of 1539 SQL Reference Manual SQL Statements l One or more nodes l One or more child fault groups l One or more nodes and one or more child fault groups The CREATE FAULT GROUP statement creates an empty fault group. You must run the ALTER FAULT GROUP statement to add nodes or other fault groups to an existing fault group. Syntax CREATE FAULT GROUP name Parameters name Specifies the name of the fault group to create. You must provide distinct names for each fault group you create. Permissions Must be a superuser to create a fault group. Example The following command creates a fault group called parent0: exampledb=> CREATE FAULT GROUP parent0; CREATE FAULT GROUP To add nodes or other fault groups to the parent0 fault group, run the ALTER FAULT GROUP statement. See Also l V_CATALOG.FAULT_GROUPS l V_CATALOG.CLUSTER_LAYOUT l Fault Groups in the Administrator's Guide l High Availability With Fault Groups in the Concepts Guide HP Vertica Analytic Database (7.0.x) Page 1007 of 1539 SQL Reference Manual SQL Statements CREATE FLEX TABLE Creates a flex table in the logical schema. If you create a flex table without any column definitions, two materialized columns exist: __raw__ : A LONG VARBINARY type column in which any unstructured data you load is stored. The column has a NOT NULL constraint, which can be changed using the ALTER TABLE statement. __identity__ : An identity column, present when no other column definitions exist. 
This column is auto-incrementing and used for segmentation and sort order. Additionally, creating any flex table results in three associated objects: l A flex table (flex_table) named in this statement l A related keys table, called flex_table_keys l A related view, called flex_table_view Both the flex table and its associated _keys table are required to use flex tables successfully. The _ keys table and _view are subservient objects of the flex table, which cannot exist if the table does not. However, you can drop either the _keys table or _view independently. Declaring columns (or other supported parameters) is optional. CREATE FLEX TABLE supports many of the parameters available when creating columnar tables, but not all. This section presents the optional use of column definitions, and the subset of supported parameters. You can also create flex external tables, with some syntactical variations, as described in CREATE FLEX EXTERNAL TABLE AS COPY . Note: HP Vertica does not support flexible global temporary tables. Syntax CREATE {FLEX | FLEXIBLE} TABLE [ IF NOT EXISTS ] [[db-name.]schema.]table-name { ... ( [ Column-Definition (table) [ , ... ]] ) ... | [ table-constraint ( column_name, ... )] ... | [ column-name-list (create table) ] } ... [ ORDER BY table-column [ , ... ] ] ... [ ENCODED BY column-definition [ , ... ] ... [ Hash-Segmentation-Clause ..... | UNSEGMENTED { NODE node | ALL NODES } ] ... [ KSAFE [k_num] ] ... [ PARTITION BY partition-clause ] Parameters See the CREATE TABLE statement for all parameter descriptions. HP Vertica Analytic Database (7.0.x) Page 1008 of 1539 SQL Reference Manual SQL Statements Unsupported CREATE Options for Flex Tables You cannot use the following options when creating a flex table: ... AS [COPY] [ [ AT EPOCH LATEST ] ... | [ AT TIME 'timestamp' ] ] [ /*+ direct */ ] query ... | [ LIKE [[db-name.]schema.]existing-table [ INCLUDING PROJECTIONS | EXCLUDING PROJECTIONS ] ] Default Flex Table and Keys Table Projections HP Vertica automatically creates superprojections for both the flex table and keys tables when you create them. If you create a flex table with one or more of the ORDER BY, ENCODED BY, SEGMENTED BY, or KSAFE clauses, the clause information is used to create projections. If no clauses are in use, HP Vertica uses the following defaults for unspecified aspects: Table order_by encoded_by Segmentation Ksafe flexible table __identity__ none by hash __identity__ 1 keys_table frequency none replicated/unsegmented all nodes 1 Note: When you build a view for a flex table (see BUILD_FLEXTABLE_VIEW), the view is ordered by frequency, desc, and key_name. Permissions To create a flex table, you must have CREATE privileges on the table schema. Examples The following example creates a flex table named darkdata without specifying any column information. HP Vertica creates a default superprojection and buddy projection as part of creating the table: => CREATE FLEXIBLE TABLE darkdata(); CREATE TABLE The following example creates a table called darkdata1 with one column (date_col) and specifies the partition by clause to partition the data by year. 
HP Vertica creates a default superprojection and buddy projections as part of creating the table: => CREATE FLEX TABLE darkdata1 (date_col date NOT NULL) partition by extract('year' from date_col); CREATE TABLE HP Vertica Analytic Database (7.0.x) Page 1009 of 1539 SQL Reference Manual SQL Statements See Also l Physical Schema l COPY l CREATE EXTERNAL TABLE AS COPY l CREATE FLEX EXTERNAL TABLE AS COPY l CREATE TABLE l PARTITION_PROJECTION l PARTITION_TABLE l SELECT l Working with Table Partitions l Auto Partitioning l Using External Tables CREATE FLEX EXTERNAL TABLE AS COPY Creates a flexible external table. This statement is a combination of the CREATE TABLE and COPY statements, supporting a subset of each statement's parameters, as noted below. You can also use user-defined load extension functions (UDLs) to create external flex tables. For more information about UDL syntax, see User Defined Load (UDL) and COPY. Note: HP Vertica does not create a superprojection for an external table when you create it. Permissions Must be a database superuser to create external tables, unless the superuser has created a useraccessible storage location to which the COPY refers, as described in ADD_LOCATION. Once external tables exist, you must also be a database superuser to access them through a select statement. Note: Permission requirements for external tables differ from other tables. To gain full access (including SELECT) to an external table that a user has privileges to create, the database superuser must also grant READ access to the USER-accessible storage location, see GRANT (Storage Location). HP Vertica Analytic Database (7.0.x) Page 1010 of 1539 SQL Reference Manual SQL Statements Syntax CREATE {FLEX | FLEXIBLE} EXTERNAL TABLE [ IF NOT EXISTS ] [schema.]table-name { ... ( [ Column-Definition (table) [ , ... ] ] ) } AS COPY FROM { ...| 'pathToData' [ ON nodename | ON ANY NODE ] ...... [ BZIP | GZIP | UNCOMPRESSED ] [, ...] } ...[ NATIVE ...| NATIVE VARCHAR ...| FIXEDWIDTH { COLSIZES (integer [, ....]) } ...] ...[ WITH ] ...[ WITH [ SOURCE source(arg='value')] [ FILTER filter(arg='value') ] [ PARSER parser( arg='value') ]] ...[ DELIMITER [ AS ] 'char' ] ...[ TRAILING NULLCOLS ] ...[ NULL [ AS ] 'string' ] ...[ ESCAPE AS 'char' | NO ESCAPE ] ...[ ENCLOSED BY 'char' [ AND 'char' ] ] ...[ RECORD TERMINATOR 'string' ] ...[ SKIP integer ] ...[ SKIP BYTES integer ] ...[ TRIM 'byte' ] ...[ REJECTMAX integer ] ...[ EXCEPTIONS 'path' [ ON nodename ] [, ...] ] ...[ REJECTED DATA 'path' [ ON nodename ] [, ...] ] ...[ ENFORCELENGTH ] ...[ ABORT ON ERROR ] Parameters The following parameters from the parent statements are not supported in the CREATE FLEXIBLE EXTERNAL TABLE AS COPY statement: CREATE TABLE COPY AS AT EPOCH LAST FROM STDIN AT TIME 'timestamp' FROM LOCAL ORDER BY table-column [,...] DIRECT ENCODED BY TRICKLE hash-segmentation-clause NO COMMIT UNSEGMENTED {node | node all} KSAFE [k_num] PARTITION BY partition-clause For all supported parameters, see the CREATE TABLE and COPY statements. HP Vertica Analytic Database (7.0.x) Page 1011 of 1539 SQL Reference Manual SQL Statements Notes Canceling a CREATE FLEX EXTERNAL TABLE AS COPY statement can cause unpredictable results. HP Vertica recommends that you allow the statement to finish, then use DROP TABLE once the table exists. 
Examples To create an external flex table: kdb=> create flex external table mountains() as copy from 'home/release/KData/kmm_ountain s.json' parser fjsonparser(); CREATE TABLE After creating an external flex table, two regular tables exist, as with other flex tables, the named table, and its associated _keys table, which is not an external table: kdb=> \dt mountains List of tables Schema | Name | Kind | Owner | Comment --------+-----------+-------+---------+--------public | mountains | table | release | (1 row) You can use the helper function COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW to compute keys and create a view for the external table: kdb=> select compute_flextable_keys_and_build_view ('appLog'); compute_flextable_keys_and_build_view ------------------------------------------------------------------------------------------------Please see public.appLog_keys for updated keys The view public.appLog_view is ready for querying (1 row) 1. Check the keys from the _keys table for the results of running the helper application: kdb=> select * from appLog_keys; key_name | frequency | data_type_gu ess ----------------------------------------------------------+-----------+----------------contributors | 8 | varchar(20) coordinates | 8 | varchar(20) created_at | 8 | varchar(60) entities.hashtags | 8 | long varbinary HP Vertica Analytic Database (7.0.x) Page 1012 of 1539 SQL Reference Manual SQL Statements (186) . . . retweeted_status.user.time_zone retweeted_status.user.url retweeted_status.user.utc_offset retweeted_status.user.verified (125 rows) | | | | 1 1 1 1 | | | | varchar(20) varchar(68) varchar(20) varchar(20) 2. Query from the external flex table view: kdb=> select "user.lang" from appLog_view; user.lang ----------it en es en en es tr en (12 rows) See Also l CREATE EXTERNAL TABLE AS COPY l CREATE TABLE l CREATE FLEX TABLE SELECT l l CREATE FUNCTION Statements You can use the Create Function statement to create two different kinds of functions: l User-Defined SQL functions--User defined SQL functions let you define and store commonlyused SQL expressions as a function. User defined SQL functions are useful for executing complex queries and combining HP Vertica built-in functions. You simply call the function name you assigned in your query. HP Vertica Analytic Database (7.0.x) Page 1013 of 1539 SQL Reference Manual SQL Statements User defined scalar functions (UDSFs) take in a single row of data and return a single value. These functions can be used anywhere a native HP Vertica function or statement can be used, except CREATE TABLE with its PARTITION BY or any segmentation clause. l User-Defined Scalar functions-- While you use CREATE FUNCTION to create both SQL and scalar functions, you use a different syntax for each function type. For more information, see: l CREATE FUNCTION (SQL Functions) l CREATE FUNCTION (UDF) About Creating User Defined Transform Functions (UDTFs) You can use a similar SQL statement to create user-defined transform functions. User Defined Transform Functions (UDTFs) operate on table segments and return zero or more rows of data. The data they return can be an entirely new table, unrelated to the schema of the input table, including having its own ordering and segmentation expressions. They can only be used in the SELECT list of a query. For details see Using User Defined Transforms. To create a UDTF, see CREATE TRANSFORM FUNCTION. CREATE AGGREGATE FUNCTION Adds a User Defined Aggregate Function (UDAF) stored in a shared Linux library to the catalog. 
You must have already loaded this library using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes data values to the code in the library to process it. Syntax CREATE [ OR REPLACE ] AGGREGATE FUNCTION [[db-name.]schema.]function-name ... AS LANGUAGE 'language' NAME 'factory' LIBRARY library_name; Parameters [ OR REPLACE ] If you do not supply this parameter, the CREATE AGGREGATE FUNCTION statement fails if an existing function matches the name and parameters of the function you are trying to define. If you do supply this parameter, the new function definition overwrites the old. HP Vertica Analytic Database (7.0.x) Page 1014 of 1539 SQL Reference Manual SQL Statements [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify. You cannot make changes to objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). function-name The name of the function to create. If the function name is schemaqualified (as above), the function is created in the specified schema. This name does not need to match the name of the factory, but it is less confusing if they are the same or similar. LANGUAGE 'language' The programming language used to develop the function. Currently only 'C++' is supported for UDAF. NAME 'factory' The name of the factory class in the shared library that generates the object to handle the function's processing. LIBRARY library_name The name of the shared library that contains the C++ object to perform the processing for this function. This library must have been previously loaded using the CREATE LIBRARY statement. Notes l The parameters and return value for the function are automatically determined by the CREATE AGGREGATE FUNCTION statement, based on data supplied by the factory class. l When a User Defined Aggregate function that is defined multiple times with arguments of different data types is called, HP Vertica selects the function whose input parameters match the parameters in the function call to perform the processing. l You can return a list of all SQL functions and User Defined Functions (including aggregates) by querying the system table V_CATALOG.USER_FUNCTIONS or executing the vsql meta-command \df. Users see only the functions on which they have EXECUTE privileges. Permissions l Only a superuser can create or drop a User Defined Aggregate library. l To create a User Defined Aggregate function, the user must have CREATE and USAGE privileges on the schema and USAGE privileges on the library. l To use a User Defined Aggregate, the user must have USAGE privileges on the schema and EXECUTE privileges on the defined function. See GRANT (User Defined Extension) and REVOKE (User Defined Extension). 
Examples
The following example demonstrates loading a library named AggregateFunctions, then defining two functions, ag_avg and ag_cat, that are mapped to the AverageFactory and ConcatenateFactory classes in the library:
=> CREATE LIBRARY AggregateFunctions AS '/opt/vertica/sdk/examples/build/AggregateFunctions.so';
CREATE LIBRARY
=> CREATE AGGREGATE FUNCTION ag_avg AS LANGUAGE 'C++' NAME 'AverageFactory' library AggregateFunctions;
CREATE AGGREGATE FUNCTION
=> CREATE AGGREGATE FUNCTION ag_cat AS LANGUAGE 'C++' NAME 'ConcatenateFactory' library AggregateFunctions;
CREATE AGGREGATE FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+------------------------------------------------------------------
schema_name            | public
function_name          | ag_avg
procedure_type         | User Defined Aggregate
function_return_type   | Numeric
function_argument_type | Numeric
function_definition    | Class 'AverageFactory' in Library 'public.AggregateFunctions'
volatility             |
is_strict              | f
is_fenced              | f
comment                |
-[ RECORD 2 ]----------+------------------------------------------------------------------
schema_name            | public
function_name          | ag_cat
procedure_type         | User Defined Aggregate
function_return_type   | Varchar
function_argument_type | Varchar
function_definition    | Class 'ConcatenateFactory' in Library 'public.AggregateFunctions'
volatility             |
is_strict              | f
is_fenced              | f
comment                |

See Also
l CREATE LIBRARY
l DROP AGGREGATE FUNCTION
l GRANT (User Defined Extension)
l REVOKE (User Defined Extension)
l USER_FUNCTIONS
l Developing and Using User Defined Extensions
l Developing a User Defined Aggregate Function

CREATE ANALYTIC FUNCTION
Associates a User Defined Analytic Function (UDAnF) stored in a shared Linux library with a SQL function name. You must have already loaded the library containing the UDAnF using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes the arguments to the analytic function in the library to process.

Syntax
CREATE [ OR REPLACE ] ANALYTIC FUNCTION function-name
... AS [ LANGUAGE 'language' ] NAME 'factory'
... LIBRARY library_name
... [ FENCED | NOT FENCED ];

Parameters
function-name The name to assign to the UDAnF. This is the name you use in your SQL statements to call the function.
LANGUAGE 'language' The programming language used to write the UDAnF. Currently, 'C++' is supported. If not supplied, C++ is assumed.
NAME 'factory' The name of the C++ factory class in the shared library that generates the object to handle the function's processing.
LIBRARY library_name The name of the shared library that contains the C++ object to perform the processing for this function. This library must have been previously loaded using the CREATE LIBRARY statement.
[ FENCED | NOT FENCED ] Enables or disables Fenced Mode for this function. Fenced mode is enabled by default.

Permissions
l To CREATE a function, the user must have CREATE privilege on the schema to contain the function and USAGE privilege on the library containing the function.
l To use a function, the user must have USAGE privilege on the schema that contains the function and EXECUTE privileges on the function.
l To DROP a function, the user must either be a superuser, the owner of the function, or the owner of the schema which contains the function.
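For illustration only, registering an analytic function might look like the following sketch; the library path, factory class, and function name are hypothetical:
=> CREATE LIBRARY AnalyticFunctions AS '/opt/vertica/sdk/examples/build/AnalyticFunctions.so';
CREATE LIBRARY
=> CREATE ANALYTIC FUNCTION an_rank AS LANGUAGE 'C++' NAME 'RankFactory' LIBRARY AnalyticFunctions;
CREATE ANALYTIC FUNCTION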
HP Vertica Analytic Database (7.0.x) Page 1017 of 1539 SQL Reference Manual SQL Statements Notes l The parameters and return value for the function are automatically determined by the CREATE ANALYTIC FUNCTION statement, based on data supplied by the factory class. l You can assign multiple functions the same name if they accept different sets of arguments. See User Defined Function Overloading in the Programmer's Guide for more information. l You can return a list of all UDFs by querying the system table V_CATALOG.USER_ FUNCTIONS. Users see only the functions on which they have EXECUTE privileges. See Also Developing a User Defined Analytic Function CREATE FILTER Adds a User Defined Load FILTER function. You must have already loaded this library using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes the parameters to the function in the library to process it. Syntax CREATE [ OR REPLACE ] FILTER [[db-name.]schema.]function-name ... AS LANGUAGE 'language' NAME 'factory' LIBRARY library_name ... [ FENCED | NOT FENCED ]; Parameters [ OR REPLACE ] If you do not supply this parameter, the CREATE FILTER statement fails if an existing function matches the name and parameters of the filter function you are trying to define. If you do supply this parameter, the new filter function definition overwrites the old. [[db-name.]schema.] [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You must be connected to the database you specify. You cannot make changes to objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema). HP Vertica Analytic Database (7.0.x) Page 1018 of 1539 SQL Reference Manual SQL Statements function-name The name of the filter function to create. If the filter function name is schema-qualified (as above), the function is created in the specified schema. This name does not need to match the name of the factory, but it is less confusing if they are the same or similar. LANGUAGE 'language' The programming language used to develop the function. 'C++' is the only language supported by User Defined Load functions. NAME 'factory' The name of the factory class in the shared library that generates the object to handle the filter function's processing. This is the same name used by the RegisterFactory class. LIBRARY library_name The name of the shared library that contains the C++ object to perform the processing for this filter function. This library must have been previously loaded using the CREATE LIBRARY statement. [ FENCED | NOT FENCED ] Enables or disables Fenced Mode for this function. Fenced mode is enabled by default. Notes l The parameters and return value for the filter function are automatically determined by the CREATE FILTER statement, based on data supplied by the factory class. l You can return a list of all SQL functions and User Defined Functions by querying the system table V_CATALOG.USER_FUNCTIONS or executing the vsql meta-command \df. Users see only the functions on which they have EXECUTE privileges. Permissions l Only a superuser can create or drop a function that uses a UDx library. l To use a User Defined Filter, the user must have USAGE privileges on the schema and EXECUTE privileges on the defined filter function. See GRANT (Function) and REVOKE (Function). 
Important: Installing an untrusted UDL function can compromise the security of the server. UDx's can contain arbitrary code. In particular, UD Source functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDx's to untrusted users. Example The following example demonstrates loading a library named iConverterLib, then defining a function named Iconverter that is mapped to the iConverterFactory factory class in the library: HP Vertica Analytic Database (7.0.x) Page 1019 of 1539 SQL Reference Manual SQL Statements => CREATE LIBRARY iConverterLib as '/opt/vertica/sdk/examples/build/IconverterLib.so'; CREATE LIBRARY => CREATE FILTER Iconverter AS LANGUAGE 'C++' NAME 'IconverterFactory' LIBRARY Iconverter Lib; CREATE FILTER FUNCTION => \x Expanded display is on. => SELECT * FROM user_functions; -[ RECORD 1 ]----------+-------------------schema_name | public function_name | Iconverter procedure_type | User Defined Filter function_return_type | function_argument_type | function_definition | volatility | is_strict | f is_fenced | f comment | See Also l CREATE LIBRARY l DROP FILTER l GRANT (Function) l REVOKE (Function) l USER_FUNCTIONS l Developing User Defined Load (UDL) Functions CREATE FUNCTION (SQL Functions) Lets you store SQL expressions as functions in HP Vertica for use in queries. These functions are useful for executing complex queries or combining HP Vertica built-in functions. You simply call the function name you assigned. Note: This topic describes how to use CREATE FUNCTION to create a SQL function. If you want to create a user-defined scalar function (UDSF), see CREATE FUNCTION (UDF). In addition, if you want to see how to create a user-defined transform function (UDTF), see CREATE TRANSFORM FUNCTION. Syntax CREATE [ OR REPLACE ] FUNCTION ... [[db-name.]schema.]function-name ( [ argname argtype HP Vertica Analytic Database (7.0.x) [, ...] ] ) Page 1020 of 1539 SQL Reference Manual SQL Statements ... RETURN rettype ... AS ... BEGIN ...... RETURN expression; ... END; Parameters [[db-name.]schema.] [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1). function-name Specifies a name for the SQL function to create. When using more than one schema, specify the schema that contains the function, as noted above. argname Specifies the name of the argument. argtype Specifies the data type for argument that is passed to the function. Argument types must match HP Vertica type names. See SQL Data Types. rettype Specifies the data type to be returned by the function. RETURN expression; Specifies the SQL function (function body), which must be in the form of ‘RETURN expression.’ expression can contain built-in functions, operators, and argument names specified in the CREATE FUNCTION statement. A semicolon at the end of the expression is required. 
Note: Only one RETURN expression is allowed in the CREATE FUNCTION definition. FROM, WHERE, GROUP BY, ORDER BY, LIMIT, aggregation, analytics, and meta-functions are not allowed.

Permissions

- To CREATE a function, the user must have CREATE privilege on the schema to contain the function and USAGE privilege on the library containing the function.
- To use a function, the user must have USAGE privilege on the schema that contains the function and EXECUTE privileges on the function.
- To DROP a function, the user must either be a superuser, the owner of the function, or the owner of the schema which contains the function.

See GRANT (User Defined Extension) and REVOKE (User Defined Extension).

Notes

- A SQL function can be used anywhere in a query where an ordinary SQL expression can be used, except in the table partition clause or the projection segmentation clause.
- SQL Macros are flattened in all cases, including DDL.
- You can create views on the queries that use SQL functions and then query the views. When you create a view, a SQL function replaces a call to the user-defined function with the function body in the view definition. Therefore, when the body of the user-defined function is replaced, the view should also be replaced.
- If you want to change the body of a SQL function, use the CREATE OR REPLACE syntax. The command replaces the function with the new definition. If you change only the argument name or argument type, the system maintains both versions under the same function name. See the Example section below.
- If multiple SQL functions with the same name and argument types are in the search path, the first match is used when the function is called.
- The strictness and volatility (stable, immutable, or volatile) of a SQL Macro are automatically inferred from the function's definition. HP Vertica then determines the correctness of usage, such as where an immutable function is expected but a volatile function is provided.
- You can return a list of all SQL functions by querying the system table V_CATALOG.USER_FUNCTIONS or executing the vsql meta-command \df. Users see only the functions on which they have EXECUTE privileges.

Example

The following statement creates a SQL function called myzeroifnull that accepts an INTEGER argument and returns an INTEGER result.

=> CREATE FUNCTION myzeroifnull(x INT) RETURN INT
   AS BEGIN
     RETURN (CASE WHEN (x IS NOT NULL) THEN x ELSE 0 END);
   END;

You can use the new SQL function (myzeroifnull) anywhere you use an ordinary SQL expression. For example, create a simple table:

=> CREATE TABLE tabwnulls(col1 INT);
=> INSERT INTO tabwnulls VALUES(1);
=> INSERT INTO tabwnulls VALUES(NULL);
=> INSERT INTO tabwnulls VALUES(0);
=> SELECT * FROM tabwnulls;
 a
---
 1

 0
(3 rows)

Use the myzeroifnull function in a SELECT statement, where the function calls col1 from table tabwnulls:

=> SELECT myzeroifnull(col1) FROM tabwnulls;
 myzeroifnull
--------------
            1
            0
            0
(3 rows)

Use the myzeroifnull function in the GROUP BY clause:

=> SELECT COUNT(*) FROM tabwnulls GROUP BY myzeroifnull(col1);
 count
-------
     2
     1
(2 rows)

If you want to change a SQL function's body, use the CREATE OR REPLACE syntax.
The following command modifies the CASE expression:

=> CREATE OR REPLACE FUNCTION zerowhennull(x INT) RETURN INT
   AS BEGIN
     RETURN (CASE WHEN (x IS NULL) THEN 0 ELSE x END);
   END;

To see how this information is stored in the HP Vertica catalog, see Viewing Information About SQL Functions in the Programmer's Guide.

See Also

- ALTER FUNCTION
- DROP FUNCTION
- GRANT (User Defined Extension)
- REVOKE (User Defined Extension)
- USER_FUNCTIONS
- Using User-Defined SQL Functions

CREATE FUNCTION (UDF)

Adds a User Defined Function (UDF) to the catalog. You must have already loaded the library that contains the function using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes the parameters to the function in the library for processing.

Note: This topic describes how to use CREATE FUNCTION to create a User Defined Function. If you want to create a SQL function, see CREATE FUNCTION (SQL Function). In addition, if you want to create a user-defined transform function (UDTF), see CREATE TRANSFORM FUNCTION.

Syntax

CREATE [ OR REPLACE ] FUNCTION [[db-name.]schema.]function-name
... AS LANGUAGE 'language' NAME 'factory' LIBRARY library_name
... [ FENCED | NOT FENCED ];

Parameters

[ OR REPLACE ]  If you do not supply this parameter, the CREATE FUNCTION statement fails if an existing function matches the name and parameters of the function you are trying to define. If you do supply this parameter, the new function definition overwrites the old.
[[db-name.]schema.]  [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema).
function-name  The name of the function to create. If the function name is schema-qualified (as above), the function is created in the specified schema. This name does not need to match the name of the factory, but it is less confusing if they are the same or similar.
LANGUAGE 'language'  The programming language used to develop the function. 'C++' and 'R' are supported.
NAME 'factory'  The name of the factory class in the shared library that generates the object to handle the function's processing.
LIBRARY library_name  The name of the shared library or R file that contains the C++ object or R functions that perform the processing for this function. This library must have been previously loaded using the CREATE LIBRARY statement.
[ FENCED | NOT FENCED ]  Enables or disables Fenced Mode for this function. Fenced mode is enabled by default. Functions written in R always run in fenced mode.

Permissions

- To CREATE a function, the user must have CREATE privilege on the schema to contain the function and USAGE privilege on the library containing the function.
- To use a function, the user must have USAGE privilege on the schema that contains the function and EXECUTE privileges on the function.
- To DROP a function, the user must either be a superuser, the owner of the function, or the owner of the schema which contains the function.
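For example, a superuser might grant these privileges to a non-superuser account as follows. This is a sketch only: the user name analyst is hypothetical, the Add2Ints signature matches the example below, and the exact GRANT forms are documented under GRANT (User Defined Extension).

=> GRANT USAGE ON SCHEMA public TO analyst;
=> GRANT EXECUTE ON FUNCTION public.Add2Ints (INT, INT) TO analyst;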
Notes

- The parameters and return value for the function are automatically determined by the CREATE FUNCTION statement, based on data supplied by the factory class.
- Multiple functions can share the same name if they have different parameters. When you call a multiply-defined function, HP Vertica selects the UDF whose input parameters match the parameters in the function call to perform the processing. This behavior is similar to having multiple signatures for a method or function in other programming languages.
- You can return a list of all SQL functions and UDFs by querying the system table V_CATALOG.USER_FUNCTIONS or executing the vsql meta-command \df. Users see only the functions on which they have EXECUTE privileges.

Examples

The following example demonstrates loading a library named ScalarFunctions, then defining a function named Add2Ints that is mapped to the Add2IntsFactory factory class in the library:

=> CREATE LIBRARY ScalarFunctions AS '/opt/vertica/sdk/examples/build/ScalarFunctions.so';
CREATE LIBRARY
=> CREATE FUNCTION Add2Ints AS LANGUAGE 'C++' NAME 'Add2IntsFactory' LIBRARY ScalarFunctions;
CREATE FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM USER_FUNCTIONS;
-[ RECORD 1 ]----------+-------------------------------------------------------------
schema_name            | public
function_name          | Add2Ints
procedure_type         | User Defined Function
function_return_type   | Integer
function_argument_type | Integer, Integer
function_definition    | Class 'Add2IntsFactory' in Library 'public.ScalarFunctions'
volatility             | volatile
is_strict              | f
is_fenced              | t
comment                |

=> \x
Expanded display is off.
=> -- Try a simple call to the function
=> SELECT Add2Ints(23,19);
 Add2Ints
----------
       42
(1 row)

See Also

- CREATE LIBRARY
- DROP FUNCTION
- GRANT (User Defined Extension)
- REVOKE (User Defined Extension)
- USER_FUNCTIONS
- Developing and Using User Defined Extensions

CREATE PARSER

Adds a User Defined Load PARSER function. You must have already loaded the library that contains the function using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes the parameters to the function in the library for processing.

Syntax

CREATE [ OR REPLACE ] PARSER [[db-name.]schema.]function-name
... AS LANGUAGE 'language' NAME 'factory' LIBRARY library_name
... [ FENCED | NOT FENCED ];

Parameters

[ OR REPLACE ]  If you do not supply this parameter, the CREATE PARSER statement fails if an existing function matches the name and parameters of the parser function you are trying to define. If you do supply this parameter, the new parser function definition overwrites the old.
[[db-name.]schema.]  [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You must be connected to the database you specify. You cannot make changes to objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema).
function-name  The name of the parser function to create. If the parser function name is schema-qualified (as above), the function is created in the specified schema. This name does not need to match the name of the factory, but it is less confusing if they are the same or similar.
LANGUAGE 'language'  The programming language used to develop the function. 'C++' is the only language supported by User Defined Load functions.
NAME 'factory'  The name of the factory class in the shared library that generates the object to handle the parser function's processing. This is the same name used by the RegisterFactory class.
LIBRARY library_name  The name of the shared library that contains the C++ object that performs the processing for this parser function. This library must have been previously loaded using the CREATE LIBRARY statement.
[ FENCED | NOT FENCED ]  Enables or disables Fenced Mode for this function. Fenced mode is enabled by default.

Notes

- The parameters and return value for the parser function are automatically determined by the CREATE PARSER statement, based on data supplied by the factory class.
- You can return a list of all SQL functions and User Defined Functions by querying the system table V_CATALOG.USER_FUNCTIONS or executing the vsql meta-command \df. Users see only the functions on which they have EXECUTE privileges.

Permissions

- Only a superuser can create or drop a function that uses a UDx library.
- To use a User Defined Parser, the user must have USAGE privileges on the schema and EXECUTE privileges on the defined parser function. See GRANT (Function) and REVOKE (Function).

Important: Installing an untrusted UDL function can compromise the security of the server. UDxs can contain arbitrary code. In particular, UD Source functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDxs to untrusted users.

Example

The following example demonstrates loading a library named BasicIntegerParserLib, then defining a function named BasicIntegerParser that is mapped to the BasicIntegerParserFactory factory class in the library:

=> CREATE LIBRARY BasicIntegerParserLib as '/opt/vertica/sdk/examples/build/BasicIntegerParser.so';
CREATE LIBRARY
=> CREATE PARSER BasicIntegerParser AS LANGUAGE 'C++' NAME 'BasicIntegerParserFactory' LIBRARY BasicIntegerParserLib;
CREATE PARSER FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name            | public
function_name          | BasicIntegerParser
procedure_type         | User Defined Parser
function_return_type   |
function_argument_type |
function_definition    |
volatility             |
is_strict              | f
is_fenced              | f
comment                |

See Also

- CREATE LIBRARY
- DROP PARSER
- GRANT (Function)
- REVOKE (Function)
- USER_FUNCTIONS
- Developing User Defined Load (UDL) Functions

CREATE SOURCE

Adds a User Defined Load SOURCE function. You must have already loaded the library that contains the function using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes the parameters to the function in the library for processing.

Syntax

CREATE [ OR REPLACE ] SOURCE [[db-name.]schema.]function-name
... AS LANGUAGE 'language' NAME 'factory' LIBRARY library_name
... [ FENCED | NOT FENCED ];

Parameters

[ OR REPLACE ]  If you do not supply this parameter, the CREATE SOURCE statement fails if an existing function matches the name and parameters of the source function you are trying to define. If you do supply this parameter, the new source function definition overwrites the old.
[[db-name.]schema.]
[Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You must be connected to the database you specify. You cannot make changes to objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema).
function-name  The name of the source function to create. If the source function name is schema-qualified (as above), the function is created in the specified schema. This name does not need to match the name of the factory, but it is less confusing if they are the same or similar.
LANGUAGE 'language'  The programming language used to develop the function. 'C++' is the only language supported by User Defined Load functions.
NAME 'factory'  The name of the factory class in the shared library that generates the object to handle the source function's processing. This is the same name used by the RegisterFactory class.
LIBRARY library_name  The name of the shared library that contains the C++ object that performs the processing for this source function. This library must have been previously loaded using the CREATE LIBRARY statement.
[ FENCED | NOT FENCED ]  Enables or disables Fenced Mode for this function. Fenced mode is enabled by default.

Notes

- The parameters and return value for the source function are automatically determined by the CREATE SOURCE statement, based on data supplied by the factory class.
- You can return a list of all SQL functions and User Defined Functions by querying the system table V_CATALOG.USER_FUNCTIONS or executing the vsql meta-command \df. Users see only the functions on which they have EXECUTE privileges.

Permissions

- Only a superuser can create or drop a function that uses a UDx library.
- To use a User Defined Source, the user must have USAGE privileges on the schema and EXECUTE privileges on the defined source function. See GRANT (Function) and REVOKE (Function).

Important: Installing an untrusted UDL function can compromise the security of the server. UDxs can contain arbitrary code. In particular, UD Source functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDxs to untrusted users.

Example

The following example demonstrates loading a library named curllib, then defining a function named curl that is mapped to the CurlSourceFactory factory class in the library:

=> CREATE LIBRARY curllib as '/opt/vertica/sdk/examples/build/cURLLib.so';
CREATE LIBRARY
=> CREATE SOURCE curl AS LANGUAGE 'C++' NAME 'CurlSourceFactory' LIBRARY curllib;
CREATE SOURCE
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name            | public
function_name          | curl
procedure_type         | User Defined Source
function_return_type   |
function_argument_type |
function_definition    |
volatility             |
is_strict              | f
is_fenced              | f
comment                |

See Also

- CREATE LIBRARY
- DROP SOURCE
- GRANT (Function)
- REVOKE (Function)
- USER_FUNCTIONS
- Developing User Defined Load (UDL) Functions

CREATE TRANSFORM FUNCTION

Adds a User Defined Transform Function (UDTF) stored in a shared Linux library to the catalog.
You must have already loaded this library using the CREATE LIBRARY statement. When you call the SQL function, HP Vertica passes the input table to the transform function in the library for processing.

Note: This topic describes how to create a UDTF. If you want to create a user-defined function (UDF), see CREATE FUNCTION (UDF). If you want to create a SQL function, see CREATE FUNCTION (SQL).

Syntax

CREATE TRANSFORM FUNCTION function-name
... [ AS LANGUAGE 'language' ] NAME 'factory'
... LIBRARY library_name
... [ FENCED | NOT FENCED ];

Parameters

function-name  The name to assign to the UDTF. This is the name you use in your SQL statements to call the function.
LANGUAGE 'language'  The programming language used to write the UDTF. Currently, 'C++' and 'R' are supported. If not supplied, C++ is assumed.
NAME 'factory'  The name of the C++ factory class or R factory function in the shared library that generates the object to handle the function's processing.
LIBRARY library_name  The name of the shared library that contains the C++ object that performs the processing for this function. This library must have been previously loaded using the CREATE LIBRARY statement.
[ FENCED | NOT FENCED ]  Enables or disables Fenced Mode for this function. Fenced mode is enabled by default. Functions written in R always run in fenced mode.

Permissions

- To CREATE a function, the user must have CREATE privilege on the schema to contain the function and USAGE privilege on the library containing the function.
- To use a function, the user must have USAGE privilege on the schema that contains the function and EXECUTE privileges on the function.
- To DROP a function, the user must either be a superuser, the owner of the function, or the owner of the schema which contains the function.

See GRANT (Transform Function) and REVOKE (Transform Function).

UDTF Query Restrictions

A query that includes a UDTF cannot contain:

- Any statements other than the SELECT statement containing the call to the UDTF and a PARTITION BY expression
- Any other analytic function
- A call to another UDTF
- A TIMESERIES clause
- A pattern matching clause
- A gap filling and interpolation clause

Notes

- The parameters and return values for the function are automatically determined by the CREATE TRANSFORM FUNCTION statement, based on data supplied by the factory class.
- You can assign multiple functions the same name if they have different parameters. When you call a multiply-defined function, HP Vertica selects the UDF whose input parameters match the parameters in the function call to perform the processing. This behavior is similar to having multiple signatures for a method or function in other programming languages.
- You can return a list of all UDFs by querying the system table V_CATALOG.USER_FUNCTIONS. Users see only the functions on which they have EXECUTE privileges.

See Also

- DROP FUNCTION
- GRANT (User Defined Extension)
- REVOKE (User Defined Extension)
- USER_FUNCTIONS
- Developing and Using User Defined Extensions

CREATE HCATALOG SCHEMA

Defines a schema for data stored in a Hive data warehouse using the HCatalog Connector. For more information, see Using the HCatalog Connector in the Hadoop Integration Guide.
Syntax

CREATE HCATALOG SCHEMA [IF NOT EXISTS] schemaName
[AUTHORIZATION user-id]
WITH HOSTNAME='metastore-host'
[PORT=hiveMetastore-port]
[WEBSERVICE_HOSTNAME='webHCat-hostname']
[WEBSERVICE_PORT=webHCat-port]
[HCATALOG_SCHEMA='hive-schema-name']
[HCATALOG_USER='hcat-username']
[HCATALOG_CONNECTION_TIMEOUT=timeout]
[HCATALOG_SLOW_TRANSFER_LIMIT=xfer-limit]
[HCATALOG_SLOW_TRANSFER_TIME=xfer-time]

Parameters

[IF NOT EXISTS]  If given, the statement exits without an error when the schema named in schemaName already exists. Default: N/A
schemaName  The name of the schema to create in the Vertica Analytic Database catalog. The tables in the Hive database will be available through this schema. Default: none
user-id  The name of a Vertica Analytic Database account to own the schema being created. Default: current username
metastore-host  The hostname or IP address of the database server that stores the Hive data warehouse's metastore information. Default: none
hiveMetastore-port  The port number on which the metastore database is running. Default: 9083
webHCat-hostname  The hostname or IP address of the WebHCat server (formerly known as Templeton). Default: metastore-host
webHCat-port  The port number on which the WebHCat service is running. Default: 50111
hive-schema-name  The name of the Hive schema or database that the Vertica Analytic Database schema is being mapped to. Default: schemaName
hcat-username  The username of the HCatalog user to use when making calls to the WebHCat server. Default: current username
timeout  The number of seconds the HCatalog Connector waits for a successful connection to the WebHCat server. A value of 0 means wait indefinitely. Default: see Notes
xfer-limit  The lowest data transfer rate (in bytes per second) from the WebHCat server that the HCatalog Connector accepts. See xfer-time for details. Default: see Notes
xfer-time  The number of seconds the HCatalog Connector waits before enforcing the data transfer rate lower limit. After this time has passed, the HCatalog Connector tests whether the data transfer rate from the WebHCat server is at least as fast as the value set in xfer-limit. If it is not, the HCatalog Connector breaks the connection and terminates the query. Default: see Notes

Notes

The default values for timeout, xfer-limit, and xfer-time are set by the configuration parameters HCatConnectionTimeout, HCatSlowTransferLimit, and HCatSlowTransferTime. See HCatalog Connector Parameters in the Administrator's Guide for more information.

Permissions

The user must be a superuser or be granted all permissions on the database to use CREATE HCATALOG SCHEMA.

Example

The following statement shows using CREATE HCATALOG SCHEMA to define a new schema for tables stored in a Hive database, then querying the system tables that contain information about those tables:

=> CREATE HCATALOG SCHEMA hcat WITH hostname='hcathost' HCATALOG_SCHEMA='default'
-> HCATALOG_USER='hcatuser';
CREATE SCHEMA
=> -- Show list of all HCatalog schemas
=> \x
Expanded display is on.
=> SELECT * FROM v_catalog.hcatalog_schemata;
-[ RECORD 1 ]--------+------------------------------
schema_id            | 45035996273748980
schema_name          | hcat
schema_owner_id      | 45035996273704962
schema_owner         | dbadmin
create_time          | 2013-11-04 15:09:03.504094-05
hostname             | hcathost
port                 | 9933
webservice_hostname  | hcathost
webservice_port      | 50111
hcatalog_schema_name | default
hcatalog_user_name   | hcatuser
metastore_db_name    | hivemetastoredb

=> -- List the tables in all HCatalog schemas
=> SELECT * FROM v_catalog.hcatalog_table_list;
-[ RECORD 1 ]------+------------------
table_schema_id    | 45035996273748980
table_schema       | hcat
hcatalog_schema    | default
table_name         | messages
hcatalog_user_name | hcatuser
-[ RECORD 2 ]------+------------------
table_schema_id    | 45035996273748980
table_schema       | hcat
hcatalog_schema    | default
table_name         | weblog
hcatalog_user_name | hcatuser
-[ RECORD 3 ]------+------------------
table_schema_id    | 45035996273748980
table_schema       | hcat
hcatalog_schema    | default
table_name         | tweets
hcatalog_user_name | hcatuser

CREATE LIBRARY

Loads a C++ shared library or R file containing user defined functions (UDFs). You supply the absolute path to a Linux shared library (.so) file or R file (.R) that contains the functions you want to access. See Developing and Using User Defined Extensions in the Programmer's Guide for details. If you supply the optional OR REPLACE argument, the library replaces any existing library with the same name.

Warning: User defined libraries are directly loaded by HP Vertica and may be run within the database process. By default, most UDFs developed in C++ run in Fenced Mode so that the function process runs outside of HP Vertica. However, if you choose not to run your code in fenced mode, or the type of UDF cannot be run in fenced mode (for example, User Defined Load), then your custom code can negatively impact the database. Poorly-coded UDFs can cause instability or even database crashes.

Syntax

CREATE [OR REPLACE] LIBRARY [[db-name.]schema.]library_name AS 'library_path' [ LANGUAGE 'language' ]

Parameters

[ OR REPLACE ]  If you do not supply this parameter, the CREATE LIBRARY statement fails if an existing library matches the name of the library you are trying to define. If you do supply this parameter, the new library replaces the old.
[[db-name.]schema.]  [Optional] Specifies the database name and optional schema name. Using a database name identifies objects that are not unique within the current search path (see Setting Search Paths). You must be connected to the database you specify, and you cannot change objects in other databases. Specifying different database objects lets you qualify database objects as explicitly as required. For example, you can use a database and a schema name (mydb.myschema).
library_name  A name to assign to this library. This is the name you use in a CREATE FUNCTION statement to enable user defined functions stored in the library. Note that this name is arbitrary. It does not need to reflect the name of the library file, although it would be less confusing if it did.
'library_path'  The absolute path and filename of the library to load, located on the initiator node.
'language'  The programming language used to develop the function. 'R' and 'C++' are supported. Default is 'C++'.

Permissions

Must be a superuser to create or drop a library.
Notes

- As part of the loading process, HP Vertica distributes the library file to other nodes in the database. Any nodes that are down or that are added to the cluster later automatically receive a copy of the library file when they join the cluster. Subsequent modification (or deletion) of the file/path provided in the CREATE LIBRARY statement has no effect.
- The CREATE LIBRARY statement performs some basic checks on the library file to ensure it is compatible with HP Vertica. The statement fails if it detects that the library was not correctly compiled or it finds other basic incompatibilities. However, there are many issues in shared libraries that CREATE LIBRARY cannot detect. Simply loading the library is no guarantee that it functions correctly.
- Libraries are added to the database catalog, and therefore persist across database restarts.

Examples

To load a library in the home directory of the dbadmin account with the name MyFunctions:

=> CREATE LIBRARY MyFunctions AS '/home/dbadmin/my_functions.so';

To load a library located in the directory where you started vsql:

=> \set libfile '\''`pwd`'/MyOtherFunctions.so\'';
=> CREATE LIBRARY MyOtherFunctions AS :libfile;

See Also

- DROP LIBRARY
- ALTER LIBRARY
- CREATE FUNCTION (UDF)

CREATE LOCAL TEMPORARY VIEW

Creates a new local temporary view. Temporary views have the same restrictions as permanent views, so you cannot perform insert, update, delete, or copy operations on these views. Local temporary views are session-scoped; the view drops automatically when the session ends. For more information, see Managing Sessions in the Administrator's Guide.

Note: HP Vertica does not support global temporary views.

Syntax

CREATE [ OR REPLACE ] LOCAL TEMPORARY | TEMP VIEW viewname [ ( column-name [, ...] ) ] AS query

Parameters

[ OR REPLACE ]  Overwrites any existing local temporary view with the name viewname. If you do not specify this option and a temporary view with that name already exists, CREATE LOCAL TEMP VIEW returns an error.
viewname  Specifies the name of the local temporary view to create. The temporary view name must be unique. Do not use the same name as any table, view, or projection within the database. If you do not supply a name for the temporary view, the statement uses the current user name.
column-name  [Optional] Specifies the list of names to use as column names for the temporary view. Columns are presented from left to right in the order given. If you do not specify one or more column names, HP Vertica automatically deduces the column names from the query.
query  Specifies the SELECT query that the temporary view executes. HP Vertica also uses the query to deduce the list of names to be used as column names for the temporary view if you do not specify them. Use a SELECT statement to specify the query, which can refer to tables, temp tables, and other views.

Permissions

The following permissions apply to local temporary views:

- To create a local temporary view, the user must have usage permissions on any base object from which the view pulls.
- Privileges required on base objects for the view owner must be granted directly; you cannot grant privileges on these objects using roles.
- Since local temporary entities are session-scoped, local temporary views are visible only to their creator in the session in which they were created.
Transforming a SELECT Query to Use a Temporary Local View

When HP Vertica processes a query containing a local temp view (or any view), the view is treated as a subquery of the view's enclosing statement. For more information, see CREATE VIEW.

Example

=> CREATE LOCAL TEMP VIEW myview AS
   SELECT SUM(annual_income), customer_state
   FROM public.customer_dimension
   WHERE customer_key IN (SELECT customer_key FROM store.store_sales_fact)
   GROUP BY customer_state
   ORDER BY customer_state ASC;

The following example uses the myview temporary view with a WHERE clause that limits the results to states whose combined annual income is greater than 2,000,000,000.

=> SELECT * FROM myview WHERE SUM > 2000000000;
     SUM     | customer_state
-------------+----------------
  2723441590 | AZ
 29253817091 | CA
  4907216137 | CO
  3769455689 | CT
  3330524215 | FL
  4581840709 | IL
  3310667307 | IN
  2793284639 | MA
  5225333668 | MI
  2128169759 | NV
  2806150503 | PA
  2832710696 | TN
 14215397659 | TX
  2642551509 | UT
(14 rows)

See Also

- ALTER VIEW
- CREATE VIEW
- DROP VIEW
- SELECT
- GRANT (View)
- REVOKE (View)

CREATE NETWORK INTERFACE

Identifies a network interface to which the node belongs. Use this statement when you want to configure import/export from individual nodes to other HP Vertica clusters.

Syntax

CREATE NETWORK INTERFACE network-interface-name ON node-name WITH 'ip address of node'

Parameters

network-interface-name  The name you assign to the network interface.
node-name  The name of the node.
ip address of node  The IP address of the node.

Permissions

Must be a superuser to create a network interface.

CREATE PROCEDURE

Adds an external procedure to HP Vertica. See Implementing External Procedures in the Programmer's Guide for more information about external procedures.

Syntax

CREATE PROCEDURE [[db-name.]schema.]procedure-name (
...    [ argname ] [ argtype [,...] ] )
... AS 'exec-name'
... LANGUAGE 'language-name'
... USER 'OS-user'

Parameters

[[db-name.]schema.]  [Optional] Specifies the schema name. Using a schema identifies objects that are not unique within the current search path (see Setting Schema Search Paths). You can optionally precede a schema with a database name, but you must be connected to the database you specify. You cannot make changes to objects in other databases. The ability to specify different database objects (from database and schemas to tables and columns) lets you qualify database objects as explicitly as required. For example, use a table and column (mytable.column1), a schema, table, and column (myschema.mytable.column1), and, as full qualification, a database, schema, table, and column (mydb.myschema.mytable.column1).
procedure-name  Specifies a name for the external procedure. If the procedure-name is schema-qualified, the procedure is created in the specified schema.
argname  [Optional] Presents a descriptive argument name to provide a cue to procedure callers.
argtype  [Optional] Specifies the data type for argument(s) that will be passed to the procedure. Argument types must match HP Vertica type names. See SQL Data Types.
AS  Specifies the executable program in the procedures directory.
LANGUAGE  Specifies the procedure language. This parameter must be set to EXTERNAL.
USER  Specifies the OS user that the procedure executes as. The user is the owner of the file.
The user cannot be root. Note: The external program must allow execute privileges for this user. Permissions To create a procedure a superuser must have CREATE privilege on schema to contain procedure. Notes l A procedure file must be owned by the database administrator (OS account) or by a user in the same group as the administrator. (The procedure file owner cannot be root.) The procedure file must also have the set UID attribute enabled, and allow read and execute permission for the group. HP Vertica Analytic Database (7.0.x) Page 1041 of 1539 SQL Reference Manual SQL Statements By default, only a database superuser can execute procedures. However, a superuser can grant the right to execute procedures to other users. See GRANT (Procedure). l Example This example illustrates how to create a procedure named helloplanet for the helloplanet.sh external procedure file. This file accepts one varchar argument. Sample file: #!/bin/bashecho "hello planet argument: $1" >> /tmp/myprocedure.log exit 0 Issue the following SQL to create the procedure: CREATE PROCEDURE helloplanet(arg1 varchar) AS 'helloplanet.sh' LANGUAGE 'external' USER ' release'; See Also DROP PROCEDURE l l CREATE PROFILE Creates a profile that controls password requirements for users. Syntax CREATE PROFILE name LIMIT ... [PASSWORD_LIFE_TIME {life-limit | DEFAULT | UNLIMITED}] ... [PASSWORD_GRACE_TIME {grace_period | DEFAULT | UNLIMITED}] ... [FAILED_LOGIN_ATTEMPTS {login-limit | DEFAULT | UNLIMITED}] ... [PASSWORD_LOCK_TIME {lock-period | DEFAULT | UNLIMITED}] ... [PASSWORD_REUSE_MAX {reuse-limit | DEFAULT | UNLIMITED}] ... [PASSWORD_REUSE_TIME {reuse-period | DEFAULT | UNLIMITED}] ... [PASSWORD_MAX_LENGTH {max-length | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_LENGTH {min-length | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_LETTERS {min-letters | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_UPPERCASE_LETTERS {min-cap-letters | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_LOWERCASE_LETTERS {min-lower-letters | DEFAULT | UNLIMITED}] ... [PASSWORD_MIN_DIGITS {min-digits | DEFAULT | UNLIMITED}] Note: For all parameters, the special DEFAULT value means that the parameter's value is inherited from the DEFAULT profile. Any changes to the parameter in the DEFAULT profile is reflected by all of the profiles that inherit that parameter. Any parameter not specified in the HP Vertica Analytic Database (7.0.x) Page 1042 of 1539 SQL Reference Manual SQL Statements CREATE PROFILE command is set to DEFAULT. Parameters Meaning of UNLIMITED value Name Description name The name of the profile to create PASSWORD_LIFE_TIME life-limit Integer number of days a Passwords password remains valid. never expire. After the time elapses, the user must change the password (or will be warned that their password has expired if PASSWORD_GRACE_ TIME is set to a value other than zero or UNLIMITED). PASSWORD_GRACE_TIMEgrace-period Integer number of days the users are allowed to login (while being issued a warning message) after their passwords are older than the PASSWORD_ LIFE_TIME. After this period expires, users are forced to change their passwords on login if they have not done so after their password expired. No grace period (the same as zero) FAILED_LOGIN_ATTEMPTSlogin-limit The number of consecutive failed login attempts that result in a user's account being locked. Accounts are never locked, no matter how many failed login attempts are made. 
HP Vertica Analytic Database (7.0.x) N/A Page 1043 of 1539 SQL Reference Manual SQL Statements Meaning of UNLIMITED value Name Description PASSWORD_LOCK_TIME lock-period Integer value setting the number of days an account is locked after the user's account was locked by having too many failed login attempts. After the PASSWORD_LOCK_ TIME has expired, the account is automatically unlocked. Accounts locked because of too many failed login attempts are never automatically unlocked. They must be manually unlocked by the database superuser. PASSWORD_REUSE_MAX reuse-limit The number of password changes that need to occur before the current password can be reused. Users are not required to change passwords a certain number of times before reusing an old password. PASSWORD_REUSE_TIMEreuse-period The integer number of days that must pass after a password has been set before the before it can be reused. Password reuse is not limited by time. PASSWORD_MAX_LENGTH max-length The maximum number of characters allowed in a password. Value must be in the range of 8 to 100. Passwords are limited to 100 characters. PASSWORD_MIN_LENGTH min-length The minimum number of characters required in a password. Valid range is 0 to max-length. Equal to maxlength. PASSWORD_MIN_LETTERSmin-of-letters Minimum number of letters (a-z and A-Z) that must be in a password. Valid ranged is 0 to max-length. 0 (no minimum). HP Vertica Analytic Database (7.0.x) Page 1044 of 1539 SQL Reference Manual SQL Statements Meaning of UNLIMITED value Name Description PASSWORD_MIN_UPPERCASE_LETTERSmin-cap-letters Minimum number of capital letters (A-Z) that must be in a password. Valid range is is 0 to max-length. 0 (no minimum). PASSWORD_MIN_LOWERCASE_LETTERSmin-lower-letters Minimum number of lowercase letters (a-z) that must be in a password. Valid range is is 0 to maxlength. 0 (no minimum). PASSWORD_MIN_DIGITS min-digits Minimum number of digits (0-9) that must be in a password. Valid range is is 0 to max-length. 0 (no minimum). PASSWORD_MIN_SYMBOLSmin-symbols Minimum number of 0 (no symbols (any printable minimum). non-letter and non-digit character, such as $, #, @, and so on) that must be in a password. Valid range is is 0 to max-length. Permissions Must be a superuser to create a profile. Note: Only the profile settings for how many failed login attempts trigger account locking and how long accounts are locked have an effect on external password authentication methods such as LDAP or Kerberos. All password complexity, reuse, and lifetime settings have an effect on passwords managed by HP Vertica only. See Also l ALTER PROFILE l DROP PROFILE CREATE PROJECTION Creates metadata for a projection in the HP Vertica catalog. You can create a segmented projection, recommended for large tables. Unsegmented projections are recommended only for HP Vertica Analytic Database (7.0.x) Page 1045 of 1539 SQL Reference Manual SQL Statements small tables, which are then replicated across all cluster nodes. You can also create projections using a combination of individual columns and grouped columns. You can optionally apply a specific access rank to one or more columns, and encoding for an individual column or group of columns. Syntax CREATE PROJECTION [ IF NOT EXISTS ] ...[[db-name.]schema.]projection-name ...[ ( { projection-column ...| { GROUPED ( column-reference1, column-reference2 [ ,... ])} ......... [ ACCESSRANK integer ] ......... [ ENCODING Encoding-Type ] } [ ,... ] ) ...] AS SELECT table-column [ , ... ] FROM table-reference [ , ... ] ... 
... [ WHERE join-predicate [ AND join-predicate ]... ]
... [ ORDER BY table-column [ , ... ] ]
... [ Hash-Segmentation-Clause
...   | UNSEGMENTED { NODE node | ALL NODES } ]
... [ KSAFE [ k-num ] ]

Parameters

[ IF NOT EXISTS ]  [Optional] Determines whether the statement generates a NOTICE message or an ERROR if