Apache HBase ™ Reference Guide
Page count: 729
Apache HBase Team
Version 2.1.0
Contents
Preface . . . . . . . . . . 1
Getting Started . . . . . . . . . . 3
1. Introduction . . . . . . . . . . 4
2. Quick Start - Standalone HBase . . . . . . . . . . 5
Apache HBase Configuration . . . . . . . . . . 18
3. Configuration Files . . . . . . . . . . 19
4. Basic Prerequisites . . . . . . . . . . 21
5. HBase run modes: Standalone and Distributed . . . . . . . . . . 27
6. Running and Confirming Your Installation . . . . . . . . . . 31
7. Default Configuration . . . . . . . . . . 32
8. Example Configurations . . . . . . . . . . 72
9. The Important Configurations . . . . . . . . . . 74
10. Dynamic Configuration . . . . . . . . . . 82
Upgrading . . . . . . . . . . 85
11. HBase version number and compatibility . . . . . . . . . . 86
12. Rollback . . . . . . . . . . 91
13. Upgrade Paths . . . . . . . . . . 95
The Apache HBase Shell . . . . . . . . . . 106
14. Scripting with Ruby . . . . . . . . . . 107
15. Running the Shell in Non-Interactive Mode . . . . . . . . . . 108
16. HBase Shell in OS Scripts . . . . . . . . . . 109
17. Read HBase Shell Commands from a Command File . . . . . . . . . . 111
18. Passing VM Options to the Shell . . . . . . . . . . 113
19. Shell Tricks . . . . . . . . . . 114
Data Model . . . . . . . . . . 120
20. Conceptual View . . . . . . . . . . 121
21. Physical View . . . . . . . . . . 123
22. Namespace . . . . . . . . . . 124
23. Table . . . . . . . . . . 126
24. Row . . . . . . . . . . 127
25. Column Family . . . . . . . . . . 128
26. Cells . . . . . . . . . . 129
27. Data Model Operations . . . . . . . . . . 130
28. Versions . . . . . . . . . . 132
29. Sort Order . . . . . . . . . . 137
30. Column Metadata . . . . . . . . . . 138
31. Joins . . . . . . . . . . 139
32. ACID . . . . . . . . . . 140
HBase and Schema Design . . . . . . . . . . 141
33. Schema Creation . . . . . . . . . . 142
34. Table Schema Rules Of Thumb . . . . . . . . . . 143
RegionServer Sizing Rules of Thumb . . . . . . . . . . 144
35. On the number of column families . . . . . . . . . . 145
36. Rowkey Design . . . . . . . . . . 146
37. Number of Versions . . . . . . . . . . 153
38. Supported Datatypes . . . . . . . . . . 154
39. Joins . . . . . . . . . . 155
40. Time To Live (TTL) . . . . . . . . . . 156
41. Keeping Deleted Cells . . . . . . . . . . 157
42. Secondary Indexes and Alternate Query Paths . . . . . . . . . . 161
43. Constraints . . . . . . . . . . 163
44. Schema Design Case Studies . . . . . . . . . . 164
45. Operational and Performance Configuration Options . . . . . . . . . . 174
46. Special Cases . . . . . . . . . . 177
HBase and MapReduce . . . . . . . . . . 178
47. HBase, MapReduce, and the CLASSPATH . . . . . . . . . . 179
48. MapReduce Scan Caching . . . . . . . . . . 184
49. Bundled HBase MapReduce Jobs . . . . . . . . . . 185
50. HBase as a MapReduce Job Data Source and Data Sink . . . . . . . . . . 186
51. Writing HFiles Directly During Bulk Import . . . . . . . . . . 187
52. RowCounter Example . . . . . . . . . . 188
53. Map-Task Splitting . . . . . . . . . . 189
54. HBase MapReduce Examples . . . . . . . . . . 190
55. Accessing Other HBase Tables in a MapReduce Job . . . . . . . . . . 197
56. Speculative Execution . . . . . . . . . . 198
57. Cascading . . . . . . . . . . 199
Securing Apache HBase . . . . . . . . . . 200
58. Using Secure HTTP (HTTPS) for the Web UI . . . . . . . . . . 201
59. Using SPNEGO for Kerberos authentication with Web UIs . . . . . . . . . . 202
60. Secure Client Access to Apache HBase . . . . . . . . . . 204
61. Simple User Access to Apache HBase . . . . . . . . . . 211
62. Securing Access to HDFS and ZooKeeper . . . . . . . . . . 214
63. Securing Access To Your Data . . . . . . . . . . 216
64. Security Configuration Example . . . . . . . . . . 242
Architecture . . . . . . . . . . 245
65. Overview . . . . . . . . . . 246
66. Catalog Tables . . . . . . . . . . 248
67. Client . . . . . . . . . . 249
68. Client Request Filters . . . . . . . . . . 253
69. Master . . . . . . . . . . 259
70. RegionServer . . . . . . . . . . 260
71. Regions . . . . . . . . . . 280
72. Bulk Loading . . . . . . . . . . 307
73. HDFS . . . . . . . . . . 309
74. Timeline-consistent High Available Reads . . . . . . . . . . 310
75. Storing Medium-sized Objects (MOB) . . . . . . . . . . 322
In-memory Compaction . . . . . . . . . . 327
76. Overview . . . . . . . . . . 328
77. Enabling . . . . . . . . . . 329
Apache HBase APIs . . . . . . . . . . 331
78. Examples . . . . . . . . . . 332
Apache HBase External APIs . . . . . . . . . . 334
79. REST . . . . . . . . . . 335
80. Thrift . . . . . . . . . . 346
81. C/C++ Apache HBase Client . . . . . . . . . . 347
82. Using Java Data Objects (JDO) with HBase . . . . . . . . . . 348
83. Scala . . . . . . . . . . 351
84. Jython . . . . . . . . . . 353
Thrift API and Filter Language . . . . . . . . . . 356
85. Filter Language . . . . . . . . . . 357
Apache HBase Coprocessors . . . . . . . . . . 363
86. Coprocessor Overview . . . . . . . . . . 364
87. Types of Coprocessors . . . . . . . . . . 365
88. Loading Coprocessors . . . . . . . . . . 367
89. Examples . . . . . . . . . . 372
90. Guidelines For Deploying A Coprocessor . . . . . . . . . . 378
91. Restricting Coprocessor Usage . . . . . . . . . . 380
Apache HBase Performance Tuning . . . . . . . . . . 381
92. Operating System . . . . . . . . . . 382
93. Network . . . . . . . . . . 383
94. Java . . . . . . . . . . 385
95. HBase Configurations . . . . . . . . . . 386
96. ZooKeeper . . . . . . . . . . 390
97. Schema Design . . . . . . . . . . 391
98. HBase General Patterns . . . . . . . . . . 395
99. Writing to HBase . . . . . . . . . . 396
100. Reading from HBase . . . . . . . . . . 399
101. Deleting from HBase . . . . . . . . . . 404
102. HDFS . . . . . . . . . . 405
103. Amazon EC2 . . . . . . . . . . 407
104. Collocating HBase and MapReduce . . . . . . . . . . 408
105. Case Studies . . . . . . . . . . 409
Troubleshooting and Debugging Apache HBase . . . . . . . . . . 410
106. General Guidelines . . . . . . . . . . 411
107. Logs . . . . . . . . . . 412
108. Resources . . . . . . . . . . 416
109. Tools . . . . . . . . . . 417
110. Client . . . . . . . . . . 425
111. MapReduce . . . . . . . . . . 429
112. NameNode . . . . . . . . . . 431
113. Network . . . . . . . . . . 434
114. RegionServer . . . . . . . . . . 435
115. Master . . . . . . . . . . 444
116. ZooKeeper . . . . . . . . . . 446
117. Amazon EC2 . . . . . . . . . . 447
118. HBase and Hadoop version issues . . . . . . . . . . 448
119. HBase and HDFS . . . . . . . . . . 449
120. Running unit or integration tests . . . . . . . . . . 452
121. Case Studies . . . . . . . . . . 453
122. Cryptographic Features . . . . . . . . . . 454
123. Operating System Specific Issues . . . . . . . . . . 455
124. JDK Issues . . . . . . . . . . 456
Apache HBase Case Studies . . . . . . . . . . 457
125. Overview . . . . . . . . . . 458
126. Schema Design . . . . . . . . . . 459
127. Performance/Troubleshooting . . . . . . . . . . 460
Apache HBase Operational Management . . . . . . . . . . 464
128. HBase Tools and Utilities . . . . . . . . . . 465
129. Region Management . . . . . . . . . . 484
130. Node Management . . . . . . . . . . 485
131. HBase Metrics . . . . . . . . . . 491
132. HBase Monitoring . . . . . . . . . . 496
133. Cluster Replication . . . . . . . . . . 500
134. Running Multiple Workloads On a Single Cluster . . . . . . . . . . 513
135. HBase Backup . . . . . . . . . . 521
136. HBase Snapshots . . . . . . . . . . 523
137. Storing Snapshots in Microsoft Azure Blob Storage . . . . . . . . . . 527
138. Capacity Planning and Region Sizing . . . . . . . . . . 528
139. Table Rename . . . . . . . . . . 532
140. RegionServer Grouping . . . . . . . . . . 533
141. Region Normalizer . . . . . . . . . . 537
Building and Developing Apache HBase . . . . . . . . . . 542
142. Getting Involved . . . . . . . . . . 543
143. Apache HBase Repositories . . . . . . . . . . 546
144. IDEs . . . . . . . . . . 547
145. Building Apache HBase . . . . . . . . . . 550
146. Releasing Apache HBase . . . . . . . . . . 554
147. Voting on Release Candidates . . . . . . . . . . 562
148. Announcing Releases . . . . . . . . . . 563
149. Generating the HBase Reference Guide . . . . . . . . . . 564
150. Updating hbase.apache.org . . . . . . . . . . 565
151. Tests . . . . . . . . . . 566
152. Developer Guidelines . . . . . . . . . . 580
Unit Testing HBase Applications . . . . . . . . . . 595
153. JUnit . . . . . . . . . . 596
154. Mockito . . . . . . . . . . 598
155. MRUnit . . . . . . . . . . 600
156. Integration Testing with an HBase Mini-Cluster . . . . . . . . . . 602
Protobuf in HBase . . . . . . . . . . 604
157. Protobuf . . . . . . . . . . 605
Procedure Framework (Pv2): HBASE-12439 . . . . . . . . . . 607
158. Procedures . . . . . . . . . . 608
159. Subprocedures . . . . . . . . . . 611
160. ProcedureExecutor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê612
161. Nonces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê613
162. Wait/Wake/Suspend/Yield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê614
163. Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê615
164. Procedure Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê616
165. References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê617
AMv2 Description for Devs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê618
166. Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê619
167. New System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê620
168. Procedures Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê621
169. UI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê623
170. Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê624
171. Implementation Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê625
172. New Configs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê626
173. Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê627
ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê628
174. Using existing ZooKeeper ensemble. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê630
175. SASL Authentication with ZooKeeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê631
Community . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê638
176. Decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê639
177. Community Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê640
178. Commit Message format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê641
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê642
Appendix A: Contributing to Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê643
Appendix B: FAQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê654
Appendix C: hbck In Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê657
Appendix D: Access Control Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê661
Appendix E: Compression and Data Block Encoding In HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê667
Appendix F: SQL over HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê678
Appendix G: YCSB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê679
Appendix H: HFile format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê680
Appendix I: Other Information About HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê689
Appendix J: HBase History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê690
Appendix K: HBase and the Apache Software Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê691
Appendix L: Apache HBase Orca . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê692
Appendix M: Enabling Dapper-like Tracing in HBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê693
179. Client Modifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê695
180. Tracing from HBase Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê696
Appendix N: 0.95 RPC Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê697
Appendix O: Known Incompatibilities Among HBase Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê701
181. HBase 2.0 Incompatible Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ê702
Preface
This is the official reference guide for the HBase version it ships with.
Herein you will find either the definitive documentation on an HBase topic as of its standing when
the referenced HBase version shipped, or a pointer to the location in Javadoc or JIRA where the
pertinent information can be found.
About This Guide
This reference guide is a work in progress. The source for this guide can be found in the
src/main/asciidoc directory of the HBase source. This reference guide is marked up using AsciiDoc,
from which the finished guide is generated as part of the 'site' build target. Run

mvn site

to generate this documentation. Amendments and improvements to the documentation are
welcomed. Click this link to file a new documentation bug against Apache HBase with some values
pre-selected.
Contributing to the Documentation
For an overview of AsciiDoc and suggestions to get started contributing to the documentation, see
the relevant section later in this documentation.
Heads-up if this is your first foray into the world of distributed computing…
If this is your first foray into the wonderful world of Distributed Computing, then you are in for
some interesting times. First off, distributed systems are hard; making a distributed system hum
requires a disparate skillset that spans systems (hardware and software) and networking.
Your cluster’s operation can hiccup because of any of a myriad set of reasons from bugs in HBase
itself through misconfigurations — misconfiguration of HBase but also operating system
misconfigurations — through to hardware problems whether it be a bug in your network card
drivers or an underprovisioned RAM bus (to mention two recent examples of hardware issues that
manifested as "HBase is slow"). You will also need to do a recalibration if up to this point your
computing has been bound to a single box. Here is one good starting point: Fallacies of Distributed
Computing.
That said, you are welcome.
It’s a fun place to be.
Yours, the HBase Community.
Reporting Bugs
Please use JIRA to report non-security-related bugs.
To protect existing HBase installations from new vulnerabilities, please do not use JIRA to report
security-related bugs. Instead, send your report to the mailing list private@apache.org, which
allows anyone to send messages, but restricts who can read them. Someone on that list will contact
you to follow up on your report.
Support and Testing Expectations
The phrases "supported", "not supported", "tested", and "not tested" occur in several places
throughout this guide. In the interest of clarity, here is a brief explanation of what is generally
meant by these phrases, in the context of HBase.

Commercial technical support for Apache HBase is provided by many Hadoop
vendors. This is not the sense in which the term "support" is used in the context of
the Apache HBase project. The Apache HBase team assumes no responsibility for
your HBase clusters, your configuration, or your data.

Supported
In the context of Apache HBase, "supported" means that HBase is designed to work in the way
described, and deviation from the defined behavior or functionality should be reported as a bug.

Not Supported
In the context of Apache HBase, "not supported" means that a use case or use pattern is not
expected to work and should be considered an antipattern. If you think this designation should
be reconsidered for a given feature or use pattern, file a JIRA or start a discussion on one of the
mailing lists.

Tested
In the context of Apache HBase, "tested" means that a feature is covered by unit or integration
tests, and has been proven to work as expected.

Not Tested
In the context of Apache HBase, "not tested" means that a feature or use pattern may or may not
work in a given way, and may or may not corrupt your data or cause operational issues. It is an
unknown, and there are no guarantees. If you can provide proof that a feature designated as
"not tested" does work in a given way, please submit the tests and/or the metrics so that other
users can gain certainty about such features or use patterns.
Getting Started
Chapter 2. Quick Start - Standalone HBase
This section describes the setup of a single-node standalone HBase. A standalone instance has all
HBase daemons — the Master, RegionServers, and ZooKeeper — running in a single JVM persisting
to the local filesystem. It is our most basic deploy profile. We will show you how to create a table in
HBase using the hbase shell CLI, insert rows into the table, perform put and scan operations
against the table, enable or disable the table, and start and stop HBase.
Apart from downloading HBase, this procedure should take less than 10 minutes.
2.1. JDK Version Requirements
HBase requires that a JDK be installed. See Java for information about supported JDK versions.
2.2. Get Started with HBase
Procedure: Download, Configure, and Start HBase in Standalone Mode
1. Choose a download site from this list of Apache Download Mirrors. Click on the suggested top
link. This will take you to a mirror of HBase Releases. Click on the folder named stable and then
download the binary file that ends in .tar.gz to your local filesystem. Do not download the file
ending in src.tar.gz for now.
2. Extract the downloaded file, and change to the newly-created directory.
$ tar xzvf hbase-2.1.0-bin.tar.gz
$ cd hbase-2.1.0/
3. You are required to set the JAVA_HOME environment variable before starting HBase. You can set
the variable via your operating system’s usual mechanism, but HBase provides a central
mechanism, conf/hbase-env.sh. Edit this file, uncomment the line starting with JAVA_HOME, and set
it to the appropriate location for your operating system. The JAVA_HOME variable should be set to
a directory which contains the executable file bin/java. Most modern Linux operating systems
provide a mechanism, such as /usr/bin/alternatives on RHEL or CentOS, for transparently
switching between versions of executables such as Java. In this case, you can set JAVA_HOME to the
directory containing the symbolic link to bin/java, which is usually /usr.
JAVA_HOME=/usr
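If you are unsure where your JDK lives, a small sketch like the following can derive a candidate JAVA_HOME from the java binary on your PATH. This is an illustration, not an official HBase helper; the /usr/bin/java fallback is an assumption, and you should always verify the result contains bin/java before committing it to conf/hbase-env.sh.

```shell
# Derive a candidate JAVA_HOME from the java found on the PATH.
# Falls back to /usr/bin/java (a hypothetical common location) if none is found.
java_bin="$(command -v java || echo /usr/bin/java)"
# Resolve symlinks (e.g. the /usr/bin/alternatives indirection), where possible.
java_bin="$(readlink -f "$java_bin" 2>/dev/null || echo "$java_bin")"
# Strip the trailing /bin/java to get the JDK home directory.
candidate="$(dirname "$(dirname "$java_bin")")"
echo "candidate JAVA_HOME: $candidate"
```

On a system where java resolves to /usr/bin/java, this prints /usr as the candidate, matching the example above.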
4. Edit conf/hbase-site.xml, which is the main HBase configuration file. At this time, you need to
specify the directory on the local filesystem where HBase and ZooKeeper write data and
acknowledge some risks. By default, a new directory is created under /tmp. Many servers are
configured to delete the contents of /tmp upon reboot, so you should store the data elsewhere.
The following configuration will store HBase’s data in the hbase directory, in the home directory
of the user called testuser. Paste the <property> tags beneath the <configuration> tags, which
should be empty in a new HBase install.
Example 1. Example hbase-site.xml for Standalone HBase
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/testuser/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/testuser/zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
    <description>
      Controls whether HBase will check for stream capabilities (hflush/hsync).
      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
      with the 'file://' scheme, but be mindful of the NOTE below.
      WARNING: Setting this to false blinds you to potential data loss and
      inconsistent system state in the event of process and/or node failures. If
      HBase is complaining of an inability to use hsync or hflush it's most
      likely not a false positive.
    </description>
  </property>
</configuration>
You do not need to create the HBase data directory. HBase will do this for you. If you create the
directory, HBase will attempt to do a migration, which is not what you want.
The hbase.rootdir in the above example points to a directory in the local
filesystem. The 'file://' prefix is how we denote local filesystem. You should take
the WARNING present in the configuration example to heart. In standalone
mode HBase makes use of the local filesystem abstraction from the Apache
Hadoop project. That abstraction doesn’t provide the durability promises that
HBase needs to operate safely. This is fine for local development and testing
use cases where the cost of cluster failure is well contained. It is not
appropriate for production deployments; eventually you will lose data.
To home HBase on an existing instance of HDFS, set the hbase.rootdir to point at a directory up on
your instance: e.g. hdfs://namenode.example.org:8020/hbase. For more on this variant, see the
section below on Standalone HBase over HDFS.
5. The bin/start-hbase.sh script is provided as a convenient way to start HBase. Issue the command,
and if all goes well, a message is logged to standard output showing that HBase started
successfully. You can use the jps command to verify that you have one running process called
HMaster. In standalone mode HBase runs all daemons within this single JVM, i.e. the HMaster, a
single HRegionServer, and the ZooKeeper daemon. Go to http://localhost:16010 to view the
HBase Web UI.
Java needs to be installed and available. If you get an error indicating that Java
is not installed, but it is on your system, perhaps in a non-standard location,
edit the conf/hbase-env.sh file and modify the JAVA_HOME setting to point to the
directory that contains bin/java on your system.
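The verification described above can be sketched as a one-off check, assuming the jps tool from your JDK is on the PATH:

```shell
# Check whether a standalone HBase instance is up by looking for the
# single HMaster process in jps output (jps ships with the JDK).
if jps 2>/dev/null | grep -qw HMaster; then
  echo "HBase appears to be running"
else
  echo "no HMaster process found"
fi
```

In standalone mode only HMaster appears; in the pseudo-distributed mode described later, you would look for HRegionServer and HQuorumPeer as well.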
Procedure: Use HBase For the First Time
1. Connect to HBase.
Connect to your running instance of HBase using the hbase shell command, located in the bin/
directory of your HBase install. In this example, some usage and version information that is
printed when you start HBase Shell has been omitted. The HBase Shell prompt ends with a >
character.
$ ./bin/hbase shell
hbase(main):001:0>
2. Display HBase Shell Help Text.
Type help and press Enter to display some basic usage information for HBase Shell, as well as
several example commands. Notice that table names, rows, and columns must all be enclosed in
quote characters.
3. Create a table.
Use the create command to create a new table. You must specify the table name and the
ColumnFamily name.
hbase(main):001:0> create 'test', 'cf'
0 row(s) in 0.4170 seconds
=> Hbase::Table - test
4. List Information About Your Table.

Use the list command to confirm your table exists.
hbase(main):002:0> list 'test'
TABLE
test
1 row(s) in 0.0180 seconds
=> ["test"]
Now use the describe command to see details, including configuration defaults.

hbase(main):003:0> describe 'test'
Table test is ENABLED
test
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false',
NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE',
CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER',
MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW',
CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false',
CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false',
COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
1 row(s)
Took 0.9998 seconds
5. Put data into your table.
To put data into your table, use the put command.
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0850 seconds
hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0110 seconds
hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0100 seconds
Here, we insert three values, one at a time. The first insert is at row1, column cf:a, with a value
of value1. Columns in HBase are composed of a column family prefix, cf in this example,
followed by a colon and then a column qualifier suffix, a in this case.
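This naming scheme can be illustrated in shell: splitting the column name at the first colon yields the family prefix and the qualifier suffix. This is a hypothetical illustration of the convention, not an HBase tool.

```shell
# Split an HBase column name into family prefix and qualifier suffix
# at the first colon; qualifiers may themselves contain further colons.
col='cf:a'
family="${col%%:*}"      # everything before the first colon -> cf
qualifier="${col#*:}"    # everything after the first colon  -> a
echo "family=$family qualifier=$qualifier"   # -> family=cf qualifier=a
```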
6. Scan the table for all data at once.
One of the ways to get data from HBase is to scan. Use the scan command to scan the table for
data. You can limit your scan, but for now, all data is fetched.
hbase(main):006:0> scan 'test'
ROW                COLUMN+CELL
 row1              column=cf:a, timestamp=1421762485768, value=value1
 row2              column=cf:b, timestamp=1421762491785, value=value2
 row3              column=cf:c, timestamp=1421762496210, value=value3
3 row(s) in 0.0230 seconds
7. Get a single row of data.
To get a single row of data at a time, use the get command.
hbase(main):007:0> get 'test', 'row1'
COLUMN             CELL
 cf:a              timestamp=1421762485768, value=value1
1 row(s) in 0.0350 seconds
8. Disable a table.
If you want to delete a table or change its settings, as well as in some other situations, you need
to disable the table first, using the disable command. You can re-enable it using the enable
command.
hbase(main):008:0> disable 'test'
0 row(s) in 1.1820 seconds
hbase(main):009:0> enable 'test'
0 row(s) in 0.1770 seconds
Disable the table again if you tested the enable command above:
hbase(main):010:0> disable 'test'
0 row(s) in 1.1820 seconds
9. Drop the table.
To drop (delete) a table, use the drop command.
hbase(main):011:0> drop 'test'
0 row(s) in 0.1370 seconds
10. Exit the HBase Shell.
To exit the HBase Shell and disconnect from your cluster, use the quit command. HBase is still
running in the background.
Procedure: Stop HBase
1. In the same way that the bin/start-hbase.sh script is provided to conveniently start all HBase
daemons, the bin/stop-hbase.sh script stops them.
$ ./bin/stop-hbase.sh
stopping hbase....................
$
2. After issuing the command, it can take several minutes for the processes to shut down. Use the
jps command to be sure that the HMaster and HRegionServer processes are shut down.

The above has shown you how to start and stop a standalone instance of HBase. In the next sections
we give a quick overview of other HBase deploy modes.
2.3. Pseudo-Distributed Local Install
After working your way through quickstart standalone mode, you can re-configure HBase to run in
pseudo-distributed mode. Pseudo-distributed mode means that HBase still runs completely on a
single host, but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate
process; in standalone mode, all daemons ran in a single JVM process. By default, unless you
configure the hbase.rootdir property as described in quickstart, your data is still stored in /tmp/. In
this walk-through, we store your data in HDFS instead, assuming you have HDFS available. You can
skip the HDFS configuration to continue storing your data in the local filesystem.
Hadoop Configuration
This procedure assumes that you have configured Hadoop and HDFS on your local
system and/or a remote system, and that they are running and available. It also
assumes you are using Hadoop 2. The guide on Setting up a Single Node Cluster in
the Hadoop documentation is a good starting point.
1. Stop HBase if it is running.
If you have just finished quickstart and HBase is still running, stop it. This procedure will create
a totally new directory where HBase will store its data, so any databases you created before will
be lost.
2. Configure HBase.
Edit the hbase-site.xml configuration. First, add the following property which directs HBase to
run in distributed mode, with one JVM instance per daemon.
<property>
Ê <name>hbase.cluster.distributed</name>
Ê <value>true</value>
</property>
Next, change the hbase.rootdir from the local filesystem to the address of your HDFS instance,
using the hdfs:// URI syntax. In this example, HDFS is running on the localhost at port 8020.
Be sure to either remove the entry for hbase.unsafe.stream.capability.enforce or set it to true.
<property>
Ê <name>hbase.rootdir</name>
Ê <value>hdfs://localhost:8020/hbase</value>
</property>
You do not need to create the directory in HDFS. HBase will do this for you. If you create the
directory, HBase will attempt to do a migration, which is not what you want.
3. Start HBase.
Use the bin/start-hbase.sh command to start HBase. If your system is configured correctly, the
jps command should show the HMaster and HRegionServer processes running.
4. Check the HBase directory in HDFS.
If everything worked correctly, HBase created its directory in HDFS. In the configuration above,
it is stored in /hbase/ on HDFS. You can use the hadoop fs command in Hadoop’s bin/ directory to
list this directory.
$ ./bin/hadoop fs -ls /hbase
Found 7 items
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/.tmp
drwxr-xr-x - hbase users 0 2014-06-25 21:49 /hbase/WALs
drwxr-xr-x - hbase users 0 2014-06-25 18:48 /hbase/corrupt
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/data
-rw-r--r-- 3 hbase users 42 2014-06-25 18:41 /hbase/hbase.id
-rw-r--r-- 3 hbase users 7 2014-06-25 18:41 /hbase/hbase.version
drwxr-xr-x - hbase users 0 2014-06-25 21:49 /hbase/oldWALs
5. Create a table and populate it with data.
You can use the HBase Shell to create a table, populate it with data, scan and get values from it,
using the same procedure as in shell exercises.
6. Start and stop a backup HBase Master (HMaster) server.
Running multiple HMaster instances on the same hardware does not make
sense in a production environment, in the same way that running a pseudo-
distributed cluster does not make sense for production. This step is offered for
testing and learning purposes only.
The HMaster server controls the HBase cluster. You can start up to 9 backup HMaster servers,
which makes 10 total HMasters, counting the primary. To start a backup HMaster, use the local-
master-backup.sh. For each backup master you want to start, add a parameter representing the
port offset for that master. Each HMaster uses two ports (16000 and 16010 by default). The port
offset is added to these ports, so using an offset of 2, the backup HMaster would use ports 16002
and 16012. The following command starts 3 backup servers using ports 16002/16012,
16003/16013, and 16005/16015.
$ ./bin/local-master-backup.sh start 2 3 5
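The port arithmetic described above can be sketched as follows: for the offsets passed to the command, each backup HMaster binds the default base ports plus its offset.

```shell
# Ports used by backup HMasters started with offsets 2, 3, and 5,
# given the default base ports 16000 (RPC) and 16010 (info/Web UI).
for offset in 2 3 5; do
  echo "offset $offset -> ports $((16000 + offset))/$((16010 + offset))"
done
# -> offset 2 -> ports 16002/16012
# -> offset 3 -> ports 16003/16013
# -> offset 5 -> ports 16005/16015
```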
To kill a backup master without killing the entire cluster, you need to find its process ID (PID).
The PID is stored in a file with a name like /tmp/hbase-USER-X-master.pid, whose only contents
are the PID. You can use the kill -9 command to kill that PID. The following command will kill
the master with port offset 1, but leave the cluster running:

$ cat /tmp/hbase-testuser-1-master.pid | xargs kill -9
7. Start and stop additional RegionServers
The HRegionServer manages the data in its StoreFiles as directed by the HMaster. Generally, one
HRegionServer runs per node in the cluster. Running multiple HRegionServers on the same
system can be useful for testing in pseudo-distributed mode. The local-regionservers.sh
command allows you to run multiple RegionServers. It works in a similar way to the local-
master-backup.sh command, in that each parameter you provide represents the port offset for
an instance. Each RegionServer requires two ports, and the default ports are 16020 and 16030.
Since HBase version 1.1.0, the HMaster does not use region server ports, which leaves 10 ports
(16020 to 16029 and 16030 to 16039) available for RegionServers. To support additional
RegionServers, set the environment variables HBASE_RS_BASE_PORT and
HBASE_RS_INFO_BASE_PORT to appropriate values before running the local-regionservers.sh
script. For example, with values 16200 and 16300 for the base ports, 99 additional RegionServers
can be supported on a server. The following command starts four additional RegionServers,
running on sequential ports starting at 16022/16032 (base ports 16020/16030 plus 2).
$ ./bin/local-regionservers.sh start 2 3 4 5
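The capacity reasoning above can be checked with a little arithmetic: the number of usable offsets is the gap between the RPC base port and the info base port, since the RPC range must not run into the info range. The 16200/16300 bases are the example values from the text.

```shell
# Offsets that fit between the RPC base port and the info base port.
echo "default bases 16020/16030: $((16030 - 16020)) offsets (0-9)"
echo "bases 16200/16300: $((16300 - 16200)) offsets (0-99)"
```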
To stop a RegionServer manually, use the local-regionservers.sh command with the stop
parameter and the offset of the server to stop.
$ ./bin/local-regionservers.sh stop 3
8. Stop HBase.
You can stop HBase the same way as in the quickstart procedure, using the bin/stop-hbase.sh
command.
2.4. Advanced - Fully Distributed
In reality, you need a fully-distributed configuration to fully test HBase and to use it in real-world
scenarios. In a distributed configuration, the cluster contains multiple nodes, each of which runs
one or more HBase daemon. These include primary and backup Master instances, multiple
ZooKeeper nodes, and multiple RegionServer nodes.
This advanced quickstart adds two more nodes to your cluster. The architecture will be as follows:
Table 1. Distributed Cluster Demo Architecture
Node Name Master ZooKeeper RegionServer
node-a.example.com yes yes no
node-b.example.com backup yes yes
node-c.example.com no yes yes
This quickstart assumes that each node is a virtual machine and that they are all on the same
network. It builds upon the previous quickstart, Pseudo-Distributed Local Install, assuming that the
system you configured in that procedure is now node-a. Stop HBase on node-a before continuing.
Be sure that all the nodes have full access to communicate, and that no firewall
rules are in place which could prevent them from talking to each other. If you see
any errors like no route to host, check your firewall.
Procedure: Configure Passwordless SSH Access
node-a needs to be able to log into node-b and node-c (and to itself) in order to start the daemons.
The easiest way to accomplish this is to use the same username on all hosts, and configure
password-less SSH login from node-a to each of the others.
1. On node-a, generate a key pair.
While logged in as the user who will run HBase, generate an SSH key pair using the following
command:
$ ssh-keygen -t rsa
If the command succeeds, the location of the key pair is printed to standard output. The default
name of the public key is id_rsa.pub.
2. Create the directory that will hold the shared keys on the other nodes.
On node-b and node-c, log in as the HBase user and create a .ssh/ directory in the user’s home
directory, if it does not already exist. If it already exists, be aware that it may already contain
other keys.
3. Copy the public key to the other nodes.
Securely copy the public key from node-a to each of the nodes, using scp or some other
secure means. On each of the other nodes, create a new file called .ssh/authorized_keys if it does
not already exist, and append the contents of the id_rsa.pub file to the end of it. Note that you
also need to do this for node-a itself.
$ cat id_rsa.pub >> ~/.ssh/authorized_keys
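Where the ssh-copy-id utility is available, steps 2 and 3 can be collapsed into one loop per node. The hostnames below are the example cluster's, and the echo makes this a dry run that only prints the commands; remove it to actually copy the keys.

```shell
# Dry run: print the ssh-copy-id command for each node, including node-a
# itself. ssh-copy-id creates the remote ~/.ssh/authorized_keys if needed
# and appends the key to it.
for host in node-a.example.com node-b.example.com node-c.example.com; do
  echo ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "$host"
done
```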
4. Test password-less login.
If you performed the procedure correctly, you should not be prompted for a password when
you SSH from node-a to either of the other nodes using the same username.
5. Since node-b will run a backup Master, repeat the procedure above, substituting node-b
everywhere you see node-a. Be sure not to overwrite your existing .ssh/authorized_keys files, but
concatenate the new key onto the existing file using the >> operator rather than the > operator.
Procedure: Prepare node-a
node-a will run your primary master and ZooKeeper processes, but no RegionServers. Stop the
RegionServer from starting on node-a.
1. Edit conf/regionservers and remove the line which contains localhost. Add lines with the
hostnames or IP addresses for node-b and node-c.
Even if you did want to run a RegionServer on node-a, you should refer to it by the hostname the
other servers would use to communicate with it. In this case, that would be node-a.example.com.
This enables you to distribute the configuration to each node of your cluster without any hostname
conflicts. Save the file.
2. Configure HBase to use node-b as a backup master.
Create a new file in conf/ called backup-masters, and add a new line to it with the hostname for
node-b. In this demonstration, the hostname is node-b.example.com.
3. Configure ZooKeeper
In reality, you should carefully consider your ZooKeeper configuration. You can find out more
about configuring ZooKeeper in the zookeeper section. This configuration will direct HBase to start
and manage a ZooKeeper instance on each node of the cluster.
On node-a, edit conf/hbase-site.xml and add the following properties.
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node-a.example.com,node-b.example.com,node-c.example.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/usr/local/zookeeper</value>
</property>
4. Everywhere in your configuration that you have referred to node-a as localhost, change the
reference to point to the hostname that the other nodes will use to refer to node-a. In these
examples, the hostname is node-a.example.com.
Procedure: Prepare node-b and node-c
node-b will run a backup master server and a ZooKeeper instance.
1. Download and unpack HBase.
Download and unpack HBase to node-b, just as you did for the standalone and pseudo-
distributed quickstarts.
2. Copy the configuration files from node-a to node-b and node-c.
Each node of your cluster needs to have the same configuration information. Copy the contents
of the conf/ directory to the conf/ directory on node-b and node-c.
Procedure: Start and Test Your Cluster
1. Be sure HBase is not running on any node.
If you forgot to stop HBase from previous testing, you will have errors. Check to see whether
HBase is running on any of your nodes by using the jps command. Look for the processes
HMaster, HRegionServer, and HQuorumPeer. If they exist, kill them.
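Step 1 can be scripted. The jps output below is a canned sample so the filtering can be seen in isolation; the final kill is shown as a comment rather than executed.

```shell
# Canned jps output; in practice you would use: jps_output=$(jps)
jps_output="20355 Jps
20071 HQuorumPeer
20137 HMaster
15930 HRegionServer"

# Keep only the HBase-related processes and extract their PIDs.
pids=$(printf '%s\n' "$jps_output" \
  | awk '/HMaster|HRegionServer|HQuorumPeer/ {print $1}')
echo "$pids"
# To stop leftovers: kill $pids   (escalate to kill -9 only if they linger)
```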
2. Start the cluster.
On node-a, issue the start-hbase.sh command. Your output will be similar to that below.
$ bin/start-hbase.sh
node-c.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-
hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-c.example.com.out
node-a.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-
hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-a.example.com.out
node-b.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-
hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-b.example.com.out
starting master, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-
hbuser-master-node-a.example.com.out
node-c.example.com: starting regionserver, logging to /home/hbuser/hbase-0.98.3-
hadoop2/bin/../logs/hbase-hbuser-regionserver-node-c.example.com.out
node-b.example.com: starting regionserver, logging to /home/hbuser/hbase-0.98.3-
hadoop2/bin/../logs/hbase-hbuser-regionserver-node-b.example.com.out
node-b.example.com: starting master, logging to /home/hbuser/hbase-0.98.3-
hadoop2/bin/../logs/hbase-hbuser-master-nodeb.example.com.out
ZooKeeper starts first, followed by the master, then the RegionServers, and finally the backup
masters.
3. Verify that the processes are running.
On each node of the cluster, run the jps command and verify that the correct processes are
running on each server. You may see additional Java processes running on your servers as well,
if they are used for other purposes.
node-a jps Output
$ jps
20355 Jps
20071 HQuorumPeer
20137 HMaster
node-b jps Output
$ jps
15930 HRegionServer
16194 Jps
15838 HQuorumPeer
16010 HMaster
node-c jps Output
$ jps
13901 Jps
13639 HQuorumPeer
13737 HRegionServer
ZooKeeper Process Name
The HQuorumPeer process is a ZooKeeper instance which is controlled and
started by HBase. If you use ZooKeeper this way, it is limited to one instance
per cluster node and is appropriate for testing only. If ZooKeeper is run outside
of HBase, the process is called QuorumPeer. For more about ZooKeeper
configuration, including using an external ZooKeeper instance with HBase, see
the zookeeper section.
4. Browse to the Web UI.
Web UI Port Changes
In HBase newer than 0.98.x, the HTTP ports used by the HBase Web UI changed from 60010 for
the Master and 60030 for each RegionServer to 16010 for the Master and 16030 for the
RegionServer.
If everything is set up correctly, you should be able to connect to the UI for the Master at
http://node-a.example.com:16010/ or the secondary master at http://node-b.example.com:16010/
using a web browser. If you can connect via localhost but not from another host, check your
firewall rules. You can see the web UI for each of the RegionServers at port 16030 of their IP
addresses, or by clicking their links in the web UI for the Master.
5. Test what happens when nodes or services disappear.
With a three-node cluster like the one you have configured, things will not be very resilient. You can still test
the behavior of the primary Master or a RegionServer by killing the associated processes and
watching the logs.
2.5. Where to go next
The next chapter, configuration, gives more information about the different HBase run modes,
system requirements for running HBase, and critical configuration areas for setting up a
distributed HBase cluster.
Apache HBase Configuration
This chapter expands upon the Getting Started chapter to further explain configuration of Apache
HBase. Please read this chapter carefully, especially the Basic Prerequisites to ensure that your
HBase testing and deployment goes smoothly. Familiarize yourself with Support and Testing
Expectations as well.
Chapter 3. Configuration Files
Apache HBase uses the same configuration system as Apache Hadoop. All configuration files are
located in the conf/ directory, which needs to be kept in sync for each node on your cluster.
HBase Configuration File Descriptions
backup-masters
Not present by default. A plain-text file which lists hosts on which the Master should start a
backup Master process, one host per line.
hadoop-metrics2-hbase.properties
Used to connect HBase to Hadoop’s Metrics2 framework. See the Hadoop Wiki entry for more
information on Metrics2. Contains only commented-out examples by default.
hbase-env.cmd and hbase-env.sh
Script for Windows and Linux / Unix environments to set up the working environment for
HBase, including the location of Java, Java options, and other environment variables. The file
contains many commented-out examples to provide guidance.
hbase-policy.xml
The default policy configuration file used by RPC servers to make authorization decisions on
client requests. Only used if HBase security is enabled.
hbase-site.xml
The main HBase configuration file. This file specifies configuration options which override
HBase’s default configuration. You can view (but do not edit) the default configuration file at
docs/hbase-default.xml. You can also view the entire effective configuration for your cluster
(defaults and overrides) in the HBase Configuration tab of the HBase Web UI.
log4j.properties
Configuration file for HBase logging via log4j.
regionservers
A plain-text file containing a list of hosts which should run a RegionServer in your HBase cluster.
By default this file contains the single entry localhost. It should contain a list of hostnames or IP
addresses, one per line, and should only contain localhost if each node in your cluster will run a
RegionServer on its localhost interface.
Checking XML Validity
When you edit XML, it is a good idea to use an XML-aware editor to be sure that
your syntax is correct and your XML is well-formed. You can also use the xmllint
utility to check that your XML is well-formed. By default, xmllint re-flows and
prints the XML to standard output. To check for well-formedness and only print
output if errors exist, use the command xmllint -noout filename.xml.
Keep Configuration In Sync Across the Cluster
When running in distributed mode, after you make an edit to an HBase
configuration, make sure you copy the contents of the conf/ directory to all nodes
of the cluster. HBase will not do this for you. Use rsync, scp, or another secure
mechanism for copying the configuration files to your nodes. For most
configurations, a restart is needed for servers to pick up changes. Dynamic
configuration is an exception to this, to be described later below.
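The copy step can be scripted. The loop below only prints the rsync commands (a dry run) so they can be reviewed before executing; the host list and conf path are assumptions for this example cluster.

```shell
# Hypothetical HBase install location; adjust for your cluster.
HBASE_CONF=/opt/hbase/conf

# Build one sync command per non-master node. The commands are collected
# and printed rather than executed, so this is a reviewable dry run.
cmds=$(for host in node-b.example.com node-c.example.com; do
  echo "rsync -az --delete $HBASE_CONF/ $host:$HBASE_CONF/"
done)
printf '%s\n' "$cmds"
```

Dropping the surrounding echo would perform the actual copy; the trailing slashes matter to rsync (copy directory contents, not the directory itself).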
Chapter 4. Basic Prerequisites
This section lists required services and some required system configuration.
Java
The following table summarizes the recommendations of the HBase community with respect to
deploying on various Java versions. An entry of "yes" indicates a base level of testing and a
willingness to help diagnose and address issues you might run into. Similarly, an entry of "no" or
"Not Supported" generally means that should you run into an issue the community is likely to ask
you to change the Java environment before proceeding to help. In some cases, specific guidance on
limitations (e.g. whether compiling / unit tests work, specific operational issues, etc.) will also be
noted.
Long Term Support JDKs are recommended
HBase recommends downstream users rely on JDK releases that are marked as
Long Term Supported (LTS) either from the OpenJDK project or vendors. As of
March 2018 that means Java 8 is the only applicable version and that the next
likely version to see testing will be Java 11 near Q3 2018.
Table 2. Java support by release line
HBase Version JDK 7 JDK 8 JDK 9 JDK 10
2.0 Not Supported yes Not Supported Not Supported
1.3 yes yes Not Supported Not Supported
1.2 yes yes Not Supported Not Supported
HBase will neither build nor run with Java 6.
You must set JAVA_HOME on each node of your cluster. hbase-env.sh provides a handy
mechanism to do this.
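For example, the relevant line in conf/hbase-env.sh looks like the following; the JDK path is an assumption for one common Linux layout and must be adjusted to wherever Java is installed on your nodes.

```shell
# In conf/hbase-env.sh -- the path below is illustrative only; point it
# at the root of the JDK installation on your nodes.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```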
Operating System Utilities
ssh
HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between
cluster nodes. Each server in the cluster must be running ssh so that the Hadoop and HBase
daemons can be managed. You must be able to connect to all nodes via SSH, including the local
node, from the Master as well as any backup Master, using a shared key rather than a password.
You can see the basic methodology for such a set-up in Linux or Unix systems at "Procedure:
Configure Passwordless SSH Access". If your cluster nodes use OS X, see the section, SSH: Setting
up Remote Desktop and Enabling Self-Login on the Hadoop wiki.
DNS
HBase uses the local hostname to self-report its IP address.
NTP
The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable,
but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is
one of the first things to check if you see unexplained problems in your cluster. It is
recommended that you run a Network Time Protocol (NTP) service, or another time-
synchronization mechanism on your cluster and that all nodes look to the same service for time
synchronization. See the Basic NTP Configuration at The Linux Documentation Project (TLDP) to
set up NTP.
Limits on Number of Files and Processes (ulimit)
Apache HBase is a database. It requires the ability to open a large number of files at once. Many
Linux distributions limit the number of files a single user is allowed to open to 1024 (or 256 on
older versions of OS X). You can check this limit on your servers by running the command ulimit
-n when logged in as the user which runs HBase. See the Troubleshooting section for some of the
problems you may experience if the limit is too low. You may also notice errors such as the
following:
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception
increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block
blk_-6935524980745310745_1391901
It is recommended to raise the ulimit to at least 10,000, but more likely 10,240, because the value
is usually expressed in multiples of 1024. Each ColumnFamily has at least one StoreFile, and
possibly more than six StoreFiles if the region is under load. The number of open files required
depends upon the number of ColumnFamilies and the number of regions. The following is a
rough formula for calculating the potential number of open files on a RegionServer.
Calculate the Potential Number of Open Files
(StoreFiles per ColumnFamily) x (ColumnFamilies per region) x (regions per RegionServer)
For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3
StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 *
3 * 100 = 900 file descriptors, not counting open JAR files, configuration files, and others.
Opening a file does not take many resources, and the risk of allowing a user to open too many
files is minimal.
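The worked example above can be checked with shell arithmetic, using the numbers from the text:

```shell
# Numbers from the example: 3 ColumnFamilies per region, 3 StoreFiles per
# ColumnFamily, 100 regions on the RegionServer.
cfs_per_region=3
storefiles_per_cf=3
regions=100

open_files=$((storefiles_per_cf * cfs_per_region * regions))
echo "$open_files"   # 900 descriptors, before JAR files, config files, and sockets
```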
Another related setting is the number of processes a user is allowed to run at once. In Linux and
Unix, the number of processes is set using the ulimit -u command. This should not be confused
with the nproc command, which reports the number of processing units available to a given user. Under
load, a ulimit -u that is too low can cause OutOfMemoryError exceptions.
Configuring the maximum number of file descriptors and processes for the user who is running
the HBase process is an operating system configuration, rather than an HBase configuration. It is
also important to be sure that the settings are changed for the user that actually runs HBase. To
see which user started HBase, and that user’s ulimit configuration, look at the first line of the
HBase log for that instance.
Example 2. ulimit Settings on Ubuntu
To configure ulimit settings on Ubuntu, edit /etc/security/limits.conf, which is a space-
delimited file with four columns. Refer to the man page for limits.conf for details about the
format of this file. In the following example, the first line sets both soft and hard limits for
the number of open files (nofile) to 32768 for the operating system user with the username
hadoop. The second line sets the number of processes to 32000 for the same user.
hadoop - nofile 32768
hadoop - nproc 32000
The settings are only applied if the Pluggable Authentication Module (PAM) environment is
directed to use them. To configure PAM to use these limits, be sure that the
/etc/pam.d/common-session file contains the following line:
session required pam_limits.so
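After logging back in, the limits in effect can be checked from a shell belonging to the user that runs HBase; the values you see depend on your system configuration.

```shell
# ulimit -n reports the open-file limit for the current shell session
# (under bash, ulimit -u similarly reports the max-user-processes limit
# discussed above).
nofile_limit=$(ulimit -n)
echo "open files: $nofile_limit"
```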
Linux Shell
All of the shell scripts that come with HBase rely on the GNU Bash shell.
Windows
Running production systems on Windows machines is not recommended.
4.1. Hadoop
The following table summarizes the versions of Hadoop supported with each version of HBase.
Older versions not appearing in this table are considered unsupported and likely missing necessary
features, while newer versions are untested but may be suitable.
Based on the version of HBase, you should select the most appropriate version of Hadoop. You can
use Apache Hadoop, or a vendor’s distribution of Hadoop. No distinction is made here. See the
Hadoop wiki for information about vendors of Hadoop.
Hadoop 2.x is recommended.
Hadoop 2.x is faster and includes features, such as short-circuit reads (see
Leveraging local data), which will help improve your HBase random read profile.
Hadoop 2.x also includes important bug fixes that will improve your overall HBase
experience. HBase does not support running with earlier versions of Hadoop. See
the table below for requirements specific to different HBase versions.
Hadoop 3.x is still in early access releases and has not yet been sufficiently tested
by the HBase community for production use cases.
Use the following legend to interpret this table:
Hadoop version support matrix
• "S" = supported
• "X" = not supported
• "NT" = Not tested
HBase-1.2.x HBase-1.3.x HBase-1.5.x HBase-2.0.x HBase-2.1.x
Hadoop-2.4.x S S X X X
Hadoop-2.5.x S S X X X
Hadoop-2.6.0 X X X X X
Hadoop-2.6.1+ S S X S X
Hadoop-2.7.0 X X X X X
Hadoop-2.7.1+ S S S S S
Hadoop-2.8.[0-1] X X X X X
Hadoop-2.8.2 NT NT NT NT NT
Hadoop-2.8.3+ NT NT NT S S
Hadoop-2.9.0 X X X X X
Hadoop-2.9.1+ NT NT NT NT NT
Hadoop-3.0.x X X X X X
Hadoop-3.1.0 X X X X X
Hadoop Pre-2.6.1 and JDK 1.8 Kerberos
When using pre-2.6.1 Hadoop versions and JDK 1.8 in a Kerberos environment,
the HBase server can fail and abort due to a Kerberos keytab relogin error. Late versions
of JDK 1.7 (1.7.0_80) have the problem too. Refer to HADOOP-10786 for additional
details. Consider upgrading to Hadoop 2.6.1+ in this case.
Hadoop 2.6.x
Hadoop distributions based on the 2.6.x line must have HADOOP-11710 applied if
you plan to run HBase on top of an HDFS Encryption Zone. Failure to do so will
result in cluster failure and data loss. This patch is present in Apache Hadoop
releases 2.6.1+.
Hadoop 2.y.0 Releases
Starting around the time of Hadoop version 2.7.0, the Hadoop PMC got into the
habit of calling out new minor releases on their major version 2 release line as not
stable / production ready. As such, HBase expressly advises downstream users to
avoid running on top of these releases. Note that additionally the 2.8.1 release was
given the same caveat by the Hadoop PMC. For reference, see the release
announcements for Apache Hadoop 2.7.0, Apache Hadoop 2.8.0, Apache Hadoop
2.8.1, and Apache Hadoop 2.9.0.
Hadoop 3.0.x Releases
Hadoop distributions that include the Application Timeline Service feature may
cause unexpected versions of HBase classes to be present in the application
classpath. Users planning on running MapReduce applications with HBase should
make sure that YARN-7190 is present in their YARN service (currently fixed in
2.9.1+ and 3.1.0+).
Hadoop 3.1.0 Release
The Hadoop PMC called out the 3.1.0 release as not stable / production ready. As
such, HBase expressly advises downstream users to avoid running on top of this
release. For reference, see the release announcement for Hadoop 3.1.0.
Replace the Hadoop Bundled With HBase!
Because HBase depends on Hadoop, it bundles Hadoop jars under its lib directory.
The bundled jars are ONLY for use in standalone mode. In distributed mode, it is
critical that the version of Hadoop that is out on your cluster match what is under
HBase. Replace the hadoop jars found in the HBase lib directory with the
equivalent hadoop jars from the version you are running on your cluster to avoid
version mismatch issues. Make sure you replace the jars under HBase across your
whole cluster. Hadoop version mismatch issues have various manifestations.
Check for mismatch if HBase appears hung.
4.1.1. dfs.datanode.max.transfer.threads
An HDFS DataNode has an upper bound on the number of files that it will serve at any one time.
Before doing any loading, make sure you have configured Hadoop’s conf/hdfs-site.xml, setting the
dfs.datanode.max.transfer.threads value to at least the following:
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
Be sure to restart your HDFS after making the above configuration.
Not having this configuration in place makes for strange-looking failures. One manifestation is a
complaint about missing blocks. For example:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
  blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes
  contain current block. Will get new block locations from namenode and retry...
See also casestudies.max.transfer.threads and note that this property was previously known as
dfs.datanode.max.xcievers.
Chapter 5. HBase run modes: Standalone
and Distributed
HBase has two run modes: standalone and distributed. Out of the box, HBase runs in standalone
mode. Whatever your mode, you will need to configure HBase by editing files in the HBase conf
directory. At a minimum, you must edit conf/hbase-env.sh to tell HBase which java to use. In this
file you set HBase environment variables such as the heapsize and other options for the JVM, the
preferred location for log files, etc. Set JAVA_HOME to point at the root of your java install.
5.1. Standalone HBase
This is the default mode. Standalone mode is what is described in the quickstart section. In
standalone mode, HBase does not use HDFS — it uses the local filesystem instead — and it runs all
HBase daemons and a local ZooKeeper all up in the same JVM. ZooKeeper binds to a well known
port so clients may talk to HBase.
5.1.1. Standalone HBase over HDFS
A sometimes useful variation on standalone HBase has all daemons running inside the one JVM
but, rather than persisting to the local filesystem, they persist to an HDFS instance.
You might consider this profile when you are intent on a simple deploy profile, the loading is light,
but the data must persist across node comings and goings. Writing to HDFS where data is replicated
ensures the latter.
To configure this standalone variant, edit your hbase-site.xml setting hbase.rootdir to point at a
directory in your HDFS instance but then set hbase.cluster.distributed to false. For example:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.org:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
</configuration>
5.2. Distributed
Distributed mode can be subdivided into distributed but all daemons run on a single node — a.k.a.
pseudo-distributed — and fully-distributed where the daemons are spread across all nodes in the
cluster. The pseudo-distributed vs. fully-distributed nomenclature comes from Hadoop.
Pseudo-distributed mode can run against the local filesystem or it can run against an instance of
the Hadoop Distributed File System (HDFS). Fully-distributed mode can ONLY run on HDFS. See the
Hadoop documentation for how to set up HDFS. A good walk-through for setting up HDFS on
Hadoop 2 can be found at http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-
definitive-guide.
5.2.1. Pseudo-distributed
Pseudo-Distributed Quickstart
A quickstart has been added to the quickstart chapter. See quickstart-pseudo.
Some of the information that was originally in this section has been moved there.
A pseudo-distributed mode is simply a fully-distributed mode run on a single host. Use this HBase
configuration for testing and prototyping purposes only. Do not use this configuration for
production or for performance evaluation.
5.3. Fully-distributed
By default, HBase runs in standalone mode. Both standalone mode and pseudo-distributed mode
are provided for the purposes of small-scale testing. For a production environment, distributed
mode is advised. In distributed mode, multiple instances of HBase daemons run on multiple servers
in the cluster.
Just as in pseudo-distributed mode, a fully distributed configuration requires that you set the
hbase.cluster.distributed property to true. Typically, the hbase.rootdir is configured to point to a
highly-available HDFS filesystem.
In addition, the cluster is configured so that multiple cluster nodes enlist as RegionServers,
ZooKeeper QuorumPeers, and backup HMaster servers. These configuration basics are all
demonstrated in quickstart-fully-distributed.
Distributed RegionServers
Typically, your cluster will contain multiple RegionServers all running on different servers, as well
as primary and backup Master and ZooKeeper daemons. The conf/regionservers file on the master
server contains a list of hosts whose RegionServers are associated with this cluster. Each host is on
a separate line. All hosts listed in this file will have their RegionServer processes started and
stopped when the master server starts or stops.
ZooKeeper and HBase
See the ZooKeeper section for ZooKeeper setup instructions for HBase.
Example 3. Example Distributed HBase Cluster
This is a bare-bones conf/hbase-site.xml for a distributed HBase cluster. A cluster that is used
for real-world work would contain more custom configuration parameters. Most HBase
configuration directives have default values, which are used unless the value is overridden in
the hbase-site.xml. See "Configuration Files" for more information.
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.org:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node-a.example.com,node-b.example.com,node-c.example.com</value>
  </property>
</configuration>
This is an example conf/regionservers file, which contains a list of nodes that should run a
RegionServer in the cluster. These nodes need HBase installed and they need to use the same
contents of the conf/ directory as the Master server.
node-a.example.com
node-b.example.com
node-c.example.com
This is an example conf/backup-masters file, which contains a list of each node that should run
a backup Master instance. The backup Master instances will sit idle unless the main Master
becomes unavailable.
node-b.example.com
node-c.example.com
Distributed HBase Quickstart
See quickstart-fully-distributed for a walk-through of a simple three-node cluster configuration
with multiple ZooKeeper, backup HMaster, and RegionServer instances.
Procedure: HDFS Client Configuration
1. Of note, if you have made HDFS client configuration changes on your Hadoop cluster, such as
configuration directives for HDFS clients, as opposed to server-side configurations, you must
use one of the following methods to enable HBase to see and use these configuration changes:
a. Add a pointer to your HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-
env.sh.
b. Add a copy of hdfs-site.xml (or hadoop-site.xml) or, better, symlinks, under
${HBASE_HOME}/conf, or
c. if only a small set of HDFS client configurations, add them to hbase-site.xml.
An example of such an HDFS client configuration is dfs.replication. If, for example, you want to
run with a replication factor of 5, HBase will create files with the default of 3 unless you do the
above to make the configuration available to HBase.
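For option c above, the replication example amounts to a single property in hbase-site.xml; the value 5 matches the example in the text.

```xml
<!-- hbase-site.xml: make the client-side HDFS replication factor
     visible to HBase (option c above). -->
<property>
  <name>dfs.replication</name>
  <value>5</value>
</property>
```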
Chapter 6. Running and Confirming Your
Installation
Make sure HDFS is running first. Start and stop the Hadoop HDFS daemons by running
sbin/start-dfs.sh in the HADOOP_HOME directory. You can ensure it started properly by testing the put and
get of files into the Hadoop filesystem. HBase does not normally use the MapReduce or YARN
daemons. These do not need to be started.
If you are managing your own ZooKeeper, start it and confirm it’s running, else HBase will start up
ZooKeeper for you as part of its start process.
Start HBase with the following command:
bin/start-hbase.sh
Run the above from the HBASE_HOME directory.
You should now have a running HBase instance. HBase logs can be found in the logs subdirectory.
Check them out especially if HBase had trouble starting.
HBase also puts up a UI listing vital attributes. By default it’s deployed on the Master host at port
16010 (HBase RegionServers listen on port 16020 by default and put up an informational HTTP
server at port 16030). If the Master is running on a host named master.example.org on the default
port, point your browser at http://master.example.org:16010 to see the web interface.
Once HBase has started, see the shell exercises section for how to create tables, add data, scan your
insertions, and finally disable and drop your tables.
To stop HBase after exiting the HBase shell enter
$ ./bin/stop-hbase.sh
stopping hbase...............
Shutdown can take a moment to complete. It can take longer if your cluster is comprised of many
machines. If you are running a distributed operation, be sure to wait until HBase has shut down
completely before stopping the Hadoop daemons.
Chapter 7. Default Configuration
7.1. hbase-site.xml and hbase-default.xml
Just as in Hadoop where you add site-specific HDFS configuration to the hdfs-site.xml file, for HBase,
site specific customizations go into the file conf/hbase-site.xml. For the list of configurable
properties, see hbase default configurations below or view the raw hbase-default.xml source file in
the HBase source code at src/main/resources.
Not all configuration options make it out to hbase-default.xml. Some configurations would only
appear in source code; the only way to identify these changes is through code review.
Currently, changes here will require a cluster restart for HBase to notice the change.
7.2. HBase Default Configuration
The documentation below is generated using the default hbase configuration file, hbase-default.xml,
as source.
hbase.tmp.dir
Description
Temporary directory on the local filesystem. Change this setting to point to a location more
permanent than '/tmp', the usual resolve for java.io.tmpdir, as the '/tmp' directory is cleared on
machine restart.
Default
${java.io.tmpdir}/hbase-${user.name}
hbase.rootdir
Description
The directory shared by region servers and into which HBase persists. The URL should be 'fully-
qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase'
where the HDFS instance’s namenode is running at namenode.example.org on port 9000, set this
value to: hdfs://namenode.example.org:9000/hbase. By default, we write to whatever
${hbase.tmp.dir} is set to (usually /tmp), so change this configuration or else all data will be
lost on machine restart.
Default
${hbase.tmp.dir}/hbase
hbase.cluster.distributed
Description
The mode the cluster will be in. Possible values are false for standalone mode and true for
distributed mode. If false, startup will run all HBase and ZooKeeper daemons together in the one
JVM.
Default
false
hbase.zookeeper.quorum
Description
Comma separated list of servers in the ZooKeeper ensemble (This config. should have been
named hbase.zookeeper.ensemble). For example,
"host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to
localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this
should be set to a full list of ZooKeeper ensemble servers. If HBASE_MANAGES_ZK is set in
hbase-env.sh this is the list of servers which hbase will start/stop ZooKeeper on as part of cluster
start/stop. Client-side, we will take this list of ensemble members and put it together with the
hbase.zookeeper.property.clientPort config. and pass it into zookeeper constructor as the
connectString parameter.
Default
localhost
zookeeper.recovery.retry.maxsleeptime
Description
Maximum sleep time in milliseconds before retrying ZooKeeper operations. A maximum is needed
here so that the sleep time does not grow unboundedly.
Default
60000
hbase.local.dir
Description
Directory on the local filesystem to be used as a local storage.
Default
${hbase.tmp.dir}/local/
hbase.master.port
Description
The port the HBase Master should bind to.
Default
16000
hbase.master.info.port
Description
The port for the HBase Master web UI. Set to -1 if you do not want a UI instance run.
Default
16010
hbase.master.info.bindAddress
Description
The bind address for the HBase Master web UI
Default
0.0.0.0
hbase.master.logcleaner.plugins
Description
A comma-separated list of BaseLogCleanerDelegate invoked by the LogsCleaner service. These
WAL cleaners are called in order, so put the cleaner that prunes the most files in front. To
implement your own BaseLogCleanerDelegate, just put it in HBase’s classpath and add the fully
qualified class name here. Always add the above default log cleaners in the list.
Default
org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
hbase.master.logcleaner.ttl
Description
How long a WAL remains in the archive ({hbase.rootdir}/oldWALs) directory, after which it will
be cleaned by a Master thread. The value is in milliseconds.
Default
600000
hbase.master.procedurewalcleaner.ttl
Description
How long a Procedure WAL will remain in the archive directory, after which it will be cleaned
by a Master thread. The value is in milliseconds.
Default
604800000
hbase.master.hfilecleaner.plugins
Description
A comma-separated list of BaseHFileCleanerDelegate invoked by the HFileCleaner service. These
HFiles cleaners are called in order, so put the cleaner that prunes the most files in front. To
implement your own BaseHFileCleanerDelegate, just put it in HBase’s classpath and add the fully
qualified class name here. Always add the above default log cleaners in the list as they will be
overwritten in hbase-site.xml.
Default
org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
hbase.master.infoserver.redirect
Description
Whether or not the Master listens to the Master web UI port (hbase.master.info.port) and
redirects requests to the web UI server shared by the Master and RegionServer. This
configuration only makes sense when the Master is serving Regions (not the default).
Default
true
hbase.master.fileSplitTimeout
Description
When splitting a region, how long to wait on the file-splitting step before aborting the attempt.
This setting was known as hbase.regionserver.fileSplitTimeout in hbase-1.x; the split is now run
master-side, hence the rename. (If a 'hbase.regionserver.fileSplitTimeout' setting is found, it
will be used to prime the current 'hbase.master.fileSplitTimeout' configuration.)
Default
600000
hbase.regionserver.port
Description
The port the HBase RegionServer binds to.
Default
16020
hbase.regionserver.info.port
Description
The port for the HBase RegionServer web UI. Set to -1 if you do not want the RegionServer UI to
run.
Default
16030
hbase.regionserver.info.bindAddress
Description
The address for the HBase RegionServer web UI
Default
0.0.0.0
hbase.regionserver.info.port.auto
Description
Whether or not the Master or RegionServer UI should search for a port to bind to. Enables
automatic port search if hbase.regionserver.info.port is already in use. Useful for testing, turned
off by default.
Default
false
hbase.regionserver.handler.count
Description
Count of RPC Listener instances spun up on RegionServers. The Master uses the same property for
the count of master handlers. Too many handlers can be counter-productive. Make it a multiple
of the CPU count. If mostly read-only, a handler count close to the CPU count does well. Start
with twice the CPU count and tune from there.
Default
30
hbase.ipc.server.callqueue.handler.factor
Description
Factor to determine the number of call queues. A value of 0 means a single queue shared
between all the handlers. A value of 1 means that each handler has its own queue.
Default
0.1
hbase.ipc.server.callqueue.read.ratio
Description
Split the call queues into read and write queues. The specified interval (which should be
between 0.0 and 1.0) will be multiplied by the number of call queues. A value of 0 indicates to
not split the call queues, meaning that both read and write requests will be pushed to the same
set of queues. A value lower than 0.5 means that there will be fewer read queues than write
queues. A value of 0.5 means there will be the same number of read and write queues. A value
greater than 0.5 means that there will be more read queues than write queues. A value of 1.0
means that all the queues except one are used to dispatch read requests. Example: given a total
of 10 call queues, a read.ratio of 0 means that the 10 queues will contain both read and write
requests; a read.ratio of 0.3 means that 3 queues will contain only read requests and 7 queues
will contain only write requests; a read.ratio of 0.5 means that 5 queues will contain only read
requests and 5 queues will contain only write requests; a read.ratio of 0.8 means that 8 queues
will contain only read requests and 2 queues will contain only write requests; a read.ratio of 1
means that 9 queues will contain only read requests and 1 queue will contain only write
requests.
Default
0
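As a minimal sketch, the 10-queue example above could be reached with a handler count of 100 and the default handler factor of 0.1 (values are illustrative, not recommendations):

```xml
<!-- hbase-site.xml fragment (illustrative): 100 handlers * 0.1 = 10 call queues;
     a read.ratio of 0.3 then yields 3 read queues and 7 write queues, per the
     example in the description above. -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>100</value>
</property>
<property>
  <name>hbase.ipc.server.callqueue.handler.factor</name>
  <value>0.1</value>
</property>
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <value>0.3</value>
</property>
```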
hbase.ipc.server.callqueue.scan.ratio
Description
Given the number of read call queues, calculated from the total number of call queues
multiplied by the callqueue.read.ratio, the scan.ratio property will split the read call queues
into small-read and long-read queues. A value lower than 0.5 means that there will be fewer
long-read queues than short-read queues. A value of 0.5 means that there will be the same
number of short-read and long-read queues. A value greater than 0.5 means that there will be
more long-read queues than short-read queues. A value of 0 or 1 indicates to use the same set of
queues for gets and scans. Example: given a total of 8 read call queues, a scan.ratio of 0 or 1
means that the 8 queues will contain both long and short read requests; a scan.ratio of 0.3
means that 2 queues will contain only long-read requests and 6 queues will contain only
short-read requests; a scan.ratio of 0.5 means that 4 queues will contain only long-read requests
and 4 queues will contain only short-read requests; a scan.ratio of 0.8 means that 6 queues will
contain only long-read requests and 2 queues will contain only short-read requests.
Default
0
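Building on the example in the description, a hedged hbase-site.xml fragment separating long scans from short reads might look like this (the ratio values are illustrative only):

```xml
<!-- hbase-site.xml fragment (illustrative): with 8 read call queues, a
     scan.ratio of 0.5 gives 4 long-read queues and 4 short-read queues,
     per the example in the description above. -->
<property>
  <name>hbase.ipc.server.callqueue.scan.ratio</name>
  <value>0.5</value>
</property>
```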
hbase.regionserver.msginterval
Description
Interval between messages from the RegionServer to Master in milliseconds.
Default
3000
hbase.regionserver.logroll.period
Description
Period at which we will roll the commit log regardless of how many edits it has.
Default
3600000
hbase.regionserver.logroll.errors.tolerated
Description
The number of consecutive WAL close errors we will allow before triggering a server abort. A
setting of 0 will cause the region server to abort if closing the current WAL writer fails during
log rolling. Even a small value (2 or 3) will allow a region server to ride over transient HDFS
errors.
Default
2
hbase.regionserver.hlog.reader.impl
Description
The WAL file reader implementation.
Default
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader
hbase.regionserver.hlog.writer.impl
Description
The WAL file writer implementation.
Default
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter
hbase.regionserver.global.memstore.size
Description
Maximum size of all memstores in a region server before new updates are blocked and flushes
are forced. Defaults to 40% of heap (0.4). Updates are blocked and flushes are forced until size of
all memstores in a region server hits hbase.regionserver.global.memstore.size.lower.limit. The
default value in this configuration has been intentionally left empty in order to honor the old
hbase.regionserver.global.memstore.upperLimit property if present.
Default
none
hbase.regionserver.global.memstore.size.lower.limit
Description
Maximum size of all memstores in a region server before flushes are forced. Defaults to 95% of
hbase.regionserver.global.memstore.size (0.95). A value of 100% causes the minimum possible
flushing to occur when updates are blocked due to memstore limiting. The default value in this
configuration has been intentionally left empty in order to honor the old
hbase.regionserver.global.memstore.lowerLimit property if present.
Default
none
hbase.systemtables.compacting.memstore.type
Description
Determines the type of memstore to be used for system tables like META, namespace tables, etc.
By default the type is NONE, and hence the default memstore is used for all system tables. To use
a compacting memstore for system tables, set this property to BASIC or EAGER.
Default
NONE
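For example, enabling a BASIC compacting memstore for system tables is a one-property change in hbase-site.xml:

```xml
<!-- hbase-site.xml fragment: use a compacting memstore of type BASIC
     (EAGER is the other option) for system tables. -->
<property>
  <name>hbase.systemtables.compacting.memstore.type</name>
  <value>BASIC</value>
</property>
```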
hbase.regionserver.optionalcacheflushinterval
Description
Maximum amount of time an edit lives in memory before being automatically flushed. Default 1
hour. Set it to 0 to disable automatic flushing.
Default
3600000
hbase.regionserver.dns.interface
Description
The name of the Network Interface from which a region server should report its IP address.
Default
default
hbase.regionserver.dns.nameserver
Description
The host name or IP address of the name server (DNS) which a region server should use to
determine the host name used by the master for communication and display purposes.
Default
default
hbase.regionserver.region.split.policy
Description
A split policy determines when a region should be split. The split policies currently available
are BusyRegionSplitPolicy, ConstantSizeRegionSplitPolicy, DisabledRegionSplitPolicy,
DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy, and SteppingSplitPolicy.
DisabledRegionSplitPolicy blocks manual region splitting.
Default
org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy
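As a sketch, selecting one of the policies listed above cluster-wide is done by fully qualified class name; here ConstantSizeRegionSplitPolicy, which splits purely on region size:

```xml
<!-- hbase-site.xml fragment (illustrative): split regions based only on
     hbase.hregion.max.filesize rather than the default stepping behavior. -->
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
</property>
```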
hbase.regionserver.regionSplitLimit
Description
Limit for the number of regions after which no more region splitting should take place. This is
not a hard limit for the number of regions but acts as a guideline for the regionserver to stop
splitting after a certain limit.
Default
1000
zookeeper.session.timeout
Description
ZooKeeper session timeout in milliseconds. It is used in two different ways. First, this value is
used in the ZK client that HBase uses to connect to the ensemble. It is also used by HBase when it
starts a ZK server and it is passed as the 'maxSessionTimeout'. See http://hadoop.apache.org/
zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions. For example, if an HBase
region server connects to a ZK ensemble that’s also managed by HBase, then the session timeout
will be the one specified by this configuration. But a region server that connects to an ensemble
managed with a different configuration will be subject to that ensemble’s maxSessionTimeout.
So, even though HBase might propose using 90 seconds, the ensemble can have a max timeout
lower than this and it will take precedence. The current default that ZK ships with is 40 seconds,
which is lower than HBase’s.
Default
90000
zookeeper.znode.parent
Description
Root ZNode for HBase in ZooKeeper. All of HBase’s ZooKeeper files that are configured with a
relative path will go under this node. By default, all of HBase’s ZooKeeper file paths are
configured with a relative path, so they will all go under this directory unless changed.
Default
/hbase
zookeeper.znode.acl.parent
Description
Root ZNode for access control lists.
Default
acl
hbase.zookeeper.dns.interface
Description
The name of the Network Interface from which a ZooKeeper server should report its IP address.
Default
default
hbase.zookeeper.dns.nameserver
Description
The host name or IP address of the name server (DNS) which a ZooKeeper server should use to
determine the host name used by the master for communication and display purposes.
Default
default
hbase.zookeeper.peerport
Description
Port used by ZooKeeper peers to talk to each other. See http://hadoop.apache.org/zookeeper/docs/
r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper for more information.
Default
2888
hbase.zookeeper.leaderport
Description
Port used by ZooKeeper for leader election. See http://hadoop.apache.org/zookeeper/docs/r3.1.1/
zookeeperStarted.html#sc_RunningReplicatedZooKeeper for more information.
Default
3888
hbase.zookeeper.property.initLimit
Description
Property from ZooKeeper’s config zoo.cfg. The number of ticks that the initial synchronization
phase can take.
Default
10
hbase.zookeeper.property.syncLimit
Description
Property from ZooKeeper’s config zoo.cfg. The number of ticks that can pass between sending a
request and getting an acknowledgment.
Default
5
hbase.zookeeper.property.dataDir
Description
Property from ZooKeeper’s config zoo.cfg. The directory where the snapshot is stored.
Default
${hbase.tmp.dir}/zookeeper
hbase.zookeeper.property.clientPort
Description
Property from ZooKeeper’s config zoo.cfg. The port at which the clients will connect.
Default
2181
hbase.zookeeper.property.maxClientCnxns
Description
Property from ZooKeeper’s config zoo.cfg. Limit on number of concurrent connections (at the
socket level) that a single client, identified by IP address, may make to a single member of the
ZooKeeper ensemble. Set high to avoid zk connection issues running standalone and pseudo-
distributed.
Default
300
hbase.client.write.buffer
Description
Default size of the BufferedMutator write buffer in bytes. A bigger buffer takes more
memory — on both the client and server side since server instantiates the passed write buffer to
process it — but a larger buffer size reduces the number of RPCs made. For an estimate of
server-side memory-used, evaluate hbase.client.write.buffer * hbase.regionserver.handler.count
Default
2097152
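The server-side estimate in the description can be made concrete with a worked example; the 8 MB value below is illustrative only:

```xml
<!-- hbase-site.xml fragment (illustrative): an 8 MB write buffer. Per the
     estimate above, server-side memory used is roughly
     hbase.client.write.buffer * hbase.regionserver.handler.count,
     e.g. 8388608 * 30 (default handler count) = ~240 MB. -->
<property>
  <name>hbase.client.write.buffer</name>
  <value>8388608</value>
</property>
```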
hbase.client.pause
Description
General client pause value. Used mostly as value to wait before running a retry of a failed get,
region lookup, etc. See hbase.client.retries.number for description of how we backoff from this
initial pause amount and how this pause works w/ retries.
Default
100
hbase.client.pause.cqtbe
Description
Whether or not to use a special client pause for CallQueueTooBigException (cqtbe). Set this
property to a higher value than hbase.client.pause if you observe frequent CQTBE from the same
RegionServer and the call queue there stays full.
Default
none
hbase.client.retries.number
Description
Maximum retries. Used as maximum for all retryable operations such as the getting of a cell’s
value, starting a row update, etc. Retry interval is a rough function based on hbase.client.pause.
At first we retry at this interval but then with backoff, we pretty quickly reach retrying every ten
seconds. See HConstants#RETRY_BACKOFF for how the backoff ramps up. Change this setting
and hbase.client.pause to suit your workload.
Default
15
hbase.client.max.total.tasks
Description
The maximum number of concurrent mutation tasks a single HTable instance will send to the
cluster.
Default
100
hbase.client.max.perserver.tasks
Description
The maximum number of concurrent mutation tasks a single HTable instance will send to a
single region server.
Default
2
hbase.client.max.perregion.tasks
Description
The maximum number of concurrent mutation tasks the client will maintain to a single Region.
That is, if hbase.client.max.perregion.tasks writes are already in progress for this region, new
puts won’t be sent to this region until some writes finish.
Default
1
hbase.client.perserver.requests.threshold
Description
The max number of concurrent pending requests for one server in all client threads (process
level). Requests exceeding this limit will be met with a ServerTooBusyException immediately, to
prevent the user’s threads from being occupied and blocked by a single slow region server. If
you use a fixed number of threads to access HBase in a synchronous way, setting this to a
suitable value related to the number of threads will help you. See https://issues.apache.org/jira/
browse/HBASE-16388 for details.
Default
2147483647
hbase.client.scanner.caching
Description
Number of rows that we try to fetch when calling next on a scanner if it is not served from
(local, client) memory. This configuration works together with
hbase.client.scanner.max.result.size to try and use the network efficiently. The default value is
Integer.MAX_VALUE, so that the network will fill the chunk size defined by
hbase.client.scanner.max.result.size rather than be limited by a particular number of rows,
since the size of rows varies table to table. If you know ahead of time that you will not require
more than a certain number of rows from a scan, this configuration should be set to that row
limit via Scan#setCaching. Higher caching values will enable faster scanners but will eat up
more memory, and some calls of next may take longer and longer when the cache is empty. Do
not set this value such that the time between invocations is greater than the scanner timeout;
i.e. hbase.client.scanner.timeout.period
Default
2147483647
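When a workload always scans a bounded number of rows, the cap can also be set cluster-wide rather than per-Scan; the value 500 below is purely an example:

```xml
<!-- hbase-site.xml fragment (illustrative): fetch at most 500 rows per
     scanner next() call, instead of filling up to
     hbase.client.scanner.max.result.size. -->
<property>
  <name>hbase.client.scanner.caching</name>
  <value>500</value>
</property>
```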
hbase.client.keyvalue.maxsize
Description
Specifies the combined maximum allowed size of a KeyValue instance. This sets an upper
boundary for a single entry saved in a storage file. Since entries cannot be split, this helps avoid
a region becoming unsplittable because its data is too large. It seems wise to set this to a
fraction of the maximum region size. Setting it to zero or less disables the check.
Default
10485760
hbase.server.keyvalue.maxsize
Description
Maximum allowed size of an individual cell, inclusive of value and all key components. A value
of 0 or less disables the check. The default value is 10MB. This is a safety setting to protect the
server from OOM situations.
Default
10485760
hbase.client.scanner.timeout.period
Description
Client scanner lease period in milliseconds.
Default
60000
hbase.client.localityCheck.threadPoolSize
Default
2
hbase.bulkload.retries.number
Description
Maximum retries. This is the maximum number of iterations that atomic bulk loads are
attempted in the face of splitting operations. 0 means never give up.
Default
10
hbase.master.balancer.maxRitPercent
Description
The max percent of regions in transition when balancing. The default value is 1.0, which means
no balancer throttling. If this config is set to 0.01, at most 1% of regions may be in transition
when balancing, so the cluster’s availability is at least 99% while balancing.
Default
1.0
hbase.balancer.period
Description
Period at which the region balancer runs in the Master.
Default
300000
hbase.normalizer.period
Description
Period at which the region normalizer runs in the Master.
Default
300000
hbase.regions.slop
Description
Rebalance if any regionserver has average + (average * slop) regions. The default value of this
parameter is 0.001 in StochasticLoadBalancer (the default load balancer), while the default is 0.2
in other load balancers (i.e., SimpleLoadBalancer).
Default
0.001
hbase.server.thread.wakefrequency
Description
Time to sleep in between searches for work (in milliseconds). Used as sleep interval by service
threads such as log roller.
Default
10000
hbase.server.versionfile.writeattempts
Description
How many times to retry attempting to write a version file before just aborting. Each attempt is
separated by the hbase.server.thread.wakefrequency milliseconds.
Default
3
hbase.hregion.memstore.flush.size
Description
Memstore will be flushed to disk if size of the memstore exceeds this number of bytes. Value is
checked by a thread that runs every hbase.server.thread.wakefrequency.
Default
134217728
hbase.hregion.percolumnfamilyflush.size.lower.bound.min
Description
If FlushLargeStoresPolicy is used and there are multiple column families, then every time that
we hit the total memstore limit, we find out all the column families whose memstores exceed a
"lower bound" and only flush them while retaining the others in memory. The "lower bound"
will be "hbase.hregion.memstore.flush.size / column_family_number" by default unless value of
this property is larger than that. If none of the families have their memstore size more than
lower bound, all the memstores will be flushed (just as usual).
Default
16777216
hbase.hregion.preclose.flush.size
Description
If the memstores in a region are this size or larger when we go to close, run a "pre-flush" to clear
out memstores before we put up the region closed flag and take the region offline. On close, a
flush is run under the close flag to empty memory. During this time the region is offline and we
are not taking on any writes. If the memstore content is large, this flush could take a long time to
complete. The preflush is meant to clean out the bulk of the memstore before putting up the
close flag and taking the region offline so the flush that runs under the close flag has little to do.
Default
5242880
hbase.hregion.memstore.block.multiplier
Description
Block updates if memstore has hbase.hregion.memstore.block.multiplier times
hbase.hregion.memstore.flush.size bytes. Useful preventing runaway memstore during spikes in
update traffic. Without an upper-bound, memstore fills such that when it flushes the resultant
flush files take a long time to compact or split, or worse, we OOME.
Default
4
hbase.hregion.memstore.mslab.enabled
Description
Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap
fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC
pauses on large heaps.
Default
true
hbase.hregion.max.filesize
Description
Maximum HFile size. If the sum of the sizes of a region’s HFiles has grown to exceed this value,
the region is split in two.
Default
10737418240
hbase.hregion.majorcompaction
Description
Time between major compactions, expressed in milliseconds. Set to 0 to disable time-based
automatic major compactions. User-requested and size-based major compactions will still run.
This value is multiplied by hbase.hregion.majorcompaction.jitter to cause compaction to start at
a somewhat-random time during a given window of time. The default value is 7 days, expressed
in milliseconds. If major compactions are causing disruption in your environment, you can
configure them to run at off-peak times for your deployment, or disable time-based major
compactions by setting this parameter to 0, and run major compactions in a cron job or by
another external mechanism.
Default
604800000
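Per the description, disabling time-based major compactions so they can be driven externally is a single setting:

```xml
<!-- hbase-site.xml fragment: disable time-based major compactions; trigger
     them instead from a cron job or another external mechanism, as described
     above. Size-based and user-requested major compactions still run. -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```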
hbase.hregion.majorcompaction.jitter
Description
A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occur a given
amount of time either side of hbase.hregion.majorcompaction. The smaller the number, the
closer the compactions will happen to the hbase.hregion.majorcompaction interval.
Default
0.50
hbase.hstore.compactionThreshold
Description
If more than this number of StoreFiles exist in any one Store (one StoreFile is written per flush
of MemStore), a compaction is run to rewrite all StoreFiles into a single StoreFile. Larger values
delay compaction, but when compaction does occur, it takes longer to complete.
Default
3
hbase.hstore.flusher.count
Description
The number of flush threads. With fewer threads, the MemStore flushes will be queued. With
more threads, the flushes will be executed in parallel, increasing the load on HDFS, and
potentially causing more compactions.
Default
2
hbase.hstore.blockingStoreFiles
Description
If more than this number of StoreFiles exist in any one Store (one StoreFile is written per flush
of MemStore), updates are blocked for this region until a compaction is completed, or until
hbase.hstore.blockingWaitTime has been exceeded.
Default
16
hbase.hstore.blockingWaitTime
Description
The time for which a region will block updates after reaching the StoreFile limit defined by
hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop blocking
updates even if a compaction has not been completed.
Default
90000
hbase.hstore.compaction.min
Description
The minimum number of StoreFiles which must be eligible for compaction before compaction
can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with too many
tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction each time you
have two StoreFiles in a Store, and this is probably not appropriate. If you set this value too high,
all the other values will need to be adjusted accordingly. For most cases, the default value is
appropriate. In previous versions of HBase, the parameter hbase.hstore.compaction.min was
named hbase.hstore.compactionThreshold.
Default
3
hbase.hstore.compaction.max
Description
The maximum number of StoreFiles which will be selected for a single minor compaction,
regardless of the number of eligible StoreFiles. Effectively, the value of
hbase.hstore.compaction.max controls the length of time it takes a single compaction to
complete. Setting it larger means that more StoreFiles are included in a compaction. For most
cases, the default value is appropriate.
Default
10
hbase.hstore.compaction.min.size
Description
A StoreFile (or a selection of StoreFiles, when using ExploringCompactionPolicy) smaller than
this size will always be eligible for minor compaction. HFiles this size or larger are evaluated by
hbase.hstore.compaction.ratio to determine if they are eligible. Because this limit represents the
"automatic include" limit for all StoreFiles smaller than this value, this value may need to be
reduced in write-heavy environments where many StoreFiles in the 1-2 MB range are being
flushed, because every StoreFile will be targeted for compaction and the resulting StoreFiles
may still be under the minimum size and require further compaction. If this parameter is
lowered, the ratio check is triggered more quickly. This addressed some issues seen in earlier
versions of HBase but changing this parameter is no longer necessary in most situations.
Default: 128 MB expressed in bytes.
Default
134217728
hbase.hstore.compaction.max.size
Description
A StoreFile (or a selection of StoreFiles, when using ExploringCompactionPolicy) larger than this
size will be excluded from compaction. The effect of raising hbase.hstore.compaction.max.size is
fewer, larger StoreFiles that do not get compacted often. If you feel that compaction is
happening too often without much benefit, you can try raising this value. Default: the value of
LONG.MAX_VALUE, expressed in bytes.
Default
9223372036854775807
hbase.hstore.compaction.ratio
Description
For minor compaction, this ratio is used to determine whether a given StoreFile which is larger
than hbase.hstore.compaction.min.size is eligible for compaction. Its effect is to limit compaction
of large StoreFiles. The value of hbase.hstore.compaction.ratio is expressed as a floating-point
decimal. A large ratio, such as 10, will produce a single giant StoreFile. Conversely, a low value,
such as .25, will produce behavior similar to the BigTable compaction algorithm, producing four
StoreFiles. A moderate value of between 1.0 and 1.4 is recommended. When tuning this value,
you are balancing write costs with read costs. Raising the value (to something like 1.4) will have
more write costs, because you will compact larger StoreFiles. However, during reads, HBase will
need to seek through fewer StoreFiles to accomplish the read. Consider this approach if you
cannot take advantage of Bloom filters. Otherwise, you can lower this value to something like 1.0
to reduce the background cost of writes, and use Bloom filters to control the number of
StoreFiles touched during reads. For most cases, the default value is appropriate.
Default
1.2F
hbase.hstore.compaction.ratio.offpeak
Description
Allows you to set a different (by default, more aggressive) ratio for determining whether larger
StoreFiles are included in compactions during off-peak hours. Works in the same way as
hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
hbase.offpeak.end.hour are also enabled.
Default
5.0F
hbase.hstore.time.to.purge.deletes
Description
The amount of time to delay purging of delete markers with future timestamps. If unset, or set to
0, all delete markers, including those with future timestamps, are purged during the next major
compaction. Otherwise, a delete marker is kept until the major compaction which occurs after
the marker’s timestamp plus the value of this setting, in milliseconds.
Default
0
hbase.offpeak.start.hour
Description
The start of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set to -1 to
disable off-peak.
Default
-1
hbase.offpeak.end.hour
Description
The end of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set to -1 to
disable off-peak.
Default
-1
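Tying the two off-peak properties together with the off-peak compaction ratio described earlier, a sketch of an off-peak window from midnight to 6 AM (hours chosen purely for illustration) would be:

```xml
<!-- hbase-site.xml fragment (illustrative): treat 00:00-06:00 as off-peak
     and allow a more aggressive compaction ratio during that window. -->
<property>
  <name>hbase.offpeak.start.hour</name>
  <value>0</value>
</property>
<property>
  <name>hbase.offpeak.end.hour</name>
  <value>6</value>
</property>
<property>
  <name>hbase.hstore.compaction.ratio.offpeak</name>
  <value>5.0</value>
</property>
```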
hbase.regionserver.thread.compaction.throttle
Description
There are two different thread pools for compactions, one for large compactions and the other
for small compactions. This helps to keep compaction of lean tables (such as hbase:meta) fast. If
a compaction is larger than this threshold, it goes into the large compaction pool. In most cases,
the default value is appropriate. Default: 2 x hbase.hstore.compaction.max x
hbase.hregion.memstore.flush.size (which defaults to 128MB). The value field assumes that the
value of hbase.hregion.memstore.flush.size is unchanged from the default.
Default
2684354560
hbase.regionserver.majorcompaction.pagecache.drop
Description
Specifies whether to drop pages read/written into the system page cache by major compactions.
Setting it to true helps prevent major compactions from polluting the page cache, which is
almost always required, especially for clusters with low/moderate memory to storage ratio.
Default
true
hbase.regionserver.minorcompaction.pagecache.drop
Description
Specifies whether to drop pages read/written into the system page cache by minor compactions.
Setting it to true helps prevent minor compactions from polluting the page cache, which is most
beneficial on clusters with low memory to storage ratio or very write heavy clusters. You may
want to set it to false under moderate to low write workload when bulk of the reads are on the
most recently written data.
Default
true
hbase.hstore.compaction.kv.max
Description
The maximum number of KeyValues to read and then write in a batch when flushing or
compacting. Set this lower if you have big KeyValues and problems with OutOfMemory
exceptions. Set this higher if you have wide, small rows.
Default
10
hbase.storescanner.parallel.seek.enable
Description
Enables StoreFileScanner parallel-seeking in StoreScanner, a feature which can reduce response
latency under special conditions.
Default
false
hbase.storescanner.parallel.seek.threads
Description
The default thread pool size if the parallel-seeking feature is enabled.
Default
10
hfile.block.cache.size
Description
Percentage of maximum heap (-Xmx setting) to allocate to block cache used by a StoreFile.
Default of 0.4 means allocate 40%. Set to 0 to disable but it’s not recommended; you need at least
enough cache to hold the storefile indices.
Default
0.4
hfile.block.index.cacheonwrite
Description
Enables putting non-root multi-level index blocks into the block cache at the time the index is
being written.
Default
false
hfile.index.block.max.size
Description
When the size of a leaf-level, intermediate-level, or root-level index block in a multi-level block
index grows to this size, the block is written out and a new block is started.
Default
131072
hbase.bucketcache.ioengine
Description
Where to store the contents of the bucketcache. One of: offheap, file, files or mmap. If a file or
files, set it to file(s):PATH_TO_FILE. mmap means the content will be in an mmaped file. Use
mmap:PATH_TO_FILE. See http://hbase.apache.org/book.html#offheap.blockcache for more
information.
Default
none
hbase.bucketcache.size
Description
A float that EITHER represents a percentage of total heap memory size to give to the cache (if <
1.0) OR, it is the total capacity in megabytes of BucketCache. Default: 0.0
Default
none
hbase.bucketcache.bucket.sizes
Description
A comma-separated list of sizes for buckets for the bucketcache. Can be multiple sizes. List block
sizes in order from smallest to largest. The sizes you use will depend on your data access
patterns. Must be a multiple of 256 else you will run into 'java.io.IOException: Invalid HFile
block magic' when you go to read from cache. If you specify no values here, then you pick up the
default bucketsizes set in code (See BucketAllocator#DEFAULT_BUCKET_SIZES).
Default
none
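Combining the three bucketcache properties above, a hedged sketch of a 4 GB off-heap BucketCache might look like this (the size and bucket list are illustrative; each bucket size is a multiple of 256, listed smallest to largest, per the description):

```xml
<!-- hbase-site.xml fragment (illustrative): 4096 MB off-heap BucketCache
     with explicit bucket sizes. Omit bucket.sizes to use the code defaults
     (BucketAllocator#DEFAULT_BUCKET_SIZES). -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>4096</value>
</property>
<property>
  <name>hbase.bucketcache.bucket.sizes</name>
  <value>5120,9216,17408,33792,66560</value>
</property>
```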
hfile.format.version
Description
The HFile format version to use for new files. Version 3 adds support for tags in hfiles (See
http://hbase.apache.org/book.html#hbase.tags). Also see the configuration
'hbase.replication.rpc.codec'.
Default
3
hfile.block.bloom.cacheonwrite
Description
Enables cache-on-write for inline blocks of a compound Bloom filter.
Default
false
io.storefile.bloom.block.size
Description
The size in bytes of a single block ("chunk") of a compound Bloom filter. This size is
approximate, because Bloom blocks can only be inserted at data block boundaries, and the
number of keys per data block varies.
Default
131072
hbase.rs.cacheblocksonwrite
Description
Whether an HFile block should be added to the block cache when the block is finished.
Default
false
hbase.rpc.timeout
Description
This is for the RPC layer to define how long (in milliseconds) HBase client applications take for
a remote call to time out. It uses pings to check connections but will eventually throw a
TimeoutException.
Default
60000
hbase.client.operation.timeout
Description
Operation timeout is a top-level restriction (in milliseconds) that ensures a blocking operation
in Table will not be blocked longer than this. For each operation, if the RPC request fails
because of a timeout or another reason, it will retry until it succeeds or throws
RetriesExhaustedException. But if the total blocking time reaches the operation timeout before
the retries are exhausted, it will break early and throw SocketTimeoutException.
Default
1200000
hbase.cells.scanned.per.heartbeat.check
Description
The number of cells scanned in between heartbeat checks. Heartbeat checks occur during the
processing of scans to determine whether or not the server should stop scanning in order to
send back a heartbeat message to the client. Heartbeat messages are used to keep the client-
server connection alive during long running scans. Small values mean that the heartbeat checks
will occur more often and thus will provide a tighter bound on the execution time of the scan.
Larger values mean that the heartbeat checks occur less frequently.
Default
10000
hbase.rpc.shortoperation.timeout
Description
This is another version of "hbase.rpc.timeout". For RPC operations within the cluster, we rely on
this configuration to set a short timeout for operations expected to complete quickly. For example,
a short RPC timeout for a region server's attempt to report to the active master can enable a
quicker master failover.
Default
10000
hbase.ipc.client.tcpnodelay
Description
Set no delay on rpc socket connections. See http://docs.oracle.com/javase/1.5.0/docs/api/java/net/
Socket.html#getTcpNoDelay()
Default
true
hbase.regionserver.hostname
Description
This config is for experts: don’t set its value unless you really know what you are doing. When
set to a non-empty value, this represents the (external facing) hostname for the underlying
server. See https://issues.apache.org/jira/browse/HBASE-12954 for details.
Default
none
hbase.regionserver.hostname.disable.master.reversedns
Description
This config is for experts: don’t set its value unless you really know what you are doing. When
set to true, regionserver will use the current node hostname for the servername and HMaster
will skip reverse DNS lookup and use the hostname sent by regionserver instead. Note that this
config and hbase.regionserver.hostname are mutually exclusive. See https://issues.apache.org/
jira/browse/HBASE-18226 for more details.
Default
false
hbase.master.keytab.file
Description
Full path to the kerberos keytab file to use for logging in the configured HMaster server
principal.
Default
none
hbase.master.kerberos.principal
Description
Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the
HMaster process. The principal name should be in the form: user/hostname@DOMAIN. If
"_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the
running instance.
Default
none
hbase.regionserver.keytab.file
Description
Full path to the kerberos keytab file to use for logging in the configured HRegionServer server
principal.
Default
none
hbase.regionserver.kerberos.principal
Description
Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the
HRegionServer process. The principal name should be in the form: user/hostname@DOMAIN. If
"_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the
running instance. An entry for this principal must exist in the file specified in
hbase.regionserver.keytab.file
Default
none
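Taken together, the keytab and principal properties above are usually set as a group in hbase-site.xml on a secured cluster. The following is an illustrative sketch only; the keytab path and the EXAMPLE.COM realm are assumptions you must adjust for your site:

```xml
<!-- Illustrative Kerberos login settings; adjust keytab paths and realm to your site. -->
<property>
  <name>hbase.master.keytab.file</name>
  <value>/etc/security/keytabs/hbase.service.keytab</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/etc/security/keytabs/hbase.service.keytab</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
```

The _HOST token is substituted with each node's actual hostname, which lets the same file be deployed cluster-wide.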
hadoop.policy.file
Description
The policy configuration file used by RPC servers to make authorization decisions on client
requests. Only used when HBase security is enabled.
Default
hbase-policy.xml
hbase.superuser
Description
List of users or groups (comma-separated), who are allowed full privileges, regardless of stored
ACLs, across the cluster. Only used when HBase security is enabled.
Default
none
hbase.auth.key.update.interval
Description
The update interval for master key for authentication tokens in servers in milliseconds. Only
used when HBase security is enabled.
Default
86400000
hbase.auth.token.max.lifetime
Description
The maximum lifetime in milliseconds after which an authentication token expires. Only used
when HBase security is enabled.
Default
604800000
hbase.ipc.client.fallback-to-simple-auth-allowed
Description
When a client is configured to attempt a secure connection, but attempts to connect to an
insecure server, that server may instruct the client to switch to SASL SIMPLE (unsecure)
authentication. This setting controls whether or not the client will accept this instruction from
the server. When false (the default), the client will not allow the fallback to SIMPLE
authentication, and will abort the connection.
Default
false
hbase.ipc.server.fallback-to-simple-auth-allowed
Description
When a server is configured to require secure connections, it will reject connection attempts
from clients using SASL SIMPLE (unsecure) authentication. This setting allows secure servers to
accept SASL SIMPLE connections from clients when the client requests. When false (the default),
the server will not allow the fallback to SIMPLE authentication, and will reject the connection.
WARNING: This setting should ONLY be used as a temporary measure while converting clients
over to secure authentication. It MUST BE DISABLED for secure operation.
Default
false
hbase.display.keys
Description
When this is set to true the webUI and such will display all start/end keys as part of the table
details, region names, etc. When this is set to false, the keys are hidden.
Default
true
hbase.coprocessor.enabled
Description
Enables or disables coprocessor loading. If 'false' (disabled), any other coprocessor related
configuration will be ignored.
Default
true
hbase.coprocessor.user.enabled
Description
Enables or disables user (aka. table) coprocessor loading. If 'false' (disabled), any table
coprocessor attributes in table descriptors will be ignored. If "hbase.coprocessor.enabled" is
'false' this setting has no effect.
Default
true
hbase.coprocessor.region.classes
Description
A comma-separated list of Coprocessors that are loaded by default on all tables. For any overridden
coprocessor method, these classes will be called in order. After implementing your own
Coprocessor, just put it in HBase’s classpath and add the fully qualified class name here. A
coprocessor can also be loaded on demand on a specific table by setting an attribute on its
HTableDescriptor.
Default
none
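As a sketch, loading a system-wide region coprocessor might look like the following in hbase-site.xml. The class name com.example.MyRegionObserver is hypothetical, standing in for your own implementation on the server classpath:

```xml
<!-- Hypothetical example: load a custom region coprocessor on all tables. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.example.MyRegionObserver</value>
</property>
```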
hbase.coprocessor.master.classes
Description
A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors
that are loaded by default on the active HMaster process. For any implemented coprocessor
methods, the listed classes will be called in order. After implementing your own
MasterObserver, just put it in HBase’s classpath and add the fully qualified class name here.
Default
none
hbase.coprocessor.abortonerror
Description
Set to true to cause the hosting server (master or regionserver) to abort if a coprocessor fails to
load, fails to initialize, or throws an unexpected Throwable. Setting this to false allows the
server to continue execution, but the system-wide state of the coprocessor in question becomes
inconsistent, as it will be properly executing in only a subset of servers, so this is most useful
for debugging only.
Default
true
hbase.rest.port
Description
The port for the HBase REST server.
Default
8080
hbase.rest.readonly
Description
Defines the mode the REST server will be started in. Possible values are: false: All HTTP methods
are permitted - GET/PUT/POST/DELETE. true: Only the GET method is permitted.
Default
false
hbase.rest.threads.max
Description
The maximum number of threads of the REST server thread pool. Threads in the pool are reused
to process REST requests. This controls the maximum number of requests processed
concurrently. It may help to control the memory used by the REST server to avoid OOM issues. If
the thread pool is full, incoming requests will be queued up and wait for some free threads.
Default
100
hbase.rest.threads.min
Description
The minimum number of threads of the REST server thread pool. The thread pool always has at
least this number of threads so the REST server is ready to serve incoming requests.
Default
2
hbase.rest.support.proxyuser
Description
Enables running the REST server to support proxy-user mode.
Default
false
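A minimal sketch combining the REST server settings above in hbase-site.xml; the thread-pool values shown are illustrative examples, not recommendations:

```xml
<!-- Illustrative REST server tuning; values are examples only. -->
<property>
  <name>hbase.rest.port</name>
  <value>8080</value>
</property>
<property>
  <!-- true: only GET is permitted; false: GET/PUT/POST/DELETE are permitted -->
  <name>hbase.rest.readonly</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rest.threads.max</name>
  <value>150</value>
</property>
<property>
  <name>hbase.rest.threads.min</name>
  <value>4</value>
</property>
```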
hbase.defaults.for.version.skip
Description
Set to true to skip the 'hbase.defaults.for.version' check. Setting this to true can be useful in
contexts other than the other side of a maven generation; i.e. running in an IDE. You’ll want to
set this boolean to true to avoid seeing the RuntimeException complaint: "hbase-default.xml file
seems to be for an old version of HBase (\${hbase.version}), this version is X.X.X-SNAPSHOT"
Default
false
hbase.table.lock.enable
Description
Set to true to enable locking the table in ZooKeeper for schema change operations. Table locking
from the master prevents concurrent schema modifications from corrupting table state.
Default
true
hbase.table.max.rowsize
Description
Maximum size of a single row in bytes (default is 1 GB) for Get or Scan operations without the
in-row scan flag set. If the row size exceeds this limit, a RowTooBigException is thrown to the client.
Default
1073741824
hbase.thrift.minWorkerThreads
Description
The "core size" of the thread pool. New threads are created on every connection until this many
threads are created.
Default
16
hbase.thrift.maxWorkerThreads
Description
The maximum size of the thread pool. When the pending request queue overflows, new threads
are created until their number reaches this number. After that, the server starts dropping
connections.
Default
1000
hbase.thrift.maxQueuedRequests
Description
The maximum number of pending Thrift connections waiting in the queue. If there are no idle
threads in the pool, the server queues requests. Only when the queue overflows, new threads
are added, up to hbase.thrift.maxQueuedRequests threads.
Default
1000
hbase.regionserver.thrift.framed
Description
Use Thrift TFramedTransport on the server side. This is the recommended transport for thrift
servers and requires a similar setting on the client side. Changing this to false will select the
default transport, vulnerable to DoS when malformed requests are issued due to THRIFT-601.
Default
false
hbase.regionserver.thrift.framed.max_frame_size_in_mb
Description
Default frame size when using framed transport, in MB
Default
2
hbase.regionserver.thrift.compact
Description
Use Thrift TCompactProtocol binary serialization protocol.
Default
false
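The Thrift server options above are often set together. A sketch in hbase-site.xml, assuming you control both server and clients so that a matching transport and protocol can be enabled on each side:

```xml
<!-- Illustrative Thrift server settings; clients must use a matching
     transport (framed) and protocol (compact). -->
<property>
  <name>hbase.regionserver.thrift.framed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.thrift.framed.max_frame_size_in_mb</name>
  <value>2</value>
</property>
<property>
  <name>hbase.regionserver.thrift.compact</name>
  <value>true</value>
</property>
```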
hbase.rootdir.perms
Description
FS permissions for the root data subdirectory in a secure (Kerberos) setup. When the master starts,
it creates the rootdir with these permissions, or sets the permissions if they do not match.
Default
700
hbase.wal.dir.perms
Description
FS permissions for the root WAL directory in a secure (Kerberos) setup. When the master starts, it
creates the WAL dir with these permissions, or sets the permissions if they do not match.
Default
700
hbase.data.umask.enable
Description
If set to true, file permissions are assigned to the files written by the regionserver.
Default
false
hbase.data.umask
Description
File permissions that should be used to write data files when hbase.data.umask.enable is true
Default
000
hbase.snapshot.enabled
Description
Set to true to allow snapshots to be taken / restored / cloned.
Default
true
hbase.snapshot.restore.take.failsafe.snapshot
Description
Set to true to take a snapshot before the restore operation. The snapshot taken will be used in
case of failure, to restore the previous state. At the end of the restore operation this snapshot
will be deleted.
Default
true
hbase.snapshot.restore.failsafe.name
Description
Name of the failsafe snapshot taken by the restore operation. You can use the {snapshot.name},
{table.name} and {restore.timestamp} variables to create a name based on what you are
restoring.
Default
hbase-failsafe-{snapshot.name}-{restore.timestamp}
hbase.server.compactchecker.interval.multiplier
Description
The number that determines how often we scan to see if compaction is necessary. Normally,
compactions are done after some events (such as a memstore flush), but if a region hasn’t received
many writes for some time, or due to different compaction policies, it may be necessary to check it
periodically. The interval between checks is hbase.server.compactchecker.interval.multiplier
multiplied by hbase.server.thread.wakefrequency.
Default
1000
hbase.lease.recovery.timeout
Description
How long we wait on dfs lease recovery in total before giving up.
Default
900000
hbase.lease.recovery.dfs.timeout
Description
How long between dfs recover lease invocations. Should be larger than the sum of the time it
takes for the namenode to issue a block recovery command to a datanode (dfs.heartbeat.interval)
and the time it takes for the primary datanode, performing block recovery, to time out on a dead
datanode (usually dfs.client.socket-timeout). See the end of HBASE-8389 for more.
Default
64000
hbase.column.max.version
Description
New column family descriptors will use this value as the default number of versions to keep.
Default
1
dfs.client.read.shortcircuit
Description
If set to true, this configuration parameter enables short-circuit local reads.
Default
false
dfs.domain.socket.path
Description
This is a path to a UNIX domain socket that will be used for communication between the
DataNode and local HDFS clients, if dfs.client.read.shortcircuit is set to true. If the string "_PORT"
is present in this path, it will be replaced by the TCP port of the DataNode. Be careful about
permissions for the directory that hosts the shared domain socket; dfsclient will complain if
open to other users than the HBase user.
Default
none
hbase.dfs.client.read.shortcircuit.buffer.size
Description
If the DFSClient configuration dfs.client.read.shortcircuit.buffer.size is unset, we will use what is
configured here as the short circuit read default direct byte buffer size. DFSClient native default
is 1MB; HBase keeps its HDFS files open so number of file blocks * 1MB soon starts to add up and
threaten OOME because of a shortage of direct memory. So, we set it down from the default.
Make it > the default hbase block size set in the HColumnDescriptor which is usually 64k.
Default
131072
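A sketch of enabling short-circuit local reads using the properties above. The socket path is an assumption; it must match the dfs.domain.socket.path configured on the local DataNode:

```xml
<!-- Illustrative short-circuit read configuration; the socket path must
     match what the local DataNode is configured with. -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
<property>
  <name>hbase.dfs.client.read.shortcircuit.buffer.size</name>
  <value>131072</value>
</property>
```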
hbase.regionserver.checksum.verify
Description
If set to true (the default), HBase verifies the checksums for hfile blocks. HBase writes
checksums inline with the data when it writes out hfiles. HDFS (as of this writing) writes
checksums to a separate file from the data file, necessitating extra seeks. Setting this flag saves
some on i/o. Checksum verification by HDFS will be internally disabled on hfile streams when
this flag is set. If the hbase-checksum verification fails, we will switch back to using HDFS
checksums (so do not disable HDFS checksums! And besides this feature applies to hfiles only,
not to WALs). If this parameter is set to false, then hbase will not verify any checksums, instead
it will depend on checksum verification being done in the HDFS client.
Default
true
hbase.hstore.bytes.per.checksum
Description
Number of bytes in a newly created checksum chunk for HBase-level checksums in hfile blocks.
Default
16384
hbase.hstore.checksum.algorithm
Description
Name of an algorithm that is used to compute checksums. Possible values are NULL, CRC32,
CRC32C.
Default
CRC32C
hbase.client.scanner.max.result.size
Description
Maximum number of bytes returned when calling a scanner’s next method. Note that when a
single row is larger than this limit the row is still returned completely. The default value is 2MB,
which is good for 1-Gigabit-Ethernet networks. With faster and/or higher-latency networks this
value should be increased.
Default
2097152
hbase.server.scanner.max.result.size
Description
Maximum number of bytes returned when calling a scanner’s next method. Note that when a
single row is larger than this limit the row is still returned completely. The default value is
100MB. This is a safety setting to protect the server from OOM situations.
Default
104857600
hbase.status.published
Description
This setting activates the publication by the master of the status of the region servers. When a
region server dies and its recovery starts, the master will push this information to the client
application, to let it cut the connection immediately instead of waiting for a timeout.
Default
false
hbase.status.publisher.class
Description
Implementation of the status publication with a multicast message.
Default
org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher
hbase.status.listener.class
Description
Implementation of the status listener with a multicast message.
Default
org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener
hbase.status.multicast.address.ip
Description
Multicast address to use for the status publication by multicast.
Default
226.1.1.3
hbase.status.multicast.address.port
Description
Multicast port to use for the status publication by multicast.
Default
16100
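The status publication properties above work as a set. A sketch enabling multicast status publication, with the default address and port written out explicitly:

```xml
<!-- Illustrative multicast status publication; address and port are the defaults. -->
<property>
  <name>hbase.status.published</name>
  <value>true</value>
</property>
<property>
  <name>hbase.status.multicast.address.ip</name>
  <value>226.1.1.3</value>
</property>
<property>
  <name>hbase.status.multicast.address.port</name>
  <value>16100</value>
</property>
```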
hbase.dynamic.jars.dir
Description
The directory from which the custom filter JARs can be loaded dynamically by the region server
without the need to restart. However, an already loaded filter/co-processor class would not be
un-loaded. See HBASE-1936 for more details. Does not apply to coprocessors.
Default
${hbase.rootdir}/lib
hbase.security.authentication
Description
Controls whether or not secure authentication is enabled for HBase. Possible values are 'simple'
(no authentication), and 'kerberos'.
Default
simple
hbase.rest.filter.classes
Description
Servlet filters for REST service.
Default
org.apache.hadoop.hbase.rest.filter.GzipFilter
hbase.master.loadbalancer.class
Description
Class used to execute region balancing when the balancer period fires. See the class comment for
more on how it works: http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/
balancer/StochasticLoadBalancer.html It replaced the DefaultLoadBalancer as the default (the
DefaultLoadBalancer has since been renamed the SimpleLoadBalancer).
Default
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer
hbase.master.loadbalance.bytable
Description
Whether to factor in the table name when the balancer runs (balance regions on a per-table basis).
Default
false
hbase.master.normalizer.class
Description
Class used to execute the region normalization when the period occurs. See the class comment
for more on how it works http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/
normalizer/SimpleRegionNormalizer.html
Default
org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer
hbase.rest.csrf.enabled
Description
Set to true to enable protection against cross-site request forgery (CSRF)
Default
false
hbase.rest-csrf.browser-useragents-regex
Description
A comma-separated list of regular expressions used to match against an HTTP request’s User-
Agent header when protection against cross-site request forgery (CSRF) is enabled for REST
server by setting hbase.rest.csrf.enabled to true. If the incoming User-Agent matches any of these
regular expressions, then the request is considered to be sent by a browser, and therefore CSRF
prevention is enforced. If the request’s User-Agent does not match any of these regular
expressions, then the request is considered to be sent by something other than a browser, such
as scripted automation. In this case, CSRF is not a potential attack vector, so the prevention is not
enforced. This helps achieve backwards-compatibility with existing automation that has not
been updated to send the CSRF prevention header.
Default
Mozilla.,Opera.
hbase.security.exec.permission.checks
Description
If this setting is enabled and ACL based access control is active (the AccessController coprocessor
is installed either as a system coprocessor or on a table as a table coprocessor) then you must
grant all relevant users EXEC privilege if they require the ability to execute coprocessor
endpoint calls. EXEC privilege, like any other permission, can be granted globally to a user, or to
a user on a per table or per namespace basis. For more information on coprocessor endpoints,
see the coprocessor section of the HBase online manual. For more information on granting or
revoking permissions using the AccessController, see the security section of the HBase online
manual.
Default
false
hbase.procedure.regionserver.classes
Description
A comma-separated list of org.apache.hadoop.hbase.procedure.RegionServerProcedureManager
procedure managers that are loaded by default on the active HRegionServer process. The
lifecycle methods (init/start/stop) will be called by the active HRegionServer process to perform
the specific globally barriered procedure. After implementing your own
RegionServerProcedureManager, just put it in HBase’s classpath and add the fully qualified class
name here.
Default
none
hbase.procedure.master.classes
Description
A comma-separated list of org.apache.hadoop.hbase.procedure.MasterProcedureManager
procedure managers that are loaded by default on the active HMaster process. A procedure is
identified by its signature and users can use the signature and an instant name to trigger an
execution of a globally barriered procedure. After implementing your own
MasterProcedureManager, just put it in HBase’s classpath and add the fully qualified class name
here.
Default
none
hbase.coordinated.state.manager.class
Description
Fully qualified name of class implementing coordinated state manager.
Default
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager
hbase.regionserver.storefile.refresh.period
Description
The period (in milliseconds) for refreshing the store files for the secondary regions. 0 means this
feature is disabled. Secondary regions see new files (from flushes and compactions) from the
primary once the secondary region refreshes the list of files in the region (there is no
notification mechanism). But too-frequent refreshes might cause extra Namenode pressure. If
the files cannot be refreshed for longer than the HFile TTL (hbase.master.hfilecleaner.ttl), the
requests are rejected. Configuring the HFile TTL to a larger value is also recommended with this
setting.
Default
0
hbase.region.replica.replication.enabled
Description
Whether asynchronous WAL replication to the secondary region replicas is enabled or not. If
this is enabled, a replication peer named "region_replica_replication" will be created which will
tail the logs and replicate the mutations to region replicas for tables that have region replication
> 1. If this is enabled once, disabling this replication also requires disabling the replication peer
using shell or Admin java class. Replication to secondary region replicas works over standard
inter-cluster replication.
Default
false
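Used together with hbase.regionserver.storefile.refresh.period described above, enabling asynchronous replication to read replicas might be sketched as follows; the 30-second refresh period is an arbitrary example value:

```xml
<!-- Illustrative region replica setup; the 30s refresh period is an example. -->
<property>
  <name>hbase.region.replica.replication.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.storefile.refresh.period</name>
  <value>30000</value>
</property>
```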
hbase.http.filter.initializers
Description
A comma separated list of class names. Each class in the list must extend
org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will be initialized. Then,
the Filter will be applied to all user facing jsp and servlet web pages. The ordering of the list
defines the ordering of the filters. The default StaticUserWebFilter adds a user principal as
defined by the hbase.http.staticuser.user property.
Default
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter
hbase.security.visibility.mutations.checkauths
Description
If this property is enabled, HBase will check whether the labels in the visibility expression are
associated with the user issuing the mutation.
Default
false
hbase.http.max.threads
Description
The maximum number of threads that the HTTP Server will create in its ThreadPool.
Default
16
hbase.replication.rpc.codec
Description
The codec that is to be used when replication is enabled so that the tags are also replicated. This
is used along with HFileV3 which supports tags in them. If tags are not used or if the hfile
version used is HFileV2 then KeyValueCodec can be used as the replication codec. Note that
using KeyValueCodecWithTags for replication when there are no tags causes no harm.
Default
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags
hbase.replication.source.maxthreads
Description
The maximum number of threads any replication source will use for shipping edits to the sinks
in parallel. This also limits the number of chunks each replication batch is broken into. Larger
values can improve the replication throughput between the master and slave clusters. The
default of 10 will rarely need to be changed.
Default
10
hbase.http.staticuser.user
Description
The user name to filter as, on static web filters while rendering content. An example use is the
HDFS web UI (user to be used for browsing files).
Default
dr.stack
hbase.regionserver.handler.abort.on.error.percent
Description
The percent of region server RPC threads that must fail for the RS to abort. -1 disables aborting;
0 aborts if even a single handler has died; 0.x aborts only when this percent of handlers have
died; 1 aborts only when all of the handlers have died.
Default
0.5
hbase.mob.file.cache.size
Description
Number of opened file handlers to cache. A larger value will benefit reads by providing more
file handlers per mob file cache and will reduce frequent file opening and closing. However, if
this is set too high, it could lead to a "too many opened file handlers" error. The default value is 1000.
Default
1000
hbase.mob.cache.evict.period
Description
The amount of time in seconds before the mob cache evicts cached mob files. The default value
is 3600 seconds.
Default
3600
hbase.mob.cache.evict.remain.ratio
Description
The ratio (between 0.0 and 1.0) of files that remain cached after an eviction is triggered when
the number of cached mob files exceeds hbase.mob.file.cache.size. The default value is 0.5f.
Default
0.5f
hbase.master.mob.ttl.cleaner.period
Description
The period at which ExpiredMobFileCleanerChore runs, in seconds. The default value is one
day. The MOB file name includes only the date part of the file creation time, and we use this time
when deciding TTL expiry of the files, so the removal of TTL-expired files might be delayed. The
max delay might be 24 hrs.
Default
86400
hbase.mob.compaction.mergeable.threshold
Description
If the size of a mob file is less than this value, it’s regarded as a small file and needs to be merged
in mob compaction. The default value is 1280MB.
Default
1342177280
hbase.mob.delfile.max.count
Description
The max number of del files allowed in a mob compaction. When the number of existing del files
is larger than this value, they are merged until the number of del files is not larger than this
value. The default value is 3.
Default
3
hbase.mob.compaction.batch.size
Description
The max number of mob files allowed in a batch of mob compaction. Mob compaction merges
small mob files into bigger ones. If the number of small files is very large, it could lead to a
"too many opened file handlers" error during the merge, so the merge has to be split into batches.
This value limits the number of mob files selected in one batch of mob compaction. The default
value is 100.
Default
100
hbase.mob.compaction.chore.period
Description
The period that MobCompactionChore runs. The unit is second. The default value is one week.
Default
604800
hbase.mob.compactor.class
Description
Implementation of mob compactor, the default one is PartitionedMobCompactor.
Default
org.apache.hadoop.hbase.mob.compactions.PartitionedMobCompactor
hbase.mob.compaction.threads.max
Description
The max number of threads used in MobCompactor.
Default
1
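As a sketch, the MOB cache and compaction knobs described above might be tuned together in hbase-site.xml; the values below are purely illustrative, not recommendations:

```xml
<!-- Illustrative MOB tuning; values are examples only. -->
<property>
  <name>hbase.mob.file.cache.size</name>
  <value>2000</value>
</property>
<property>
  <name>hbase.mob.compaction.batch.size</name>
  <value>100</value>
</property>
<property>
  <name>hbase.mob.compaction.threads.max</name>
  <value>2</value>
</property>
```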
hbase.snapshot.master.timeout.millis
Description
Timeout for master for the snapshot procedure execution.
Default
300000
hbase.snapshot.region.timeout
Description
Timeout for regionservers to keep threads in snapshot request pool waiting.
Default
300000
hbase.rpc.rows.warning.threshold
Description
Number of rows in a batch operation above which a warning will be logged.
Default
5000
hbase.master.wait.on.service.seconds
Description
Default is 5 minutes. Make it 30 seconds for tests. See HBASE-19794 for some context.
Default
30
7.3. hbase-env.sh
Set HBase environment variables in this file. Examples include options to pass to the JVM when an
HBase daemon starts, such as heap size and garbage collector configs. You can also set
configuration here for log directories, niceness, ssh options, where to locate process pid files, etc.
Open the file at conf/hbase-env.sh and peruse its content. Each option is fairly well documented. Add
your own environment variables here if you want them read by HBase daemons on startup.
Changes here will require a cluster restart for HBase to notice the change.
7.4. log4j.properties
Edit this file to change the rate at which HBase log files are rolled and to change the level at
which HBase logs messages.
Changes here will require a cluster restart for HBase to notice the change, though log levels can
be changed for particular daemons via the HBase UI.
7.5. Client configuration and dependencies connecting
to an HBase cluster
If you are running HBase in standalone mode, you don’t need to configure anything for your client
to work, provided the client and server are on the same machine.
Since the HBase Master may move around, clients bootstrap by looking to ZooKeeper for current
critical locations. ZooKeeper is where all these values are kept. Thus clients require the location of
the ZooKeeper ensemble before they can do anything else. Usually this ensemble location is kept
out in the hbase-site.xml and is picked up by the client from the CLASSPATH.
If you are configuring an IDE to run an HBase client, you should include the conf/ directory on your
classpath so hbase-site.xml settings can be found (or add src/test/resources to pick up the hbase-
site.xml used by tests).
For Java applications using Maven, including the hbase-shaded-client module is the recommended
dependency when connecting to a cluster:
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>2.0.0</version>
</dependency>
A basic example hbase-site.xml for client only may look as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>example1,example2,example3</value>
    <description>Comma separated list of servers in the ZooKeeper ensemble.
    </description>
  </property>
</configuration>
7.5.1. Java client configuration
The configuration used by a Java client is kept in an HBaseConfiguration instance.
The factory method on HBaseConfiguration, HBaseConfiguration.create();, on invocation, will read
in the content of the first hbase-site.xml found on the client’s CLASSPATH, if one is present (invocation
will also factor in any hbase-default.xml found; an hbase-default.xml ships inside the
hbase-X.X.X.jar). It is also possible to specify configuration directly without having to read from an
hbase-site.xml. For example, to set the ZooKeeper ensemble for the cluster programmatically, do as
follows:
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost"); // Here we are running ZooKeeper locally
If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a
comma-separated list (just as in the hbase-site.xml file). This populated Configuration instance can
then be passed to a Table, and so on.
7.6. Timeout settings
HBase provides many timeout settings to limit the execution time of different remote operations.
The hbase.rpc.timeout property limits how long an RPC call can run before it times out. You can also
specify a timeout for read and write operations using hbase.rpc.read.timeout and
hbase.rpc.write.timeout configuration properties. In the absence of these properties
hbase.rpc.timeout will be used. A higher-level timeout is hbase.client.operation.timeout which is
valid for each client call. The timeout for scan operations is controlled differently; to set it, use
the hbase.client.scanner.timeout.period property.
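The timeout properties just described can be sketched in hbase-site.xml as follows; the millisecond values shown are illustrative examples, not recommendations:

```xml
<!-- Illustrative client timeout settings (milliseconds); values are examples. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value>
</property>
```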
Chapter 8. Example Configurations
8.1. Basic Distributed HBase Install
Here is a basic configuration example for a distributed ten node cluster:

* The nodes are named example0, example1, etc., through node example9 in this example.
* The HBase Master and the HDFS NameNode are running on the node example0.
* RegionServers run on nodes example1-example9.
* A 3-node ZooKeeper ensemble runs on example1, example2, and example3 on the default ports.
* ZooKeeper data is persisted to the directory /export/zookeeper.
Below we show what the main configuration files — hbase-site.xml, regionservers, and hbase-env.sh — found in the HBase conf directory might look like.
8.1.1. hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>example1,example2,example3</value>
    <description>Comma separated list of servers in the ZooKeeper ensemble.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/export/zookeeper</value>
    <description>Property from ZooKeeper config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://example0:8020/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper
      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
    </description>
  </property>
</configuration>
8.1.2. regionservers
In this file you list the nodes that will run RegionServers. In our case, these nodes are example1-
example9.
example1
example2
example3
example4
example5
example6
example7
example8
example9
8.1.3. hbase-env.sh
The following lines in the hbase-env.sh file show how to set the JAVA_HOME environment variable
(required for HBase) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy
and paste this example, be sure to adjust the JAVA_HOME to suit your environment.
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0/
# The maximum amount of heap to use. Default is left to JVM default.
export HBASE_HEAPSIZE=4G
Use rsync to copy the content of the conf directory to all nodes of the cluster.
Chapter 9. The Important Configurations
Below we list some important configurations. We’ve divided this section into required configuration
and worth-a-look recommended configs.
9.1. Required Configurations
Review the os and hadoop sections.
9.1.1. Big Cluster Configurations
If you have a cluster with a lot of regions, it is possible that one RegionServer checks in shortly after
the Master starts while all the remaining RegionServers lag behind. This first server to check in will
be assigned all regions, which is not optimal. To prevent this scenario, raise the
hbase.master.wait.on.regionservers.mintostart property from its default value of 1. See HBASE-
6389 Modify the conditions to ensure that Master waits for sufficient number of Region Servers
before starting region assignments for more detail.
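As a sketch, the property can be raised in hbase-site.xml; the value below is illustrative and should reflect your cluster size:

```xml
<property>
  <name>hbase.master.wait.on.regionservers.mintostart</name>
  <value>5</value>
</property>
```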
9.2. Recommended Configurations
9.2.1. ZooKeeper Configuration
zookeeper.session.timeout
The default timeout is three minutes (specified in milliseconds). This means that if a server crashes,
it will be three minutes before the Master notices the crash and starts recovery. You might need to
tune the timeout down to a minute or even less so the Master notices failures sooner. Before
changing this value, be sure you have your JVM garbage collection configuration under control,
otherwise, a long garbage collection that lasts beyond the ZooKeeper session timeout will take out
your RegionServer. (You might be fine with this — you probably want recovery to start on the
server if a RegionServer has been in GC for a long period of time).
To change this configuration, edit hbase-site.xml, copy the changed file across the cluster and
restart.
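As a sketch, lowering the session timeout to one minute would look like this in hbase-site.xml (the value is in milliseconds; one minute is illustrative, not a recommendation):

```xml
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
  <description>One minute, in milliseconds.</description>
</property>
```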
We set this value high to save our having to field questions up on the mailing lists asking why a
RegionServer went down during a massive import. The usual cause is that their JVM is untuned and
they are running into long GC pauses. Our thinking is that while users are getting familiar with
HBase, we’d save them having to know all of its intricacies. Later when they’ve built some
confidence, then they can play with configuration such as this.
Number of ZooKeeper Instances
See zookeeper.
9.2.2. HDFS Configurations
dfs.datanode.failed.volumes.tolerated
This is the "…number of volumes that are allowed to fail before a DataNode stops offering service.
By default any volume failure will cause a datanode to shutdown" from the hdfs-default.xml
description. You might want to set this to about half the number of your available disks.
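For example, on a DataNode with 12 data disks, half would be 6; a hypothetical hdfs-site.xml entry might look like this (the value is illustrative):

```xml
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>6</value>
</property>
```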
hbase.regionserver.handler.count
This setting defines the number of threads that are kept open to answer incoming requests to user
tables. The rule of thumb is to keep this number low when the payload per request approaches the
MB (big puts, scans using a large cache) and high when the payload is small (gets, small puts, ICVs,
deletes). The total size of the queries in progress is limited by the setting
hbase.ipc.server.max.callqueue.size.
It is safe to set that number to the maximum number of incoming clients if their payload is small,
the typical example being a cluster that serves a website since puts aren’t typically buffered and
most of the operations are gets.
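For reference, the handler count is set in hbase-site.xml; the value below is only illustrative and should be tuned to your payload sizes as described above:

```xml
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
</property>
```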
The reason why it is dangerous to keep this setting high is that the aggregate size of all the puts that
are currently happening in a region server may impose too much pressure on its memory, or even
trigger an OutOfMemoryError. A RegionServer running on low memory will trigger its JVM’s
garbage collector to run more frequently up to a point where GC pauses become noticeable (the
reason being that all the memory used to keep all the requests' payloads cannot be trashed, no
matter how hard the garbage collector tries). After some time, the overall cluster throughput is
affected since every request that hits that RegionServer will take longer, which exacerbates the
problem even more.
You can get a sense of whether you have too few or too many handlers by enabling rpc-level
logging on an individual RegionServer and then tailing its logs (queued requests consume memory).
9.2.3. Configuration for large memory machines
HBase ships with a reasonable, conservative configuration that will work on nearly all machine
types that people might want to test with. If you have larger machines — for example, an 8G or
larger HBase heap — you might find the following configuration options helpful. TODO.
9.2.4. Compression
You should consider enabling ColumnFamily compression. There are several options that are near-
frictionless and in most cases boost performance by reducing the size of StoreFiles and thus
reducing I/O.
See compression for more information.
9.2.5. Configuring the size and number of WAL files
HBase uses the WAL (write-ahead log) to recover memstore data that has not been flushed to disk in case of a
RegionServer failure. These WAL files should be configured to be slightly smaller than the HDFS block size
(by default an HDFS block is 64Mb and a WAL file is ~60Mb).
HBase also has a limit on the number of WAL files, designed to ensure there’s never too much data
that needs to be replayed during recovery. This limit needs to be set according to memstore
configuration, so that all the necessary data will fit. It is recommended to allocate enough WAL
files to store at least that much data (when all memstores are close to full). For example, with a 16Gb
RS heap, default memstore settings (0.4), and default WAL file size (~60Mb), 16Gb*0.4/60 ≈ 109, so the
starting point for WAL file count is ~109. However, as all memstores are not expected to be full all
the time, fewer WAL files can be allocated.
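The arithmetic above can be sketched as a tiny self-contained calculation (the figures are the ones assumed in the text: 16 GB heap, the 0.4 default global memstore fraction, ~60 MB per WAL file):

```java
public class WalFileCount {
    public static void main(String[] args) {
        // Assumed figures from the text above.
        double heapMb = 16 * 1024;        // 16 GB RegionServer heap, in MB
        double memstoreFraction = 0.4;    // default global memstore fraction
        double walFileMb = 60;            // ~60 MB per WAL file

        // heap * memstore fraction gives the data that may need replay;
        // dividing by the WAL file size gives a starting file count.
        long walFiles = Math.round(heapMb * memstoreFraction / walFileMb);
        System.out.println("Starting point for WAL file count: " + walFiles);
        // prints: Starting point for WAL file count: 109
    }
}
```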
9.2.6. Managed Splitting
HBase generally handles splitting of your regions based upon the settings in your hbase-default.xml
and hbase-site.xml configuration files. Important settings include
hbase.regionserver.region.split.policy, hbase.hregion.max.filesize,
hbase.regionserver.regionSplitLimit. A simplistic view of splitting is that when a region grows to
hbase.hregion.max.filesize, it is split. For most usage patterns, you should use automatic splitting.
See manual region splitting decisions for more information about manual region splitting.
Instead of allowing HBase to split your regions automatically, you can choose to manage the
splitting yourself. Manually managing splits works if you know your keyspace well; otherwise let
HBase figure out where to split for you. Manual splitting can mitigate region creation and movement
under load. It also makes it so region boundaries are known and invariant (if you disable region
splitting). If you use manual splits, it is easier doing staggered, time-based major compactions to
spread out your network IO load.
Disable Automatic Splitting
To disable automatic splitting, you can set the region split policy in either the cluster configuration or the table
configuration to org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy.
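For the cluster-wide case, a sketch of the hbase-site.xml setting:

```xml
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy</value>
</property>
```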
Automatic Splitting Is Recommended
If you disable automatic splits to diagnose a problem or during a period of fast
data growth, it is recommended to re-enable them when your situation becomes
more stable. The potential benefits of managing region splits yourself are not
undisputed.
Determine the Optimal Number of Pre-Split Regions
The optimal number of pre-split regions depends on your application and environment. A good rule
of thumb is to start with 10 pre-split regions per server and watch as data grows over time. It is
better to err on the side of too few regions and perform rolling splits later. The optimal number of
regions depends upon the largest StoreFile in your region. The size of the largest StoreFile will
increase with time if the amount of data grows. The goal is for the largest region to be just large
enough that the compaction selection algorithm only compacts it during a timed major compaction.
Otherwise, the cluster can be prone to compaction storms with a large number of regions under
compaction at the same time. It is important to understand that the data growth causes compaction
storms and not the manual split decision.
If the regions are split into too many large regions, you can increase the major compaction interval
by configuring HConstants.MAJOR_COMPACTION_PERIOD. The
org.apache.hadoop.hbase.util.RegionSplitter utility also provides a network-IO-safe rolling split of
all regions.
9.2.7. Managed Compactions
By default, major compactions are scheduled to run once in a 7-day period.
If you need to control exactly when and how often major compaction runs, you can disable
managed major compactions. See the entry for hbase.hregion.majorcompaction in the
compaction.parameters table for details.
Do Not Disable Major Compactions
Major compactions are absolutely necessary for StoreFile clean-up. Do not disable
them altogether. You can run major compactions manually via the HBase shell or
via the Admin API.
For more information about compactions and the compaction file selection process, see compaction.
9.2.8. Speculative Execution
Speculative Execution of MapReduce tasks is on by default. For HBase clusters it is generally
advised to turn off Speculative Execution at the system level unless you need it for a specific case,
where it can be configured per-job. Set the properties mapreduce.map.speculative and
mapreduce.reduce.speculative to false.
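As a sketch, the system-level setting would go in mapred-site.xml:

```xml
<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
```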
9.3. Other Configurations
9.3.1. Balancer
The balancer is a periodic operation which is run on the master to redistribute regions on the
cluster. It is configured via hbase.balancer.period and defaults to 300000 (5 minutes).
See master.processes.loadbalancer for more information on the LoadBalancer.
9.3.2. Disabling Blockcache
Do not turn off block cache (You’d do it by setting hfile.block.cache.size to zero). Currently we do
not do well if you do this because the RegionServer will spend all its time loading HFile indices over
and over again. If your working set is such that block cache does you no good, at least size the block
cache such that HFile indices will stay up in the cache (you can get a rough idea on the size you
need by surveying RegionServer UIs; you’ll see index block size accounted near the top of the
webpage).
9.3.3. Nagle’s or the small package problem
If an occasional delay of around 40ms is seen in operations against HBase, try the Nagle's setting. For
example, see the user mailing list thread, Inconsistent scan performance with caching set to 1, and
the issue cited therein, where setting tcpnodelay improved scan speeds. You might also see the
graphs at the tail of HBASE-7008 Set scanner caching to a better default, where our Lars Hofhansl
tries various data sizes with Nagle's on and off, measuring the effect.
9.3.4. Better Mean Time to Recover (MTTR)
This section is about configurations that will make servers come back faster after a failure. See the
Devaraj Das and Nicolas Liochon blog post Introduction to HBase Mean Time to Recover (MTTR) for
a brief introduction.
The issue HBASE-8354 forces Namenode into loop with lease recovery requests is messy but has a
bunch of good discussion toward the end on low timeouts and how to cause faster recovery
including citation of fixes added to HDFS. Read the Varun Sharma comments. The below suggested
configurations are Varun’s suggestions distilled and tested. Make sure you are running on a late-
version HDFS so you have the fixes he refers to and the ones he himself added to HDFS that help HBase MTTR
(e.g. HDFS-3703, HDFS-3712, and HDFS-4791 — Hadoop 2 for sure has them and late Hadoop 1 has
some). Set the following in the RegionServer.
<property>
Ê <name>hbase.lease.recovery.dfs.timeout</name>
Ê <value>23000</value>
Ê <description>How much time we allow elapse between calls to recover lease.
Ê Should be larger than the dfs timeout.</description>
</property>
<property>
Ê <name>dfs.client.socket-timeout</name>
Ê <value>10000</value>
Ê <description>Down the DFS timeout from 60 to 10 seconds.</description>
</property>
And on the NameNode/DataNode side, set the following to enable 'staleness' introduced in HDFS-3703, HDFS-3912.
<property>
Ê <name>dfs.client.socket-timeout</name>
Ê <value>10000</value>
Ê <description>Down the DFS timeout from 60 to 10 seconds.</description>
</property>
<property>
Ê <name>dfs.datanode.socket.write.timeout</name>
Ê <value>10000</value>
Ê <description>Down the DFS timeout from 8 * 60 to 10 seconds.</description>
</property>
<property>
Ê <name>ipc.client.connect.timeout</name>
Ê <value>3000</value>
Ê <description>Down from 60 seconds to 3.</description>
</property>
<property>
Ê <name>ipc.client.connect.max.retries.on.timeouts</name>
Ê <value>2</value>
Ê <description>Down from 45 seconds to 3 (2 == 3 retries).</description>
</property>
<property>
Ê <name>dfs.namenode.avoid.read.stale.datanode</name>
Ê <value>true</value>
Ê <description>Enable stale state in hdfs</description>
</property>
<property>
Ê <name>dfs.namenode.stale.datanode.interval</name>
Ê <value>20000</value>
Ê <description>Down from default 30 seconds</description>
</property>
<property>
Ê <name>dfs.namenode.avoid.write.stale.datanode</name>
Ê <value>true</value>
Ê <description>Enable stale state in hdfs</description>
</property>
9.3.5. JMX
JMX (Java Management Extensions) provides built-in instrumentation that enables you to monitor
and manage the Java VM. To enable monitoring and management from remote systems, you need
to set system property com.sun.management.jmxremote.port (the port number through which you
want to enable JMX RMI connections) when you start the Java VM. See the official documentation
for more information. Historically, besides the port mentioned above, JMX opens two additional
random TCP listening ports, which could lead to port conflict problems. (See HBASE-10289 for
details.)
As an alternative, you can use the coprocessor-based JMX implementation provided by HBase. To
enable it, add the property below to hbase-site.xml:
<property>
Ê <name>hbase.coprocessor.regionserver.classes</name>
Ê <value>org.apache.hadoop.hbase.JMXListener</value>
</property>
DO NOT set com.sun.management.jmxremote.port for Java VM at the same time.
Currently it supports the Master and RegionServer Java VMs. By default, JMX listens on TCP port
10102; you can further configure the port using the properties below:
<property>
Ê <name>regionserver.rmi.registry.port</name>
Ê <value>61130</value>
</property>
<property>
Ê <name>regionserver.rmi.connector.port</name>
Ê <value>61140</value>
</property>
The registry port can be shared with the connector port in most cases, so you only need to configure
regionserver.rmi.registry.port. However, if you want to use SSL communication, the two ports must be
configured to different values.
By default, password authentication and SSL communication are disabled. To enable password
authentication, update hbase-env.sh as below:
export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.password.file=your_password_file \
  -Dcom.sun.management.jmxremote.access.file=your_access_file"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE "
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "
See example password/access file under $JRE_HOME/lib/management.
To enable SSL communication with password authentication, follow the steps below:
#1. generate a key pair, stored in myKeyStore
keytool -genkey -alias jconsole -keystore myKeyStore
#2. export it to file jconsole.cert
keytool -export -alias jconsole -keystore myKeyStore -file jconsole.cert
#3. copy jconsole.cert to jconsole client machine, import it to jconsoleKeyStore
keytool -import -alias jconsole -keystore jconsoleKeyStore -file jconsole.cert
Then update hbase-env.sh as below:
export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=true \
  -Djavax.net.ssl.keyStore=/home/tianq/myKeyStore \
  -Djavax.net.ssl.keyStorePassword=your_password_in_step_1 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.password.file=your_password_file \
  -Dcom.sun.management.jmxremote.access.file=your_access_file"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE "
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "
Finally start jconsole on the client using the key store:
jconsole -J-Djavax.net.ssl.trustStore=/home/tianq/jconsoleKeyStore
To enable the HBase JMX implementation on the Master, you also need to add the property below
to hbase-site.xml:
<property>
Ê <name>hbase.coprocessor.master.classes</name>
Ê <value>org.apache.hadoop.hbase.JMXListener</value>
</property>
The corresponding properties for port configuration are master.rmi.registry.port (by default
10101) and master.rmi.connector.port (by default the same as registry.port).
Chapter 10. Dynamic Configuration
It is possible to change a subset of the configuration without requiring a server restart. In the HBase
shell, the operations update_config and update_all_config will prompt a server or all servers to
reload configuration.
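For example, from the HBase shell (the server name argument, in host,port,startcode form, is illustrative):

```
hbase> update_config 'example1.example.org,16020,1602187392714'
hbase> update_all_config
```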
Only a subset of all configurations can currently be changed in the running server. Here are those
configurations:
Table 3. Configurations that support dynamic change
Key
hbase.ipc.server.fallback-to-simple-auth-allowed
hbase.cleaner.scan.dir.concurrent.size
hbase.regionserver.thread.compaction.large
hbase.regionserver.thread.compaction.small
hbase.regionserver.thread.split
hbase.regionserver.throughput.controller
hbase.regionserver.thread.hfilecleaner.throttle
hbase.regionserver.hfilecleaner.large.queue.size
hbase.regionserver.hfilecleaner.small.queue.size
hbase.regionserver.hfilecleaner.large.thread.count
hbase.regionserver.hfilecleaner.small.thread.count
hbase.regionserver.flush.throughput.controller
hbase.hstore.compaction.max.size
hbase.hstore.compaction.max.size.offpeak
hbase.hstore.compaction.min.size
hbase.hstore.compaction.min
hbase.hstore.compaction.max
hbase.hstore.compaction.ratio
hbase.hstore.compaction.ratio.offpeak
hbase.regionserver.thread.compaction.throttle
hbase.hregion.majorcompaction
hbase.hregion.majorcompaction.jitter
hbase.hstore.min.locality.to.skip.major.compact
hbase.hstore.compaction.date.tiered.max.storefile.age.millis
hbase.hstore.compaction.date.tiered.incoming.window.min
hbase.hstore.compaction.date.tiered.window.policy.class
hbase.hstore.compaction.date.tiered.single.output.for.minor.compaction
hbase.hstore.compaction.date.tiered.window.factory.class
hbase.offpeak.start.hour
hbase.offpeak.end.hour
hbase.oldwals.cleaner.thread.size
hbase.procedure.worker.keep.alive.time.msec
hbase.procedure.worker.add.stuck.percentage
hbase.procedure.worker.monitor.interval.msec
hbase.procedure.worker.stuck.threshold.msec
hbase.regions.slop
hbase.regions.overallSlop
hbase.balancer.tablesOnMaster
hbase.balancer.tablesOnMaster.systemTablesOnly
hbase.util.ip.to.rack.determiner
hbase.ipc.server.max.callqueue.length
hbase.ipc.server.priority.max.callqueue.length
hbase.ipc.server.callqueue.type
hbase.ipc.server.callqueue.codel.target.delay
hbase.ipc.server.callqueue.codel.interval
hbase.ipc.server.callqueue.codel.lifo.threshold
hbase.master.balancer.stochastic.maxSteps
hbase.master.balancer.stochastic.stepsPerRegion
hbase.master.balancer.stochastic.maxRunningTime
hbase.master.balancer.stochastic.runMaxSteps
hbase.master.balancer.stochastic.numRegionLoadsToRemember
hbase.master.loadbalance.bytable
hbase.master.balancer.stochastic.minCostNeedBalance
hbase.master.balancer.stochastic.localityCost
hbase.master.balancer.stochastic.rackLocalityCost
hbase.master.balancer.stochastic.readRequestCost
hbase.master.balancer.stochastic.writeRequestCost
hbase.master.balancer.stochastic.memstoreSizeCost
hbase.master.balancer.stochastic.storefileSizeCost
hbase.master.balancer.stochastic.regionReplicaHostCostKey
hbase.master.balancer.stochastic.regionReplicaRackCostKey
hbase.master.balancer.stochastic.regionCountCost
hbase.master.balancer.stochastic.primaryRegionCountCost
hbase.master.balancer.stochastic.moveCost
hbase.master.balancer.stochastic.maxMovePercent
hbase.master.balancer.stochastic.tableSkewCost
Chapter 11. HBase version number and
compatibility
11.1. Aspirational Semantic Versioning
Starting with the 1.0.0 release, HBase is working towards Semantic Versioning for its release
versioning. In summary:
Given a version number MAJOR.MINOR.PATCH, increment the:
•MAJOR version when you make incompatible API changes,
•MINOR version when you add functionality in a backwards-compatible manner, and
•PATCH version when you make backwards-compatible bug fixes.
•Additional labels for pre-release and build metadata are available as extensions to the
MAJOR.MINOR.PATCH format.
Compatibility Dimensions
In addition to the usual API versioning considerations HBase has other compatibility dimensions
that we need to consider.
Client-Server wire protocol compatibility
•Allows updating client and server out of sync.
•We could only allow upgrading the server first. I.e. the server would be backward compatible to
an old client, that way new APIs are OK.
•Example: A user should be able to use an old client to connect to an upgraded cluster.
Server-Server protocol compatibility
•Servers of different versions can co-exist in the same cluster.
•The wire protocol between servers is compatible.
•Workers for distributed tasks, such as replication and log splitting, can co-exist in the same
cluster.
•Dependent protocols (such as using ZK for coordination) will also not be changed.
•Example: A user can perform a rolling upgrade.
File format compatibility
•Support file formats backward and forward compatible
•Example: File, ZK encoding, directory layout is upgraded automatically as part of an HBase
upgrade. User can downgrade to the older version and everything will continue to work.
Client API compatibility
•Allow changing or removing existing client APIs.
•An API needs to be deprecated for a major version before we will change/remove it.
•APIs available in a patch version will be available in all later patch versions. However, new APIs
may be added which will not be available in earlier patch versions.
•New APIs introduced in a patch version will only be added in a source compatible way [1: See
'Source Compatibility' https://blogs.oracle.com/darcy/entry/kinds_of_compatibility]: i.e. code that
implements public APIs will continue to compile.
◦Example: A user using a newly deprecated API does not need to modify application code
with HBase API calls until the next major version.
Client Binary compatibility
•Client code written to APIs available in a given patch release can run unchanged (no
recompilation needed) against the new jars of later patch versions.
•Client code written to APIs available in a given patch release might not run against the old jars
from an earlier patch version.
◦Example: Old compiled client code will work unchanged with the new jars.
•If a Client implements an HBase Interface, a recompile MAY be required upgrading to a newer
minor version (See release notes for warning about incompatible changes). All effort will be
made to provide a default implementation so this case should not arise.
Server-Side Limited API compatibility (taken from Hadoop)
•Internal APIs are marked as Stable, Evolving, or Unstable
•This implies binary compatibility for coprocessors and plugins (pluggable classes, including
replication) as long as these are only using marked interfaces/classes.
•Example: Old compiled Coprocessor, Filter, or Plugin code will work unchanged with the new
jars.
Dependency Compatibility
•An upgrade of HBase will not require an incompatible upgrade of a dependent project, except
for Apache Hadoop.
•An upgrade of HBase will not require an incompatible upgrade of the Java runtime.
•Example: Upgrading HBase to a version that supports Dependency Compatibility won’t require
that you upgrade your Apache ZooKeeper service.
•Example: If your current version of HBase supported running on JDK 8, then an upgrade to a
version that supports Dependency Compatibility will also run on JDK 8.
Hadoop Versions
Previously, we tried to maintain dependency compatibility for the underlying Hadoop
service, but over the last few years this has proven untenable. While the HBase
project attempts to maintain support for older versions of Hadoop, we drop the
"supported" designator for minor versions that fail to continue to see releases.
Additionally, the Hadoop project has its own set of compatibility guidelines, which
means in some cases having to update to a newer supported minor release might
break some of our compatibility promises.
Operational Compatibility
•Metric changes
•Behavioral changes of services
•JMX APIs exposed via the /jmx/ endpoint
Summary
•A patch upgrade is a drop-in replacement. Any change that is not Java binary and source
compatible would not be allowed. [2: See http://docs.oracle.com/javase/specs/jls/se7/html/jls-
13.html.] Downgrading versions within patch releases may not be compatible.
•A minor upgrade requires no application/client code modification. Ideally it would be a drop-in
replacement but client code, coprocessors, filters, etc might have to be recompiled if new jars
are used.
•A major upgrade allows the HBase community to make breaking changes.
Table 4. Compatibility Matrix [4: Note that this indicates what could break, not that it will break. We
will/should add specifics in our release notes.]

                                           Major   Minor   Patch
Client-Server wire Compatibility           N       Y       Y
Server-Server Compatibility                N       Y       Y
File Format Compatibility                  N [3]   Y       Y
Client API Compatibility                   N       Y       Y
Client Binary Compatibility                N       N       Y
Server-Side Limited API Compatibility
  Stable                                   N       Y       Y
  Evolving                                 N       N       Y
  Unstable                                 N       N       N
Dependency Compatibility                   N       Y       Y
Operational Compatibility                  N       N       Y

[3: Running an offline upgrade tool without downgrade might be needed. We will typically only
support migrating data from major version X to major version X+1.]
11.1.1. HBase API Surface
HBase has a lot of API points, but for the compatibility matrix above, we differentiate between
Client API, Limited Private API, and Private API. HBase uses Apache Yetus Audience Annotations to
guide downstream expectations for stability.
•InterfaceAudience (javadocs): captures the intended audience, possible values include:
◦Public: safe for end users and external projects
◦LimitedPrivate: used for internals we expect to be pluggable, such as coprocessors
◦Private: strictly for use within HBase itself. Classes which are defined as IA.Private may be
used as parameters or return values for interfaces which are declared IA.LimitedPrivate.
Treat the IA.Private object as opaque; do not try to access its methods or fields directly.
•InterfaceStability (javadocs): describes what types of interface changes are permitted. Possible
values include:
◦Stable: the interface is fixed and is not expected to change
◦Evolving: the interface may change in future minor versions
◦Unstable: the interface may change at any time
Please keep in mind the following interactions between the InterfaceAudience and
InterfaceStability annotations within the HBase project:
•IA.Public classes are inherently stable and adhere to our stability guarantees relating to the
type of upgrade (major, minor, or patch).
•IA.LimitedPrivate classes should always be annotated with one of the given InterfaceStability
values. If they are not, you should presume they are IS.Unstable.
•IA.Private classes should be considered implicitly unstable, with no guarantee of stability
between releases.
HBase Client API
HBase Client API consists of all the classes or methods that are marked with
InterfaceAudience.Public interface. All main classes in hbase-client and dependent modules
have either InterfaceAudience.Public, InterfaceAudience.LimitedPrivate, or
InterfaceAudience.Private marker. Not all classes in other modules (hbase-server, etc) have the
marker. If a class is not annotated with one of these, it is assumed to be a
InterfaceAudience.Private class.
HBase LimitedPrivate API
LimitedPrivate annotation comes with a set of target consumers for the interfaces. Those
consumers are coprocessors, phoenix, replication endpoint implementations or similar. At this
point, HBase only guarantees source and binary compatibility for these interfaces between
patch versions.
HBase Private API
All classes annotated with InterfaceAudience.Private or all classes that do not have the
annotation are for HBase internal use only. The interfaces and method signatures can change at
any point in time. If you are relying on a particular interface that is marked Private, you should
open a jira to propose changing the interface to be Public or LimitedPrivate, or an interface
exposed for this purpose.
Binary Compatibility
When we say two HBase versions are compatible, we mean that the versions are wire and binary
compatible. Compatible HBase versions means that clients can talk to compatible but differently
versioned servers. It means too that you can just swap out the jars of one version and replace them
with the jars of another, compatible version and all will just work. Unless otherwise specified,
HBase point versions are (mostly) binary compatible. You can safely do rolling upgrades between
binary compatible versions; i.e. across maintenance releases: e.g. from 1.2.4 to 1.2.6. See the
discussion Does compatibility between versions also mean binary compatibility? on the HBase dev
mailing list.
11.2. Rolling Upgrades
A rolling upgrade is the process by which you update the servers in your cluster a server at a time.
You can rolling upgrade across HBase versions if they are binary or wire compatible. See Rolling
Upgrade Between Versions that are Binary/Wire Compatible for more on what this means. Coarsely,
a rolling upgrade consists of gracefully stopping each server, updating the software, and then restarting. You do this
for each server in the cluster. Usually you upgrade the Master first and then the RegionServers. See
Rolling Restart for tools that can help with the rolling upgrade process.
For example, in the below, HBase was symlinked to the actual HBase install. On upgrade, before
running a rolling restart over the cluster, we changed the symlink to point at the new HBase
software version and then ran
$ HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh --config ~/conf_hbase
The rolling-restart script will first gracefully stop and restart the master, and then each of the
RegionServers in turn. Because the symlink was changed, on restart the server will come up using
the new HBase version. Check logs for errors as the rolling upgrade proceeds.
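The symlink swap itself can be sketched as follows; the paths are illustrative stand-ins for the install layout described above:

```shell
# Illustrative stand-in for the install layout: the 'hbase' symlink is what the
# cluster scripts run from.
rm -rf /tmp/hbase-demo
mkdir -p /tmp/hbase-demo/hbase-1.2.4 /tmp/hbase-demo/hbase-1.2.6
ln -s /tmp/hbase-demo/hbase-1.2.4 /tmp/hbase-demo/hbase
# On upgrade, repoint the symlink before running rolling-restart.sh:
ln -sfn /tmp/hbase-demo/hbase-1.2.6 /tmp/hbase-demo/hbase
readlink /tmp/hbase-demo/hbase
```

Each server restarted by the rolling restart then comes up from the new target of the symlink.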
Rolling Upgrade Between Versions that are Binary/Wire Compatible
Unless otherwise specified, HBase minor versions are binary compatible. You can do a rolling
upgrade between HBase point versions. For example, you can go to 1.2.6 from 1.2.4 by doing a
rolling upgrade across the cluster, replacing the 1.2.4 binary with a 1.2.6 binary.
In the minor version-particular sections below, we call out where the versions are wire/protocol
compatible; in these cases, it is also possible to do a rolling upgrade.
Chapter 12. Rollback
Sometimes things don’t go as planned when attempting an upgrade. This section explains how to
perform a rollback to an earlier HBase release. Note that this should only be needed between Major
and some Minor releases. You should always be able to downgrade between HBase Patch releases
within the same Minor version. These instructions may require you to take steps before you start
the upgrade process, so be sure to read through this section beforehand.
12.1. Caveats
Rollback vs Downgrade
This section describes how to perform a rollback on an upgrade between HBase minor and major
versions. In this document, rollback refers to the process of taking an upgraded cluster and
restoring it to the old version while losing all changes that have occurred since upgrade. By contrast,
a cluster downgrade would restore an upgraded cluster to the old version while maintaining any
data written since the upgrade. We currently only offer instructions to rollback HBase clusters.
Further, rollback only works when these instructions are followed prior to performing the upgrade.
When these instructions talk about rollback vs downgrade of prerequisite cluster services (i.e.
HDFS), you should treat leaving the service version the same as a degenerate case of downgrade.
Replication
Unless you are doing an all-service rollback, the HBase cluster will lose any configured peers for
HBase replication. If your cluster is configured for HBase replication, then prior to following these
instructions you should document all replication peers. After performing the rollback you should
then add each documented peer back to the cluster. For more information on enabling HBase
replication, listing peers, and adding a peer see Managing and Configuring Cluster Replication. Note
also that data written to the cluster since the upgrade may or may not have already been replicated
to any peers. Determining which, if any, peers have seen replication data as well as rolling back the
data in those peers is out of the scope of this guide.
Data Locality
Unless you are doing an all-service rollback, going through a rollback procedure will likely destroy
all locality for Region Servers. You should expect degraded performance until after the cluster has
had time to go through compactions to restore data locality. Optionally, you can force a compaction
to speed this process up at the cost of generating cluster load.
Configurable Locations
The instructions below assume default locations for the HBase data directory and the HBase znode.
Both of these locations are configurable and you should verify the value used in your cluster before
proceeding. In the event that you have a different value, just replace the default with the one found
in your configuration.
•HBase data directory is configured via the key 'hbase.rootdir' and has a default value of '/hbase'.
•HBase znode is configured via the key 'zookeeper.znode.parent' and has a default value of '/hbase'.
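For reference, a minimal hbase-site.xml fragment overriding both locations might look like the following; the values shown are illustrative, not recommendations:

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:8020/hbase</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>
```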
12.2. All service rollback
If you will be performing a rollback of both the HDFS and ZooKeeper services, then HBase’s data
will be rolled back in the process.
Requirements
•Ability to rollback HDFS and ZooKeeper
Before upgrade
No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to
use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in
the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance
instead of within the same instance.
Performing a rollback
1. Stop HBase
2. Perform a rollback for HDFS and ZooKeeper (HBase should remain stopped)
3. Change the installed version of HBase to the previous version
4. Start HBase
5. Verify HBase contents—use the HBase shell to list tables and scan some known values.
12.3. Rollback after HDFS rollback and ZooKeeper
downgrade
If you will be rolling back HDFS but going through a ZooKeeper downgrade, then HBase will be in
an inconsistent state. You must ensure the cluster is not started until you complete this process.
Requirements
•Ability to rollback HDFS
•Ability to downgrade ZooKeeper
Before upgrade
No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to
use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in
the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance
instead of within the same instance.
Performing a rollback
1. Stop HBase
2. Perform a rollback for HDFS and a downgrade for ZooKeeper (HBase should remain stopped)
3. Change the installed version of HBase to the previous version
4. Clean out ZooKeeper information related to HBase. WARNING: This step will permanently
destroy all replication peers. Please see the section on HBase Replication under Caveats for
more information.
Clean HBase information out of ZooKeeper
[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server
zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /hbase
quit
Quitting...
5. Start HBase
6. Verify HBase contents—use the HBase shell to list tables and scan some known values.
12.4. Rollback after HDFS downgrade
If you will be performing an HDFS downgrade, then you’ll need to follow these instructions
regardless of whether ZooKeeper goes through rollback, downgrade, or reinstallation.
Requirements
•Ability to downgrade HDFS
•Pre-upgrade cluster must be able to run MapReduce jobs
•HDFS super user access
•Sufficient space in HDFS for at least two copies of the HBase data directory
Before upgrade
Before beginning the upgrade process, you must take a complete backup of HBase’s backing data.
The following instructions cover backing up the data within the current HDFS instance.
Alternatively, you can use the distcp command to copy the data to another HDFS cluster.
1. Stop the HBase cluster
2. Copy the HBase data directory to a backup location using the distcp command as the HDFS
super user (shown below on a security enabled cluster)
Using distcp to backup the HBase data directory
[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
[hpnewton@gateway_node.example.com ~]$ hadoop distcp /hbase /hbase-pre-upgrade-
backup
3. Distcp will launch a mapreduce job to handle copying the files in a distributed fashion. Check
the output of the distcp command to ensure this job completed successfully.
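distcp needs a running cluster, but the copy-then-verify pattern it is part of can be sketched with local tools; all paths here are illustrative:

```shell
rm -rf /tmp/hbase-data /tmp/hbase-pre-upgrade-backup
mkdir -p /tmp/hbase-data/table1
echo "cell" > /tmp/hbase-data/table1/hfile1
# Local stand-in for: hadoop distcp /hbase /hbase-pre-upgrade-backup
cp -r /tmp/hbase-data /tmp/hbase-pre-upgrade-backup
# Verify the copy before starting the upgrade; on HDFS you would instead check
# the distcp job counters and compare directory listings.
diff -r /tmp/hbase-data /tmp/hbase-pre-upgrade-backup && echo "backup verified"
```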
Performing a rollback
1. Stop HBase
2. Perform a downgrade for HDFS and a downgrade/rollback for ZooKeeper (HBase should remain
stopped)
3. Change the installed version of HBase to the previous version
4. Restore the HBase data directory from prior to the upgrade as the HDFS super user (shown
below on a security enabled cluster). If you backed up your data on another HDFS cluster
instead of locally, you will need to use the distcp command to copy it back to the current HDFS
cluster.
Restore the HBase data directory
[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase /hbase-upgrade-rollback
[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase-pre-upgrade-backup
/hbase
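The restore is a two-step move: set the upgraded data aside, then move the backup into place. A local stand-in for the hdfs dfs -mv commands above:

```shell
rm -rf /tmp/fs
mkdir -p /tmp/fs/hbase /tmp/fs/hbase-pre-upgrade-backup
echo "post-upgrade data" > /tmp/fs/hbase/marker
echo "pre-upgrade data" > /tmp/fs/hbase-pre-upgrade-backup/marker
# Step 1: move the upgraded data aside (kept for inspection, not deleted).
mv /tmp/fs/hbase /tmp/fs/hbase-upgrade-rollback
# Step 2: move the backup into the live location.
mv /tmp/fs/hbase-pre-upgrade-backup /tmp/fs/hbase
cat /tmp/fs/hbase/marker   # prints "pre-upgrade data"
```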
5. Clean out ZooKeeper information related to HBase. WARNING: This step will permanently
destroy all replication peers. Please see the section on HBase Replication under Caveats for
more information.
Clean HBase information out of ZooKeeper
[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server
zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /hbase
quit
Quitting...
6. Start HBase
7. Verify HBase contents—use the HBase shell to list tables and scan some known values.
Chapter 13. Upgrade Paths
13.1. Upgrading from 1.x to 2.x
In this section we will first call out significant changes compared to the prior stable HBase release
and then go over the upgrade process. Be sure to read the former with care so you avoid surprises.
13.1.1. Changes of Note!
First we’ll cover deployment / operational changes that you might hit when upgrading to HBase
2.0+. After that we’ll call out changes for downstream applications. Please note that Coprocessors
are covered in the operational section. Also note that this section is not meant to convey
information about new features that may be of interest to you. For a complete summary of changes,
please see the CHANGES.txt file in the source release artifact for the version you are planning to
upgrade to.
Update to basic prerequisite minimums in HBase 2.0+
As noted in the section Basic Prerequisites, HBase 2.0+ requires a minimum of Java 8 and Hadoop
2.6. The HBase community recommends ensuring you have already completed any needed
upgrades in prerequisites prior to upgrading your HBase version.
HBCK must match HBase server version
You must not use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied
to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+
cluster will destructively alter said cluster in unrecoverable ways.
As of HBase 2.0, HBCK is a read-only tool that can report the status of some non-public system
internals. You should not rely on the format nor content of these internals to remain consistent
across HBase releases.
Configuration settings no longer in HBase 2.0+
The following configuration settings are no longer applicable or available. For details, please see
the detailed release notes.
•hbase.config.read.zookeeper.config (see ZooKeeper configs no longer read from zoo.cfg for
migration details)
•hbase.zookeeper.useMulti (HBase now always uses ZK’s multi functionality)
•hbase.rpc.client.threads.max
•hbase.rpc.client.nativetransport
•hbase.fs.tmp.dir
•hbase.bucketcache.combinedcache.enabled
•hbase.bucketcache.ioengine no longer supports the 'heap' value.
•hbase.bulkload.staging.dir
•hbase.balancer.tablesOnMaster wasn’t removed, strictly speaking, but its meaning has
fundamentally changed and users should not set it. See the section "Master hosting regions"
feature broken and unsupported for details.
•hbase.master.distributed.log.replay See the section "Distributed Log Replay" feature broken and
removed for details
•hbase.regionserver.disallow.writes.when.recovering See the section "Distributed Log Replay"
feature broken and removed for details
•hbase.regionserver.wal.logreplay.batch.size See the section "Distributed Log Replay" feature
broken and removed for details
•hbase.master.catalog.timeout
•hbase.regionserver.catalog.timeout
•hbase.metrics.exposeOperationTimes
•hbase.metrics.showTableName
•hbase.online.schema.update.enable (HBase now always supports this)
•hbase.thrift.htablepool.size.max
Configuration properties that were renamed in HBase 2.0+
The following properties have been renamed. Attempts to set the old property will be ignored at
run time.
Table 5. Renamed properties
Old name → New name
hbase.rpc.server.nativetransport → hbase.netty.nativetransport
hbase.netty.rpc.server.worker.count → hbase.netty.worker.count
hbase.hfile.compactions.discharger.interval → hbase.hfile.compaction.discharger.interval
hbase.hregion.percolumnfamilyflush.size.lower.bound → hbase.hregion.percolumnfamilyflush.size.lower.bound.min
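A pre-upgrade sweep of your configuration for the old names can be sketched in shell; the sample hbase-site.xml and its path here are illustrative:

```shell
# Illustrative sample config containing one of the old property names.
cat > /tmp/hbase-site.xml <<'EOF'
<configuration>
  <property><name>hbase.rpc.server.nativetransport</name><value>true</value></property>
</configuration>
EOF
# Old names are silently ignored at run time in 2.0+, so flag them beforehand.
for old in hbase.rpc.server.nativetransport \
           hbase.netty.rpc.server.worker.count \
           hbase.hfile.compactions.discharger.interval \
           hbase.hregion.percolumnfamilyflush.size.lower.bound; do
  if grep -q "$old" /tmp/hbase-site.xml; then
    echo "WARNING: $old was renamed in 2.0+; update it"
  fi
done
```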
Configuration settings with different defaults in HBase 2.0+
The following configuration settings changed their default value. Where applicable, the value to set
to restore the behavior of HBase 1.2 is given.
•hbase.security.authorization now defaults to false. set to true to restore same behavior as
previous default.
•hbase.client.retries.number is now set to 10. Previously it was 35. Downstream users are
advised to use client timeouts as described in section Timeout settings instead.
•hbase.client.serverside.retries.multiplier is now set to 3. Previously it was 10. Downstream users
are advised to use client timeouts as described in section Timeout settings instead.
•hbase.master.fileSplitTimeout is now set to 10 minutes. Previously it was 30 seconds.
•hbase.regionserver.logroll.multiplier is now set to 0.5. Previously it was 0.95. This change is tied
with the following doubling of block size. Combined, these two configuration changes should
make for WALs of about the same size as those in hbase-1.x, but there should be less incidence of
small blocks because we fail to roll the WAL before we hit the blocksize threshold. See
HBASE-19148 for discussion.
•hbase.regionserver.hlog.blocksize defaults to 2x the HDFS default block size for the WAL dir.
Previously it was equal to the HDFS default block size for the WAL dir.
•hbase.client.start.log.errors.counter changed to 5. Previously it was 9.
•hbase.ipc.server.callqueue.type changed to 'fifo'. In HBase versions 1.0 - 1.2 it was 'deadline'. In
prior and later 1.x versions it already defaults to 'fifo'.
•hbase.hregion.memstore.chunkpool.maxsize is 1.0 by default. Previously it was 0.0. Effectively,
this means previously we would not use a chunk pool when our memstore is onheap, and now
we will. See the section Long GC pauses for more information about the MSLAB chunk pool.
•hbase.master.cleaner.interval is now set to 10 minutes. Previously it was 1 minute.
•hbase.master.procedure.threads now defaults to 1/4 of the number of available CPUs, but
not less than 16 threads. Previously the number of threads was equal to the number of CPUs.
•hbase.hstore.blockingStoreFiles is now 16. Previously it was 10.
•hbase.http.max.threads is now 16. Previously it was 10.
•hbase.client.max.perserver.tasks is now 2. Previously it was 5.
•hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.
•hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was
IncreasingToUpperBoundRegionSplitPolicy.
•replication.source.ratio is now 0.5. Previously it was 0.1.
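Two of the defaults above, hbase.regionserver.logroll.multiplier and hbase.regionserver.hlog.blocksize, are paired. The rough arithmetic, assuming the common 128MB HDFS block size, can be checked as follows:

```shell
hdfs_block=128                               # common HDFS default, in MB (assumption)
old_roll=$(( hdfs_block * 95 / 100 ))        # 1.x: 1x blocksize * 0.95 multiplier
new_roll=$(( hdfs_block * 2 * 50 / 100 ))    # 2.x: 2x blocksize * 0.5 multiplier
echo "1.x WAL roll threshold: ${old_roll}MB; 2.x: ${new_roll}MB"
```

Both thresholds land near the HDFS block size, which is why WALs stay about the same size while small trailing blocks become rarer.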
"Master hosting regions" feature broken and unsupported
The feature "Master acts as region server" and associated follow-on work available in HBase 1.y is
non-functional in HBase 2.y and should not be used in a production setting due to deadlock on
Master initialization. Downstream users are advised to treat related configuration settings as
experimental and the feature as inappropriate for production settings.
A brief summary of related changes:
•Master no longer carries regions by default
•hbase.balancer.tablesOnMaster is a boolean, default false (if it holds an HBase 1.x list of tables,
will default to false)
•hbase.balancer.tablesOnMaster.systemTablesOnly is boolean to keep user tables off master.
default false
•those wishing to replicate old list-of-servers config should deploy a stand-alone RegionServer
process and then rely on Region Server Groups
"Distributed Log Replay" feature broken and removed
The Distributed Log Replay feature was broken and has been removed from HBase 2.y+. As a
consequence all related configs, metrics, RPC fields, and logging have also been removed. Note that
this feature was found to be unreliable in the run up to HBase 1.0, defaulted to being unused, and
was effectively removed in HBase 1.2.0 when we started ignoring the config that turns it on
(HBASE-14465). If you are currently using the feature, be sure to perform a clean shutdown, ensure
97
all DLR work is complete, and disable the feature prior to upgrading.
prefix-tree encoding removed
The prefix-tree encoding was removed from HBase 2.0.0 (HBASE-19179). It was (late!) deprecated in
hbase-1.2.7, hbase-1.4.0, and hbase-1.3.2.
This feature was removed because it was not being actively maintained. If you are interested in
reviving this facility, which improved random read latencies at the expense of slower writes, write
the HBase developers list at dev at hbase dot apache dot org.
The prefix-tree encoding needs to be removed from all tables before upgrading to HBase 2.0+. To do
that first you need to change the encoding from PREFIX_TREE to something else that is supported in
HBase 2.0. After that you have to major compact the tables that were using PREFIX_TREE encoding
before. To check which column families are using incompatible data block encoding you can use
Pre-Upgrade Validator.
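As a sketch, removing the encoding and rewriting the data might look like the following HBase shell session; the table and column family names are illustrative, and NONE is one supported replacement encoding:

```
hbase> alter 'my_table', { NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE' }
hbase> major_compact 'my_table'
```

The major compaction rewrites the existing HFiles so no PREFIX_TREE-encoded blocks remain on disk.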
Changed metrics
The following metrics have changed names:
•Metrics previously published under the name "AssignmentManger" [sic] are now published
under the name "AssignmentManager"
The following metrics have changed their meaning:
•The metric 'blockCacheEvictionCount' published on a per-region server basis no longer includes
blocks removed from the cache due to the invalidation of the hfiles they are from (e.g. via
compaction).
•The metric 'totalRequestCount' increments once per request; previously it incremented by the
number of Actions carried in the request; e.g. if a request was a multi made of four Gets and two
Puts, we’d increment 'totalRequestCount' by six; now we increment by one regardless. Expect to
see lower values for this metric in hbase-2.0.0.
•The 'readRequestCount' now counts reads that return a non-empty row, where in older versions
of HBase we’d increment 'readRequestCount' whether or not a Result was returned. This change
will flatten the profile of the read-requests graphs if many requests are for non-existent rows. A
YCSB read-heavy workload can do this, dependent on how the database was loaded.
The following metrics have been removed:
•Metrics related to the Distributed Log Replay feature are no longer present. They were
previously found in the region server context under the name 'replay'. See the section
"Distributed Log Replay" feature broken and removed for details.
The following metrics have been added:
•'totalRowActionRequestCount' is a count of region row actions summing reads and writes.
Changed logging
HBase-2.0.0 now uses slf4j as its logging frontend. Previously, we used log4j (1.2). For most users
the transition should be seamless; slf4j does a good job interpreting log4j.properties logging
configuration files such that you should not notice any difference in your log system emissions.
That said, your log4j.properties may need freshening. See HBASE-20351 for an example, where a
stale log configuration file manifested as netty configuration being dumped at DEBUG level as
preamble on every shell command invocation.
ZooKeeper configs no longer read from zoo.cfg
HBase no longer optionally reads the 'zoo.cfg' file for ZooKeeper related configuration settings. If
you previously relied on the 'hbase.config.read.zookeeper.config' config for this functionality, you
should migrate any needed settings to the hbase-site.xml file while adding the prefix
'hbase.zookeeper.property.' to each property name.
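The mechanical part of that migration can be sketched in shell; the sample zoo.cfg entries below are illustrative:

```shell
# Illustrative zoo.cfg settings to migrate.
cat > /tmp/zoo.cfg <<'EOF'
tickTime=2000
dataDir=/var/lib/zookeeper
EOF
# Emit each setting as an hbase-site.xml property with the required prefix.
while IFS='=' read -r key value; do
  printf '<property><name>hbase.zookeeper.property.%s</name><value>%s</value></property>\n' \
    "$key" "$value"
done < /tmp/zoo.cfg
```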
Changes in permissions
The following permission related changes either altered semantics or defaults:
•Permissions granted to a user now merge with existing permissions for that user, rather than
over-writing them. (see the release note on HBASE-17472 for details)
•Region Server Group commands (added in 1.4.0) now require admin privileges.
Most Admin APIs don’t work against an HBase 2.0+ cluster from pre-HBase 2.0 clients
A number of admin commands are known to not work when used from a pre-HBase 2.0 client. This
includes an HBase Shell that has the library jars from pre-HBase 2.0. You will need to plan for an
outage of use of admin APIs and commands until you can also update to the needed client version.
The following client operations do not work against HBase 2.0+ cluster when executed from a pre-
HBase 2.0 client:
•list_procedures
•split
•merge_region
•list_quotas
•enable_table_replication
•disable_table_replication
•Snapshot related commands
Admin commands deprecated in 1.0 have been removed.
The following commands that were deprecated in 1.0 have been removed. Where applicable the
replacement command is listed.
•The 'hlog' command has been removed. Downstream users should rely on the 'wal' command
instead.
Region Server memory consumption changes.
Users upgrading from versions prior to HBase 1.4 should read the instructions in the section Region
Server memory consumption changes.
Additionally, HBase 2.0 has changed how memstore memory is tracked for flushing decisions.
Previously, both the data size and overhead for storage were used to calculate utilization against
the flush threshold. Now, only data size is used to make these per-region decisions. Globally the
addition of the storage overhead is used to make decisions about forced flushes.
Web UI for splitting and merging operate on row prefixes
Previously, the Web UI included functionality on table status pages to merge or split based on an
encoded region name. In HBase 2.0, this functionality instead works by taking a row prefix.
Special upgrading for Replication users from pre-HBase 1.4
Users running versions of HBase prior to the 1.4.0 release that make use of replication should be
sure to read the instructions in the section Replication peer’s TableCFs config.
HBase shell changes
The HBase shell command relies on a bundled JRuby instance. This bundled JRuby has been
updated from version 1.6.8 to version 9.1.10.0. This represents a change from Ruby 1.8 to Ruby
2.3.3, which introduces non-compatible language changes for user scripts.
The HBase shell command now ignores the '--return-values' flag that was present in early HBase 1.4
releases. Instead the shell always behaves as though that flag were passed. If you wish to avoid
having expression results printed in the console you should alter your IRB configuration as noted in
the section irbrc.
Coprocessor APIs have changed in HBase 2.0+
All Coprocessor APIs have been refactored to improve supportability around binary API
compatibility for future versions of HBase. If you or applications you rely on have custom HBase
coprocessors, you should read the release notes for HBASE-18169 for details of changes you will
need to make prior to upgrading to HBase 2.0+.
For example, if you had a BaseRegionObserver in HBase 1.2 then at a minimum you will need to
update it to implement both RegionObserver and RegionCoprocessor and add the method
...
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }
...
HBase 2.0+ can no longer write HFile v2 files.
HBase has simplified our internal HFile handling. As a result, we can no longer write HFile versions
earlier than the default of version 3. Upgrading users should ensure that hfile.format.version is not
set to 2 in hbase-site.xml before upgrading. Failing to do so will cause Region Server failure. HBase
can still read HFiles written in the older version 2 format.
HBase 2.0+ can no longer read Sequence File based WAL files.
HBase can no longer read the deprecated WAL files written in the Apache Hadoop Sequence File
format. The hbase.regionserver.hlog.reader.impl and hbase.regionserver.hlog.writer.impl
configuration entries should be set to use the Protobuf based WAL reader / writer classes. This
implementation has been the default since HBase 0.96, so legacy WAL files should not be a concern
for most downstream users.
A clean cluster shutdown should ensure there are no WAL files. If you are unsure of a given WAL
file’s format you can use the hbase wal command to parse files while the HBase cluster is offline. In
HBase 2.0+, this command will not be able to read a Sequence File based WAL. For more
information on the tool see the section WALPrettyPrinter.
Change in behavior for filters
The Filter ReturnCode NEXT_ROW has been redefined as skipping to the next row in the current
family, not to the next row in all families. This is more reasonable, because ReturnCode is a concept
at the store level, not the region level.
Downstream HBase 2.0+ users should use the shaded client
Downstream users are strongly urged to rely on the Maven coordinates org.apache.hbase:hbase-
shaded-client for their runtime use. This artifact contains all the needed implementation details for
talking to an HBase cluster while minimizing the number of third party dependencies exposed.
Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g.
o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public
API. Those classes are included so that they can be altered to use the same relocated third party
dependencies as the rest of the HBase client code. In the event that you need to also use Hadoop in
your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
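For example, a Maven dependency on the shaded client might be declared as follows; the version shown is illustrative, so use the release you are upgrading to:

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>2.0.0</version>
</dependency>
```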
Downstream HBase 2.0+ users of MapReduce must switch to new artifact
Downstream users of HBase’s integration for Apache Hadoop MapReduce must switch to relying on
the org.apache.hbase:hbase-shaded-mapreduce module for their runtime use. Historically,
downstream users relied on either the org.apache.hbase:hbase-server or org.apache.hbase:hbase-
shaded-server artifacts for these classes. Both uses are no longer supported and in the vast majority
of cases will fail at runtime.
Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g.
o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public
API. Those classes are included so that they can be altered to use the same relocated third party
dependencies as the rest of the HBase client code. In the event that you need to also use Hadoop in
your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
Significant changes to runtime classpath
A number of internal dependencies for HBase were updated or removed from the runtime
classpath. Downstream client users who do not follow the guidance in Downstream HBase 2.0+
users should use the shaded client will have to examine the set of dependencies Maven pulls in for
impact. Downstream users of LimitedPrivate Coprocessor APIs will need to examine the runtime
environment for impact. For details on our new handling of third party libraries that have
historically been a problem with respect to harmonizing compatible runtime versions, see the
reference guide section The hbase-thirdparty dependency and shading/relocation.
Multiple breaking changes to source and binary compatibility for client API
The Java client API for HBase has a number of changes that break both source and binary
compatibility. For details, see the Compatibility Check Report for the release you’ll be upgrading to.
Tracing implementation changes
The backing implementation of HBase’s tracing features was updated from Apache HTrace 3 to
HTrace 4, which includes several breaking changes. While HTrace 3 and 4 can coexist in the same
runtime, they will not integrate with each other, leading to disjoint trace information.
The internal changes to HBase during this upgrade were sufficient for compilation, but it has not
been confirmed that there are no regressions in tracing functionality. Please consider this feature
experimental for the immediate future.
If you previously relied on client side tracing integrated with HBase operations, it is recommended
that you upgrade your usage to HTrace 4 as well.
Performance
You will likely see a change in the performance profile on upgrade to hbase-2.0.0 given read and
write paths have undergone significant change. On release, writes may be slower with reads about
the same or much better, dependent on context. Be prepared to spend time re-tuning (See Apache
HBase Performance Tuning). Performance is also an area that is now under active review so look
forward to improvement in coming releases (See HBASE-20188 TESTING Performance).
13.1.2. Upgrading Coprocessors to 2.0
Coprocessors have changed substantially in 2.0 ranging from top level design changes in class
hierarchies to changed/removed methods, interfaces, etc. (Parent jira: HBASE-18169 Coprocessor fix
and cleanup before 2.0.0 release). Some of the reasons for such widespread changes:
1. Pass Interfaces instead of Implementations; e.g. TableDescriptor instead of HTableDescriptor
and Region instead of HRegion (HBASE-18241 Change client.Table and client.Admin to not use
HTableDescriptor).
2. Design refactor so implementers need to fill out less boilerplate and so we can do more compile-
time checking (HBASE-17732)
3. Purge Protocol Buffers from Coprocessor API (HBASE-18859, HBASE-16769, etc)
4. Cut back on what we expose to Coprocessors, removing hooks on internals that were too private
to expose (e.g. HBASE-18453 CompactionRequest should not be exposed to user directly;
HBASE-18298 RegionServerServices Interface cleanup for CP expose; etc.)
To use coprocessors in 2.0, they should be rebuilt against the new API; otherwise they will fail to
load and HBase processes will die.
Suggested order of changes to upgrade the coprocessors:
1. Directly implement observer interfaces instead of extending Base*Observer classes. Change Foo
extends BaseXXXObserver to Foo implements XXXObserver. (HBASE-17312).
2. Adapt to the design change from Inheritance to Composition (HBASE-17732) by following this
example.
3. getTable() has been removed from the CoprocessorEnvironment; coprocessors should self-
manage Table instances.
Some examples of writing coprocessors with the new API can be found in the hbase-examples
module.
Lastly, if an API has been changed or removed in a way that breaks you irreparably, and if there’s a
good justification to add it back, bring it to our notice (dev@hbase.apache.org).
13.1.3. Rolling Upgrade from 1.x to 2.x
Rolling upgrades are currently an experimental feature. They have had limited testing. There are
likely corner cases as yet uncovered in our limited experience so you should be careful if you go
this route. The stop/upgrade/start described in the next section, Upgrade process from 1.x to 2.x, is
the safest route.
That said, the below is a prescription for a rolling upgrade of a 1.4 cluster.
Pre-Requirements
•Upgrade to the latest 1.4.x release. Pre 1.4 releases may also work but are not tested, so please
upgrade to 1.4.3+ before upgrading to 2.x, unless you are an expert and familiar with the region
assignment and crash processing. See the section Upgrading from pre-1.4 to 1.4+ on how to
upgrade to 1.4.x.
•Make sure that the zk-less assignment is enabled, i.e., set hbase.assignment.usezk to false. This is
the most important thing. It allows the 1.x master to assign/unassign regions to/from 2.x region
servers. See the release note section of HBASE-11059 on how to migrate from zk based
assignment to zk less assignment.
•We have tested rolling upgrading from 1.4.3 to 2.1.0, but it should also work if you want to
upgrade to 2.0.x.
Instructions
1. Unload a region server and upgrade it to 2.1.0. With HBASE-17931 in place, the meta region and
regions for other system tables will be moved to this region server immediately. If not, please
move them manually to the new region server. This is very important because
◦The schema of the meta region is hard coded; if meta is on an old region server, then the new
region servers cannot access it, as it does not have some families, for example, table state.
◦A client with a lower version can communicate with a server with a higher version, but not
vice versa. If the meta region is on an old region server, the new region servers would have to
use a client with a higher version to communicate with a server with a lower version, which
may introduce strange problems.
2. Rolling upgrade all other region servers.
3. Upgrading masters.
It is OK if region servers crash during the rolling upgrade. The 1.x master can assign regions to both 1.x and 2.x region servers, and HBASE-19166 fixed a problem so that a 1.x region server can also read and split the WALs written by a 2.x region server.
Please read the Changes of Note! section carefully before rolling upgrading. Make
sure that you do not use features removed in 2.0, for example the prefix-tree
encoding or the old HFile format. Either could fail the upgrade and leave the
cluster in an intermediate state that is hard to recover from.
If you have success running this prescription, please notify the dev list with a note
on your experience and/or update the above with any deviations you may have
taken so others going this route can benefit from your efforts.
13.1.4. Upgrade process from 1.x to 2.x
To upgrade an existing HBase 1.x cluster, you should:
•Clean shutdown of existing 1.x cluster
•Update coprocessors
•Upgrade Master roles first
•Upgrade RegionServers
•(Eventually) Upgrade Clients
13.2. Upgrading from pre-1.4 to 1.4+
13.2.1. Region Server memory consumption changes.
Users upgrading from versions prior to HBase 1.4 should be aware that the estimates of heap usage
by the memstore objects (KeyValue, object and array header sizes, etc.) have been made more
accurate for heap sizes up to 32G (using CompressedOops), resulting in the estimates dropping by
10-50% in practice. This also results in fewer flushes and compactions, because of "fatter" flushes.
YMMV. As a result, the actual heap usage of the memstore before it is flushed may increase by up to
100%. If configured memory limits for the region server had been tuned based on observed usage,
this change could result in worse GC behavior or even OutOfMemory errors. Set the environment
property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to disable this change.
13.2.2. Replication peer’s TableCFs config
Before 1.4, the table name could not include a namespace in a replication peer’s TableCFs config.
This was fixed by adding TableCFs to ReplicationPeerConfig, which is stored on ZooKeeper. So when
upgrading to 1.4, you have to update the original ReplicationPeerConfig data on ZooKeeper first.
There are four steps to upgrade when your cluster has a replication peer with TableCFs config.
•Disable the replication peer.
•If the master has permission to write the replication peer znode, then rolling-update the master
directly. If not, use the TableCFsUpdater tool to update the replication peer’s config:
$ bin/hbase org.apache.hadoop.hbase.replication.master.TableCFsUpdater update
•Rolling-update the region servers.
•Enable the replication peer.
Notes:
•You cannot use an old client (before 1.4) to change the replication peer’s config, because the
client writes the config to ZooKeeper directly. An old client will miss the TableCFs config, and it
will write the TableCFs config to the old tablecfs znode, which will not work for new-version
region servers.
13.2.3. Raw scan now ignores TTL
Doing a raw scan will now return results that have expired according to TTL settings.
13.3. Upgrading to 1.x
Please consult the documentation published specifically for the version of HBase that you are
upgrading to for details on the upgrade process.
The Apache HBase Shell
The Apache HBase Shell is (J)Ruby's IRB with some HBase particular commands added. Anything
you can do in IRB, you should be able to do in the HBase Shell.
To run the HBase shell, do as follows:
$ ./bin/hbase shell
Type help and then <RETURN> to see a listing of shell commands and options. Browse at least the
paragraphs at the end of the help output for the gist of how variables and command arguments are
entered into the HBase shell; in particular note how table names, rows, and columns, etc., must be
quoted.
See shell exercises for example basic shell operation.
Here is a nicely formatted listing of all shell commands by Rajeshbabu Chintaguntla.
Chapter 14. Scripting with Ruby
For examples of scripting Apache HBase, look in the HBase bin directory. Look at the files that end in
*.rb. To run one of these files, do as follows:
$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT
Chapter 15. Running the Shell in Non-
Interactive Mode
A new non-interactive mode has been added to the HBase Shell (HBASE-11658). Non-interactive
mode captures the exit status (success or failure) of HBase Shell commands and passes that status
back to the command interpreter. If you use the normal interactive mode, the HBase Shell will only
ever return its own exit status, which will nearly always be 0 for success.
To invoke non-interactive mode, pass the -n or --non-interactive option to HBase Shell.
Chapter 16. HBase Shell in OS Scripts
You can use the HBase shell from within operating system script interpreters like the Bash shell
which is the default command interpreter for most Linux and UNIX distributions. The following
guidelines use Bash syntax, but could be adjusted to work with C-style shells such as csh or tcsh,
and could probably be modified to work with the Microsoft Windows script interpreter as well.
Submissions are welcome.
Spawning HBase Shell commands in this way is slow, so keep that in mind when
you are deciding whether combining HBase operations with the operating system
command line is appropriate.
Example 4. Passing Commands to the HBase Shell
You can pass commands to the HBase Shell in non-interactive mode (see
hbase.shell.noninteractive) using the echo command and the | (pipe) operator. Be sure to
escape characters in the HBase commands which would otherwise be interpreted by the shell.
Some debug-level output has been truncated from the example below.
$ echo "describe 'test1'" | ./hbase shell -n
Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31
19:56:09 PDT 2014
describe 'test1'
DESCRIPTION ENABLED
 'test1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NON true
 E', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
  VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIO
 NS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =>
 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false'
 , BLOCKCACHE => 'true'}
1 row(s) in 3.2410 seconds
To suppress all output, echo it to /dev/null:
$ echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1
Example 5. Checking the Result of a Scripted Command
Since scripts are not designed to be run interactively, you need a way to check whether your
command failed or succeeded. The HBase shell uses the standard convention of returning a
value of 0 for successful commands, and some non-zero value for failed commands. Bash
stores a command’s return value in a special environment variable called $?. Because that
variable is overwritten each time the shell runs any command, you should store the result in a
different, script-defined variable.
This is a naive script that shows one way to store the return value and make a decision based
upon it.
#!/bin/bash
echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1
status=$?
echo "The status was " $status
if [ $status -eq 0 ]; then
  echo "The command succeeded"
else
  echo "The command may have failed."
fi
exit $status
16.1. Checking for Success or Failure In Scripts
Getting an exit code of 0 means that the command you scripted definitely succeeded. However,
getting a non-zero exit code does not necessarily mean the command failed. The command could
have succeeded, but the client lost connectivity, or some other event obscured its success. This is
because RPC commands are stateless. The only way to be sure of the status of an operation is to
check. For instance, if your script creates a table, but returns a non-zero exit value, you should
check whether the table was actually created before trying again to create it.
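The same exit-status convention can also be exercised from a Ruby script, which may be more convenient than bash if you are already scripting with JRuby. The sketch below is an illustration only: it uses `sh -c 'exit 7'` as a stand-in for a real `echo ... | ./hbase shell -n` pipeline, since the point here is just how the exit status is captured and tested.

```ruby
# Run a command and capture its exit status. In a real script, the stand-in
# below would be replaced by the pipeline from the example above, e.g.:
#   system(%q{echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1})
system('sh', '-c', 'exit 7')   # stand-in for a failing HBase Shell pipeline
status = $?.exitstatus         # 7

if status == 0
  puts 'The command succeeded'
else
  puts 'The command may have failed.'
end
```

Remember the caveat above: a zero status is definitive, but a non-zero status still needs to be followed up with an explicit check.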
Chapter 17. Read HBase Shell Commands
from a Command File
You can enter HBase Shell commands into a text file, one command per line, and pass that file to
the HBase Shell.
Example Command File
create 'test', 'cf'
list 'test'
put 'test', 'row1', 'cf:a', 'value1'
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
put 'test', 'row4', 'cf:d', 'value4'
scan 'test'
get 'test', 'row1'
disable 'test'
enable 'test'
Example 6. Directing HBase Shell to Execute the Commands
Pass the path to the command file as the only argument to the hbase shell command. Each
command is executed and its output is shown. If you do not include the exit command in your
script, you are returned to the HBase shell prompt. There is no way to programmatically check
each individual command for success or failure. Also, though you see the output for each
command, the commands themselves are not echoed to the screen so it can be difficult to line
up the command with its output.
$ ./hbase shell ./sample_commands.txt
0 row(s) in 3.4170 seconds
TABLE
test
1 row(s) in 0.0590 seconds
0 row(s) in 0.1540 seconds
0 row(s) in 0.0080 seconds
0 row(s) in 0.0060 seconds
0 row(s) in 0.0060 seconds
ROW COLUMN+CELL
 row1 column=cf:a, timestamp=1407130286968, value=value1
 row2 column=cf:b, timestamp=1407130286997, value=value2
 row3 column=cf:c, timestamp=1407130287007, value=value3
 row4 column=cf:d, timestamp=1407130287015, value=value4
4 row(s) in 0.0420 seconds
COLUMN CELL
 cf:a timestamp=1407130286968, value=value1
1 row(s) in 0.0110 seconds
0 row(s) in 1.5630 seconds
0 row(s) in 0.4360 seconds
Chapter 18. Passing VM Options to the Shell
You can pass VM options to the HBase Shell using the HBASE_SHELL_OPTS environment variable. You
can set this in your environment, for instance by editing ~/.bashrc, or set it as part of the command
to launch HBase Shell. The following example sets several garbage-collection-related variables, just
for the lifetime of the VM running the HBase Shell. The command should be run all on a single line,
but is broken by the \ character, for readability.
$ HBASE_SHELL_OPTS="-verbose:gc -XX:+PrintGCApplicationStoppedTime \
  -XX:+PrintGCDateStamps \
  -XX:+PrintGCDetails -Xloggc:$HBASE_HOME/logs/gc-hbase.log" ./bin/hbase shell
Chapter 19. Shell Tricks
19.1. Table variables
HBase 0.95 adds shell commands that provide jruby-style object-oriented references for tables.
Previously, all of the shell commands that act upon a table had a procedural style that always took
the name of the table as an argument. HBase 0.95 introduces the ability to assign a table to a jruby
variable. The table reference can be used to perform data read/write operations such as puts, scans,
and gets, as well as admin functionality such as disabling, dropping, and describing tables.
For example, previously you would always specify a table name:
hbase(main):000:0> create 't', 'f'
0 row(s) in 1.0970 seconds
hbase(main):001:0> put 't', 'rold', 'f', 'v'
0 row(s) in 0.0080 seconds
hbase(main):002:0> scan 't'
ROW COLUMN+CELL
 rold column=f:, timestamp=1378473207660, value=v
1 row(s) in 0.0130 seconds
hbase(main):003:0> describe 't'
DESCRIPTION
ENABLED
 't', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_
true
 SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2
 147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false
 ', BLOCKCACHE => 'true'}
1 row(s) in 1.4430 seconds
hbase(main):004:0> disable 't'
0 row(s) in 14.8700 seconds
hbase(main):005:0> drop 't'
0 row(s) in 23.1670 seconds
hbase(main):006:0>
Now you can assign the table to a variable and use the results in jruby shell code.
hbase(main):007 > t = create 't', 'f'
0 row(s) in 1.0970 seconds
=> Hbase::Table - t
hbase(main):008 > t.put 'r', 'f', 'v'
0 row(s) in 0.0640 seconds
hbase(main):009 > t.scan
ROW COLUMN+CELL
 r column=f:, timestamp=1331865816290, value=v
1 row(s) in 0.0110 seconds
hbase(main):010:0> t.describe
DESCRIPTION
ENABLED
 't', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_
true
 SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2
 147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false
 ', BLOCKCACHE => 'true'}
1 row(s) in 0.0210 seconds
hbase(main):038:0> t.disable
0 row(s) in 6.2350 seconds
hbase(main):039:0> t.drop
0 row(s) in 0.2340 seconds
If the table has already been created, you can assign a Table to a variable by using the get_table
method:
hbase(main):011 > create 't','f'
0 row(s) in 1.2500 seconds
=> Hbase::Table - t
hbase(main):012:0> tab = get_table 't'
0 row(s) in 0.0010 seconds
=> Hbase::Table - t
hbase(main):013:0> tab.put 'r1' ,'f', 'v'
0 row(s) in 0.0100 seconds
hbase(main):014:0> tab.scan
ROW COLUMN+CELL
 r1 column=f:, timestamp=1378473876949, value=v
1 row(s) in 0.0240 seconds
hbase(main):015:0>
The list functionality has also been extended so that it returns a list of table names as strings. You
can then use jruby to script table operations based on these names. The list_snapshots command
also acts similarly.
hbase(main):016 > tables = list('t.*')
TABLE
t
1 row(s) in 0.1040 seconds
=> #<#<Class:0x7677ce29>:0x21d377a4>
hbase(main):017:0> tables.map { |t| disable t ; drop t}
0 row(s) in 2.2510 seconds
=> [nil]
hbase(main):018:0>
19.2. irbrc
Create an .irbrc file for yourself in your home directory and add customizations. A useful one is
command history, so commands are saved across Shell invocations:
$ more .irbrc
require 'irb/ext/save-history'
IRB.conf[:SAVE_HISTORY] = 100
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"
If you’d like to avoid printing the result of evaluating each expression to stderr, for example the
array of tables returned from the "list" command:
$ echo "IRB.conf[:ECHO] = false" >>~/.irbrc
See the ruby documentation of .irbrc to learn about other possible configurations.
19.3. LOG data to timestamp
To convert the date '08/08/16 20:56:29' from an HBase log into a timestamp, do:
hbase(main):021:0> import java.text.SimpleDateFormat
hbase(main):022:0> import java.text.ParsePosition
hbase(main):023:0> SimpleDateFormat.new("yy/MM/dd HH:mm:ss").parse("08/08/16
20:56:29", ParsePosition.new(0)).getTime()
=> 1218920189000
To go the other direction:
hbase(main):021:0> import java.util.Date
hbase(main):022:0> Date.new(1218920189000).toString()
=> "Sat Aug 16 20:56:29 UTC 2008"
Outputting in a format exactly like that of the HBase log format takes a little messing with
SimpleDateFormat.
19.4. Query Shell Configuration
hbase(main):001:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
=> "60000"
To set a config in the shell:
hbase(main):005:0> @shell.hbase.configuration.setInt("hbase.rpc.timeout", 61010)
hbase(main):006:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
=> "61010"
19.5. Pre-splitting tables with the HBase Shell
You can use a variety of options to pre-split tables when creating them via the HBase Shell create
command.
The simplest approach is to specify an array of split points when creating the table. Note that when
specifying string literals as split points, these will create split points based on the underlying byte
representation of the string. So when specifying a split point of '10', we are actually specifying the
byte split point '\x31\x30'.
The split points will define n+1 regions where n is the number of split points. The lowest region will
contain all keys from the lowest possible key up to but not including the first split point key. The
next region will contain keys from the first split point up to, but not including the next split point
key. This will continue for all split points up to the last. The last region will be defined from the last
split point up to the maximum possible key.
hbase>create 't1','f',SPLITS => ['10','20','30']
In the above example, the table 't1' will be created with column family 'f', pre-split to four regions.
Note the first region will contain all keys from the lowest possible key up to, but not including, '10'
(byte-wise '\x31\x30', as '\x31' is the ASCII code for '1').
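The n+1 rule can be sketched in a few lines of plain Ruby. This is an illustration only; `regions_for` is a name invented here, not part of the HBase Shell.

```ruby
# Given n split points, derive the n+1 [start, stop) region boundary pairs.
# The empty string stands in for the open lowest/highest possible key.
def regions_for(splits)
  ([''] + splits).zip(splits + [''])
end

regions_for(['10', '20', '30'])
# => [["", "10"], ["10", "20"], ["20", "30"], ["30", ""]]
```

Each pair reads as "keys from start (inclusive) up to stop (exclusive)", matching the description above.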
You can pass the split points in a file using the following variation. In this example, the splits are
read from a file at the given path on the local filesystem. Each line in the file specifies a split point
key.
hbase>create 't14','f',SPLITS_FILE=>'splits.txt'
The other options are to automatically compute splits based on a desired number of regions and a
splitting algorithm. HBase supplies algorithms for splitting the key range based on uniform splits or
based on hexadecimal keys, but you can provide your own splitting algorithm to subdivide the key
range.
# create table with four regions based on random bytes keys
hbase>create 't2','f1', { NUMREGIONS => 4 , SPLITALGO => 'UniformSplit' }
# create table with five regions based on hex keys
hbase>create 't3','f1', { NUMREGIONS => 5, SPLITALGO => 'HexStringSplit' }
As the HBase Shell is effectively a Ruby environment, you can use simple Ruby scripts to compute
splits algorithmically.
# generate splits for long (Ruby fixnum) key range from start to end key
hbase(main):070:0> def gen_splits(start_key,end_key,num_regions)
hbase(main):071:1> results=[]
hbase(main):072:1> range=end_key-start_key
hbase(main):073:1> incr=(range/num_regions).floor
hbase(main):074:1> for i in 1 .. num_regions-1
hbase(main):075:2> results.push([i*incr+start_key].pack("N"))
hbase(main):076:2> end
hbase(main):077:1> return results
hbase(main):078:1> end
hbase(main):079:0>
hbase(main):080:0> splits=gen_splits(1,2000000,10)
=> ["\000\003\r@", "\000\006\032\177", "\000\t'\276", "\000\f4\375", "\000\017B<",
"\000\022O{", "\000\025\\\272", "\000\030i\371", "\000\ew8"]
hbase(main):081:0> create 'test_splits','f',SPLITS=>splits
0 row(s) in 0.2670 seconds
=> Hbase::Table - test_splits
Note that the HBase Shell command truncate effectively drops and recreates the table with default
options which will discard any pre-splitting. If you need to truncate a pre-split table, you must drop
and recreate the table explicitly to re-specify custom split options.
19.6. Debug
19.6.1. Shell debug switch
You can set a debug switch in the shell to see more output — e.g. more of the stack trace on
exception — when you run a command:
hbase> debug <RETURN>
19.6.2. DEBUG log level
To enable DEBUG level logging in the shell, launch it with the -d option.
$ ./bin/hbase shell -d
19.7. Commands
19.7.1. count
The count command returns the number of rows in a table. It’s quite fast when configured with the
right CACHE:
hbase> count '<tablename>', CACHE => 1000
The above count fetches 1000 rows at a time. Set CACHE lower if your rows are big. The default is to
fetch one row at a time.
Data Model
In HBase, data is stored in tables, which have rows and columns. This is a terminology overlap with
relational databases (RDBMSs), but this is not a helpful analogy. Instead, it can be helpful to think of
an HBase table as a multi-dimensional map.
HBase Data Model Terminology
Table
An HBase table consists of multiple rows.
Row
A row in HBase consists of a row key and one or more columns with values associated with
them. Rows are sorted alphabetically by the row key as they are stored. For this reason, the
design of the row key is very important. The goal is to store data in such a way that related rows
are near each other. A common row key pattern is a website domain. If your row keys are
domains, you should probably store them in reverse (org.apache.www, org.apache.mail,
org.apache.jira). This way, all of the Apache domains are near each other in the table, rather
than being spread out based on the first letter of the subdomain.
Column
A column in HBase consists of a column family and a column qualifier, which are delimited by a
: (colon) character.
Column Family
Column families physically colocate a set of columns and their values, often for performance
reasons. Each column family has a set of storage properties, such as whether its values should be
cached in memory, how its data is compressed or its row keys are encoded, and others. Each row
in a table has the same column families, though a given row might not store anything in a given
column family.
Column Qualifier
A column qualifier is added to a column family to provide the index for a given piece of data.
Given a column family content, a column qualifier might be content:html, and another might be
content:pdf. Though column families are fixed at table creation, column qualifiers are mutable
and may differ greatly between rows.
Cell
A cell is a combination of row, column family, and column qualifier, and contains a value and a
timestamp, which represents the value’s version.
Timestamp
A timestamp is written alongside each value, and is the identifier for a given version of a value.
By default, the timestamp represents the time on the RegionServer when the data was written,
but you can specify a different timestamp value when you put data into the cell.
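The reversed-domain advice in the Row definition above is easy to see with a few lines of plain Ruby (an illustration of the sorting behavior, not HBase API code): reversing the dot-separated components makes all rows for one organization sort adjacently.

```ruby
domains = ['www.apache.org', 'mail.apache.org', 'jira.apache.org',
           'lists.example.com', 'www.example.com']

# Stored as-is, the apache rows scatter by subdomain:
as_is = domains.sort
# => ["jira.apache.org", "lists.example.com", "mail.apache.org",
#     "www.apache.org", "www.example.com"]

# Reversed, the apache rows cluster together under the org.apache. prefix:
reversed = domains.map { |d| d.split('.').reverse.join('.') }.sort
# => ["com.example.lists", "com.example.www", "org.apache.jira",
#     "org.apache.mail", "org.apache.www"]
```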
Chapter 20. Conceptual View
You can read a very understandable explanation of the HBase data model in the blog post
Understanding HBase and BigTable by Jim R. Wilson. Another good explanation is available in the
PDF Introduction to Basic Schema Design by Amandeep Khurana.
It may help to read different perspectives to get a solid understanding of HBase schema design. The
linked articles cover the same ground as the information in this section.
The following example is a slightly modified form of the one on page 2 of the BigTable paper. There
is a table called webtable that contains two rows (com.cnn.www and com.example.www) and three
column families named contents, anchor, and people. In this example, for the first row (com.cnn.www),
anchor contains two columns (anchor:cnnsi.com, anchor:my.look.ca) and contents contains one
column (contents:html). This example contains 5 versions of the row with the row key com.cnn.www,
and one version of the row with the row key com.example.www. The contents:html column qualifier
contains the entire HTML of a given website. Qualifiers of the anchor column family each contain
the external site which links to the site represented by the row, along with the text it used in the
anchor of its link. The people column family represents people associated with the site.
Column Names
By convention, a column name is made of its column family prefix and a qualifier.
For example, the column contents:html is made up of the column family contents
and the html qualifier. The colon character (:) delimits the column family from the
column family qualifier.
Table 6. Table webtable

Row Key             Time Stamp  ColumnFamily contents       ColumnFamily anchor            ColumnFamily people
"com.cnn.www"       t9                                      anchor:cnnsi.com = "CNN"
"com.cnn.www"       t8                                      anchor:my.look.ca = "CNN.com"
"com.cnn.www"       t6          contents:html = "<html>…"
"com.cnn.www"       t5          contents:html = "<html>…"
"com.cnn.www"       t3          contents:html = "<html>…"
"com.example.www"   t5          contents:html = "<html>…"                                  people:author = "John Doe"
Cells in this table that appear to be empty do not take space, or in fact exist, in HBase. This is what
makes HBase "sparse." A tabular view is not the only possible way to look at data in HBase, or even
the most accurate. The following represents the same information as a multi-dimensional map. This
is only a mock-up for illustrative purposes and may not be strictly accurate.
{
  "com.cnn.www": {
    contents: {
      t6: contents:html: "<html>..."
      t5: contents:html: "<html>..."
      t3: contents:html: "<html>..."
    }
    anchor: {
      t9: anchor:cnnsi.com = "CNN"
      t8: anchor:my.look.ca = "CNN.com"
    }
    people: {}
  }
  "com.example.www": {
    contents: {
      t5: contents:html: "<html>..."
    }
    anchor: {}
    people: {
      t5: people:author: "John Doe"
    }
  }
}
Chapter 21. Physical View
Although at a conceptual level tables may be viewed as a sparse set of rows, they are physically
stored by column family. A new column qualifier (column_family:column_qualifier) can be added
to an existing column family at any time.
Table 7. ColumnFamily anchor
Row Key Time Stamp Column Family anchor
"com.cnn.www" t9 anchor:cnnsi.com = "CNN"
"com.cnn.www" t8 anchor:my.look.ca = "CNN.com"
Table 8. ColumnFamily contents
Row Key Time Stamp ColumnFamily contents
"com.cnn.www" t6 contents:html = "<html>…"
"com.cnn.www" t5 contents:html = "<html>…"
"com.cnn.www" t3 contents:html = "<html>…"
The empty cells shown in the conceptual view are not stored at all. Thus a request for the value of
the contents:html column at time stamp t8 would return no value. Similarly, a request for an
anchor:my.look.ca value at time stamp t9 would return no value. However, if no timestamp is
supplied, the most recent value for a particular column would be returned. Given multiple
versions, the most recent is also the first one found, since timestamps are stored in descending
order. Thus a request for the values of all columns in the row com.cnn.www if no timestamp is
specified would be: the value of contents:html from timestamp t6, the value of anchor:cnnsi.com
from timestamp t9, the value of anchor:my.look.ca from timestamp t8.
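The retrieval rules just described can be modeled with a toy Ruby hash keyed by timestamp. This is a mock-up of the semantics only, not the HBase client API; the method names are invented here.

```ruby
# contents:html cells of row com.cnn.www, keyed by timestamp
cells = { 6 => '<html>t6', 5 => '<html>t5', 3 => '<html>t3' }

# A request at an exact timestamp returns a value only if a cell exists there.
def get_at(cells, ts)
  cells[ts]
end

# With no timestamp, the most recent version wins; since timestamps are stored
# in descending order, it is also the first one found.
def get_latest(cells)
  cells[cells.keys.max]
end

get_at(cells, 8)   # => nil: no contents:html cell at t8
get_latest(cells)  # => "<html>t6"
```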
For more information about the internals of how Apache HBase stores data, see regions.arch.
Chapter 22. Namespace
A namespace is a logical grouping of tables analogous to a database in relational database systems.
This abstraction lays the groundwork for upcoming multi-tenancy related features:
•Quota Management (HBASE-8410) - Restrict the amount of resources (i.e. regions, tables) a
namespace can consume.
•Namespace Security Administration (HBASE-9206) - Provide another level of security
administration for tenants.
•Region server groups (HBASE-6721) - A namespace/table can be pinned onto a subset of
RegionServers thus guaranteeing a coarse level of isolation.
22.1. Namespace management
A namespace can be created, removed or altered. Namespace membership is determined during
table creation by specifying a fully-qualified table name of the form:
<table namespace>:<table qualifier>
Example 7. Examples
#Create a namespace
create_namespace 'my_ns'
#create my_table in my_ns namespace
create 'my_ns:my_table', 'fam'
#drop namespace
drop_namespace 'my_ns'
#alter namespace
alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}
22.2. Predefined namespaces
There are two predefined special namespaces:
•hbase - system namespace, used to contain HBase internal tables
•default - tables with no explicit specified namespace will automatically fall into this namespace
Example 8. Examples
#namespace=foo and table qualifier=bar
create 'foo:bar', 'fam'
#namespace=default and table qualifier=bar
create 'bar', 'fam'
Chapter 23. Table
Tables are declared up front at schema definition time.
Chapter 24. Row
Row keys are uninterpreted bytes. Rows are lexicographically sorted, with the lowest order
appearing first in a table. The empty byte array is used to denote both the start and end of a table's
namespace.
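Because row keys are compared byte-by-byte, numeric-looking keys sort in a way that often surprises newcomers. This plain-Ruby sketch (string comparison in Ruby is also byte-wise) shows the effect and the usual fixed-width zero-padding fix:

```ruby
# Byte-wise order, as HBase stores rows: '1' (0x31) < '2' (0x32),
# so 'row10' sorts before 'row2'.
['row2', 'row10', 'row1'].sort
# => ["row1", "row10", "row2"]

# Zero-padding keys to a fixed width restores the intended numeric order.
['row02', 'row10', 'row01'].sort
# => ["row01", "row02", "row10"]
```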
Chapter 25. Column Family
Columns in Apache HBase are grouped into column families. All column members of a column
family have the same prefix. For example, the columns courses:history and courses:math are both
members of the courses column family. The colon character (:) delimits the column family from the
column family qualifier. The column family prefix must be composed of printable characters. The
qualifying tail, the column family qualifier, can be made of any arbitrary bytes. Column families
must be declared up front at schema definition time whereas columns do not need to be defined at
schema time but can be conjured on the fly while the table is up and running.
Physically, all column family members are stored together on the filesystem. Because tunings and
storage specifications are done at the column family level, it is advised that all column family
members have the same general access pattern and size characteristics.
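Splitting a column name into its family prefix and qualifier is a split on the first colon only, since the qualifying tail is arbitrary bytes and may itself contain a colon. This is plain Ruby for illustration, not the HBase API:

```ruby
# Split on the first ':' only: family prefix on the left, qualifier on the right.
family, qualifier = 'courses:history'.split(':', 2)
# family => "courses", qualifier => "history"

# The qualifier may contain ':' characters of its own.
'cf:a:b'.split(':', 2)
# => ["cf", "a:b"]
```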
Chapter 26. Cells
A {row, column, version} tuple exactly specifies a cell in HBase. Cell content is uninterpreted bytes.
Chapter 27. Data Model Operations
The four primary data model operations are Get, Put, Scan, and Delete. Operations are applied via
Table instances.
27.1. Get
Get returns attributes for a specified row. Gets are executed via Table.get
27.2. Put
Put either adds new rows to a table (if the key is new) or can update existing rows (if the key
already exists). Puts are executed via Table.put (non-writeBuffer) or Table.batch (non-writeBuffer)
27.3. Scans
Scan allows iteration over multiple rows for specified attributes.
The following is an example of a Scan on a Table instance. Assume that a table is populated with
rows with keys "row1", "row2", "row3", and then another set of rows with the keys "abc1", "abc2",
and "abc3". The following example shows how to set a Scan instance to return the rows beginning
with "row".
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Table table = ... // instantiate a Table instance
Scan scan = new Scan();
scan.addColumn(CF, ATTR);
scan.setRowPrefixFilter(Bytes.toBytes("row"));
ResultScanner rs = table.getScanner(scan);
try {
  for (Result r = rs.next(); r != null; r = rs.next()) {
    // process result...
  }
} finally {
  rs.close(); // always close the ResultScanner!
}
Note that generally the easiest way to specify a specific stop point for a scan is by using the
InclusiveStopFilter class.
27.4. Delete
Delete removes a row from a table. Deletes are executed via Table.delete.
HBase does not modify data in place, and so deletes are handled by creating new markers called
tombstones. These tombstones, along with the dead values, are cleaned up on major compactions.
See version.delete for more information on deleting versions of columns, and see compaction for
more information on compactions.
Chapter 28. Versions
A {row, column, version} tuple exactly specifies a cell in HBase. It’s possible to have an unbounded
number of cells where the row and column are the same but the cell address differs only in its
version dimension.
While rows and column keys are expressed as bytes, the version is specified using a long integer.
Typically this long contains time instances such as those returned by java.util.Date.getTime() or
System.currentTimeMillis(), that is: the difference, measured in milliseconds, between the current
time and midnight, January 1, 1970 UTC.
The HBase version dimension is stored in decreasing order, so that when reading from a store file,
the most recent values are found first.
There is a lot of confusion over the semantics of cell versions in HBase. In particular:
•If multiple writes to a cell have the same version, only the last written is fetchable.
•It is OK to write cells in a non-increasing version order.
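Both bullets can be seen with a toy version-keyed hash (a mock-up of the semantics, not the HBase API):

```ruby
versions = {}                # one cell address: row + column, keyed by version
versions[100] = 'first'
versions[100] = 'second'     # same version: the later write replaces the earlier
versions[90]  = 'older'      # writing in non-increasing version order is fine

versions[100]                # => "second": only the last write is fetchable
versions[versions.keys.max]  # => "second": reads still return the highest version
```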
Below we describe how the version dimension in HBase currently works. See HBASE-2406 for
discussion of HBase versions. Bending time in HBase makes for a good read on the version, or time,
dimension in HBase. It has more detail on versioning than is provided here.
As of this writing, the limitation Overwriting values at existing timestamps mentioned in the article
no longer holds in HBase. This section is basically a synopsis of this article by Bruno Dumon.
28.1. Specifying the Number of Versions to Store
The maximum number of versions to store for a given column is part of the column schema and is
specified at table creation, or via an alter command, via HColumnDescriptor.DEFAULT_VERSIONS. Prior
to HBase 0.96, the default number of versions kept was 3, but in 0.96 and newer it has been changed
to 1.
Example 9. Modify the Maximum Number of Versions for a Column Family
This example uses HBase Shell to keep a maximum of 5 versions of all columns in column
family f1. You could also use HColumnDescriptor.
hbase> alter 't1', NAME => 'f1', VERSIONS => 5
Example 10. Modify the Minimum Number of Versions for a Column Family
You can also specify the minimum number of versions to store per column family. By default,
this is set to 0, which means the feature is disabled. The following example sets the minimum
number of versions on all columns in column family f1 to 2, via HBase Shell. You could also use
HColumnDescriptor.
hbase> alter 't1', NAME => 'f1', MIN_VERSIONS => 2
Starting with HBase 0.98.2, you can specify a global default for the maximum number of versions
kept for all newly-created columns, by setting hbase.column.max.version in hbase-site.xml. See
hbase.column.max.version.
28.2. Versions and HBase Operations
In this section we look at the behavior of the version dimension for each of the core HBase
operations.
28.2.1. Get/Scan
Gets are implemented on top of Scans. The below discussion of Get applies equally to Scans.
By default, i.e. if you specify no explicit version, when doing a get, the cell whose version has the
largest value is returned (which may or may not be the latest one written, see later). The default
behavior can be modified in the following ways:
• to return more than one version, see Get.setMaxVersions()
• to return versions other than the latest, see Get.setTimeRange()
To retrieve the latest version that is less than or equal to a given value, thus giving the 'latest'
state of the record at a certain point in time, just use a range from 0 to the desired version and
set the max versions to 1.
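The point-in-time semantics can be sketched in plain Java (this models a cell's version dimension with a sorted map; it is an illustrative model, not the HBase client API):

```java
import java.util.TreeMap;

public class PointInTimeModel {
    // Model one cell's versions as timestamp -> value. The "latest state
    // at time t" is the cell with the greatest version <= t, which is what
    // a Get with a time range of [0, t + 1) and max versions 1 returns
    // (the upper bound of the time range is exclusive).
    static String latestAtOrBefore(TreeMap<Long, String> versions, long t) {
        Long ts = versions.floorKey(t); // greatest timestamp <= t, or null
        return ts == null ? null : versions.get(ts);
    }

    public static void main(String[] args) {
        TreeMap<Long, String> versions = new TreeMap<>();
        versions.put(100L, "v1");
        versions.put(200L, "v2");
        versions.put(300L, "v3");
        System.out.println(latestAtOrBefore(versions, 250L)); // prints v2
    }
}
```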
28.2.2. Default Get Example
The following Get will only retrieve the current version of the row
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(Bytes.toBytes("row1"));
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR); // returns current version of value
28.2.3. Versioned Get Example
The following Get will return the last 3 versions of the row.
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(Bytes.toBytes("row1"));
get.setMaxVersions(3); // will return last 3 versions of row
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR); // returns current version of value
List<KeyValue> kv = r.getColumn(CF, ATTR); // returns all versions of this column
28.2.4. Put
Doing a put always creates a new version of a cell, at a certain timestamp. By default the system
uses the server’s currentTimeMillis, but you can specify the version (= the long integer) yourself, on
a per-column level. This means you could assign a time in the past or the future, or use the long
value for non-time purposes.
To overwrite an existing value, do a put at exactly the same row, column, and version as that of the
cell you want to overwrite.
Implicit Version Example
The following Put will be implicitly versioned by HBase with the current time.
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
put.add(CF, ATTR, Bytes.toBytes( data));
table.put(put);
Explicit Version Example
The following Put has the version timestamp explicitly set.
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put( Bytes.toBytes(row));
long explicitTimeInMs = 555; // just an example
put.add(CF, ATTR, explicitTimeInMs, Bytes.toBytes(data));
table.put(put);
Caution: the version timestamp is used internally by HBase for things like time-to-live calculations.
It’s usually best to avoid setting this timestamp yourself. Prefer using a separate timestamp
attribute of the row, or have the timestamp as a part of the row key, or both.
28.2.5. Delete
There are three different types of internal delete markers. See Lars Hofhansl’s blog for discussion
of his attempt adding another, Scanning in HBase: Prefix Delete Marker.
• Delete: for a specific version of a column.
• Delete column: for all versions of a column.
• Delete family: for all columns of a particular ColumnFamily.
When deleting an entire row, HBase will internally create a tombstone for each ColumnFamily (i.e.,
not each individual column).
Deletes work by creating tombstone markers. For example, let’s suppose we want to delete a row.
For this you can specify a version, or else by default the currentTimeMillis is used. This means:
delete all cells where the version is less than or equal to this version. HBase never modifies data in
place, so for example a delete will not immediately delete (or mark as deleted) the entries in the
storage file that correspond to the delete condition. Rather, a so-called tombstone is written, which
will mask the deleted values. When HBase does a major compaction, the tombstones are processed
to actually remove the dead values, together with the tombstones themselves. If the version you
specified when deleting a row is larger than the version of any value in the row, then you can
consider the complete row to be deleted.
For an informative discussion on how deletes and versioning interact, see the thread Put
w/timestamp → Deleteall → Put w/ timestamp fails up on the user mailing list.
Also see keyvalue for more information on the internal KeyValue format.
Delete markers are purged during the next major compaction of the store, unless the
KEEP_DELETED_CELLS option is set in the column family (See Keeping Deleted Cells). To keep the
deletes for a configurable amount of time, you can set the delete TTL via the
hbase.hstore.time.to.purge.deletes property in hbase-site.xml. If hbase.hstore.time.to.purge.deletes
is not set, or set to 0, all delete markers, including those with timestamps in the future, are purged
during the next major compaction. Otherwise, a delete marker with a timestamp in the future is
kept until the major compaction which occurs after the time represented by the marker’s
timestamp plus the value of hbase.hstore.time.to.purge.deletes, in milliseconds.
This behavior represents a fix for an unexpected change that was introduced in
HBase 0.94, and was fixed in HBASE-10118. The change has been backported to
HBase 0.94 and newer branches.
28.3. Optional New Version and Delete behavior in
HBase-2.0.0
In hbase-2.0.0, the operator can specify an alternate version and delete treatment by setting the
column descriptor property NEW_VERSION_BEHAVIOR to true (To set a property on a column family
descriptor, you must first disable the table and then alter the column family descriptor; see Keeping
Deleted Cells for an example of editing an attribute on a column family descriptor).
The 'new version behavior' undoes the limitations listed below, whereby a Delete ALWAYS
overshadows a Put if at the same location — i.e. same row, column family, qualifier and
timestamp — regardless of which arrived first. Version accounting is also changed: deleted
versions are counted toward the total version count. This is done to ensure results are not changed
should a major compaction intercede. See HBASE-15968 and linked issues for discussion.
Running with this new configuration currently has a cost: we factor the Cell MVCC into every
compare, so we burn more CPU. The slowdown will depend on the workload; in testing we’ve seen
between 0% and 25% degradation.
If replicating, it is advised that you run with the new serial replication feature (See HBASE-9465; the
serial replication feature did NOT make it into hbase-2.0.0 but should arrive in a subsequent hbase-
2.x release) as now the order in which Mutations arrive is a factor.
28.4. Current Limitations
The below limitations are addressed in hbase-2.0.0. See the section above, Optional New Version
and Delete behavior in HBase-2.0.0.
28.4.1. Deletes mask Puts
Deletes mask puts, even puts that happened after the delete was entered. See HBASE-2256.
Remember that a delete writes a tombstone, which only disappears after the next major
compaction has run. Suppose you do a delete of everything ≤ T. After this you do a new put with a
timestamp ≤ T. This put, even if it happened after the delete, will be masked by the delete
tombstone. Performing the put will not fail, but when you do a get you will notice the put had no
effect. It will start working again after the major compaction has run. These issues should not be
a problem if you use always-increasing versions for new puts to a row. But they can occur even if
you do not care about time: just do a delete and a put immediately after each other, and there is some
chance they happen within the same millisecond.
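The masking rule can be sketched in a few lines of plain Java (a simplified model for illustration, not HBase internals):

```java
public class TombstoneModel {
    // A delete tombstone at version T masks every cell whose version is
    // <= T, regardless of which mutation arrived first on the wall clock.
    // Only once a major compaction has removed the tombstone do puts at
    // such versions become visible again.
    static boolean isMasked(long cellVersion, long tombstoneVersion,
                            boolean majorCompactionRan) {
        return !majorCompactionRan && cellVersion <= tombstoneVersion;
    }

    public static void main(String[] args) {
        long t = 1000L; // "delete everything <= 1000"
        // A put issued *after* the delete, but with version <= T, is masked:
        System.out.println(isMasked(1000L, t, false)); // prints true
        // A put with an always-increasing version is safe:
        System.out.println(isMasked(1001L, t, false)); // prints false
    }
}
```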
28.4.2. Major compactions change query results
…create three cell versions at t1, t2 and t3, with a maximum-versions setting of 2. So when getting all
versions, only the values at t2 and t3 will be returned. But if you delete the version at t2 or t3, the one
at t1 will appear again. Obviously, once a major compaction has run, such behavior will not be the
case anymore… (See Garbage Collection in Bending time in HBase.)
Chapter 29. Sort Order
All data model operations in HBase return data in sorted order: first by row, then by ColumnFamily,
followed by column qualifier, and finally timestamp (sorted in reverse, so newest records are
returned first).
Chapter 30. Column Metadata
There is no store of column metadata outside of the internal KeyValue instances for a
ColumnFamily. Thus, while HBase can support not only a large number of columns per row but also
a heterogeneous set of columns between rows, it is your responsibility to keep track of the
column names.
The only way to get a complete set of columns that exist for a ColumnFamily is to process all the
rows. For more information about how HBase stores data internally, see keyvalue.
Chapter 31. Joins
Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it
doesn’t, at least not in the way that RDBMSs support them (e.g., with equi-joins or outer-joins in
SQL). As has been illustrated in this chapter, the read data model operations in HBase are Get and
Scan.
However, that doesn’t mean that equivalent join functionality can’t be supported in your
application, but you have to do it yourself. The two primary strategies are either denormalizing the
data upon writing to HBase, or to have lookup tables and do the join between HBase tables in your
application or MapReduce code (and as RDBMSs demonstrate, there are several strategies for this
depending on the size of the tables, e.g., nested loops vs. hash-joins). So which is the best approach?
It depends on what you are trying to do, and as such there isn’t a single answer that works for
every use case.
HBase and Schema Design
A good introduction to the strengths and weaknesses of modeling on the various non-RDBMS
datastores can be found in Ian Varley’s Master’s thesis, No Relation: The Mixed Blessings of Non-
Relational Databases. It is a little dated now, but a good background read if you have a moment, on
how HBase schema modeling differs from how it is done in an RDBMS. Also, read keyvalue for how
HBase stores data internally, and the section on schema.casestudies.
The documentation on the Cloud Bigtable website, Designing Your Schema, is pertinent and nicely
done and lessons learned there equally apply here in HBase land; just divide any quoted values by
~10 to get what works for HBase: e.g. where it says individual values can be ~10MBs in size, HBase
can do similar — perhaps best to go smaller if you can — and where it says a maximum of 100
column families in Cloud Bigtable, think ~10 when modeling on HBase.
See also Robert Yokota’s HBase Application Archetypes (an update on work done by other HBasers),
for a helpful categorization of use cases that do well on top of the HBase model.
Chapter 33. Schema Creation
HBase schemas can be created or updated using The Apache HBase Shell or by using Admin in
the Java API.
Tables must be disabled when making ColumnFamily modifications, for example:
Configuration config = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin();
TableName table = TableName.valueOf("myTable");
admin.disableTable(table);
HColumnDescriptor cf1 = ...;
admin.addColumn(table, cf1); // adding new ColumnFamily
HColumnDescriptor cf2 = ...;
admin.modifyColumn(table, cf2); // modifying existing ColumnFamily
admin.enableTable(table);
See client dependencies for more information about configuring client connections.
Online schema changes are supported in the 0.92.x codebase, but the 0.90.x
codebase requires the table to be disabled.
33.1. Schema Updates
When changes are made to either Tables or ColumnFamilies (e.g. region size, block size), these
changes take effect the next time there is a major compaction and the StoreFiles get re-written.
See store for more information on StoreFiles.
Chapter 34. Table Schema Rules Of Thumb
There are many different data sets, with different access patterns and service-level expectations.
Therefore, these rules of thumb are only an overview. Read the rest of this chapter to get more
details after you have gone through this list.
• Aim to have regions sized between 10 and 50 GB.
• Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, consider storing
your cell data in HDFS and store a pointer to the data in HBase.
• A typical schema has between 1 and 3 column families per table. HBase tables should not be
designed to mimic RDBMS tables.
• Around 50-100 regions is a good number for a table with 1 or 2 column families. Remember that
a region is a contiguous segment of a column family.
• Keep your column family names as short as possible. The column family names are stored for
every value (ignoring prefix encoding). They should not be self-documenting and descriptive
like in a typical RDBMS.
• If you are storing time-based machine data or logging information, and the row key is based on
device ID or service ID plus time, you can end up with a pattern where older data regions never
have additional writes beyond a certain age. In this type of situation, you end up with a small
number of active regions and a large number of older regions which have no new writes. For
these situations, you can tolerate a larger number of regions because your resource
consumption is driven by the active regions only.
• If only one column family is busy with writes, only that column family accumulates memory. Be
aware of write patterns when allocating resources.
RegionServer Sizing Rules of Thumb
Lars Hofhansl wrote a great blog post about RegionServer memory sizing. The upshot is that you
probably need more memory than you think you need. He goes into the impact of region size,
memstore size, HDFS replication factor, and other things to check.
Personally I would place the maximum disk space per machine that can be
served exclusively with HBase around 6T, unless you have a very read-
heavy workload. In that case the Java heap should be 32GB (20G regions,
128M memstores, the rest defaults).
— Lars Hofhansl, http://hadoop-hbase.blogspot.com/2013/01/hbase-region-server-memory-sizing.html
Chapter 35. On the number of column
families
HBase currently does not do well with anything above two or three column families, so keep the
number of column families in your schema low. Currently, flushing and compactions are done on a
per-Region basis, so if one column family is carrying the bulk of the data bringing on flushes, the
adjacent families will also be flushed even though the amount of data they carry is small. When
many column families exist the flushing and compaction interaction can make for a bunch of
needless i/o (To be addressed by changing flushing and compaction to work on a per column family
basis). For more information on compactions, see Compaction.
Try to make do with one column family if you can in your schemas. Only introduce a second and
third column family in the case where data access is usually column scoped; i.e. you query one
column family or the other but usually not both at the one time.
35.1. Cardinality of ColumnFamilies
Where multiple ColumnFamilies exist in a single table, be aware of the cardinality (i.e., number of
rows). If ColumnFamilyA has 1 million rows and ColumnFamilyB has 1 billion rows,
ColumnFamilyA’s data will likely be spread across many, many regions (and RegionServers). This
makes mass scans for ColumnFamilyA less efficient.
Chapter 36. Rowkey Design
36.1. Hotspotting
Rows in HBase are sorted lexicographically by row key. This design optimizes for scans, allowing
you to store related rows, or rows that will be read together, near each other. However, poorly
designed row keys are a common source of hotspotting. Hotspotting occurs when a large amount of
client traffic is directed at one node, or only a few nodes, of a cluster. This traffic may represent
reads, writes, or other operations. The traffic overwhelms the single machine responsible for
hosting that region, causing performance degradation and potentially leading to region
unavailability. This can also have adverse effects on other regions hosted by the same region server
as that host is unable to service the requested load. It is important to design data access patterns
such that the cluster is fully and evenly utilized.
To prevent hotspotting on writes, design your row keys such that rows that truly do need to be in
the same region are, but in the bigger picture, data is being written to multiple regions across the
cluster, rather than one at a time. Some common techniques for avoiding hotspotting are described
below, along with some of their advantages and drawbacks.
Salting
Salting in this sense has nothing to do with cryptography, but refers to adding random data to the
start of a row key. In this case, salting refers to adding a randomly-assigned prefix to the row key to
cause it to sort differently than it otherwise would. The number of possible prefixes corresponds to
the number of regions you want to spread the data across. Salting can be helpful if you have a few
"hot" row key patterns which come up over and over amongst other more evenly-distributed rows.
Consider the following example, which shows that salting can spread write load across multiple
RegionServers, and illustrates some of the negative implications for reads.
Example 11. Salting Example
Suppose you have the following list of row keys, and your table is split such that there is one
region for each letter of the alphabet. Prefix 'a' is one region, prefix 'b' is another. In this table,
all rows starting with 'f' are in the same region. This example focuses on rows with keys like
the following:
foo0001
foo0002
foo0003
foo0004
Now, imagine that you would like to spread these across four different regions. You decide to
use four different salts: a, b, c, and d. In this scenario, each of these letter prefixes will be on a
different region. After applying the salts, you have the following rowkeys instead. Since you
can now write to four separate regions, you theoretically have four times the throughput when
writing that you would have if all the writes were going to the same region.
a-foo0003
b-foo0001
c-foo0004
d-foo0002
Then, if you add another row, it will randomly be assigned one of the four possible salt values
and end up near one of the existing rows.
a-foo0003
b-foo0001
c-foo0003
c-foo0004
d-foo0002
Since this assignment will be random, you will need to do more work if you want to retrieve
the rows in lexicographic order. In this way, salting attempts to increase throughput on writes,
but has a cost during reads.
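A salting helper along these lines might look as follows (a sketch only; the four-letter salt alphabet and the `salt-key` format are example choices from the scenario above, not an HBase API):

```java
import java.util.Random;

public class SaltedKeys {
    static final char[] SALTS = {'a', 'b', 'c', 'd'};
    static final Random RANDOM = new Random();

    // Prefix the row key with one of four randomly chosen salts so that
    // writes spread over four regions. Because the salt for a given key
    // is not reproducible, reads must fan out over all four prefixes.
    static String salt(String rowKey) {
        return SALTS[RANDOM.nextInt(SALTS.length)] + "-" + rowKey;
    }

    public static void main(String[] args) {
        // Two writes of the same logical key may land in different regions:
        System.out.println(salt("foo0005"));
        System.out.println(salt("foo0005"));
    }
}
```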
Hashing
Instead of a random assignment, you could use a one-way hash that would cause a given row to
always be "salted" with the same prefix, in a way that would spread the load across the
RegionServers, but allow for predictability during reads. Using a deterministic hash allows the
client to reconstruct the complete rowkey and use a Get operation to retrieve that row as normal.
Example 12. Hashing Example
Given the same situation in the salting example above, you could instead apply a one-way hash
that would cause the row with key foo0003 to always, and predictably, receive the a prefix.
Then, to retrieve that row, you would already know the key. You could also optimize things so
that certain pairs of keys were always in the same region, for instance.
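One way to sketch such a deterministic prefix in Java (the MD5-bucket-to-letter mapping is an example choice; any stable hash would do):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashedKeys {
    // Derive the prefix deterministically from the key itself, so a reader
    // can recompute it and issue a plain Get for the complete rowkey.
    static String hashPrefix(String rowKey, int numBuckets) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(rowKey.getBytes(StandardCharsets.UTF_8));
            int bucket = (digest[0] & 0xFF) % numBuckets; // stable bucket
            return (char) ('a' + bucket) + "-" + rowKey;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is always available", e);
        }
    }

    public static void main(String[] args) {
        // The same key always receives the same prefix:
        System.out.println(hashPrefix("foo0003", 4));
        System.out.println(hashPrefix("foo0003", 4));
    }
}
```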
Reversing the Key
A third common trick for preventing hotspotting is to reverse a fixed-width or numeric row key so
that the part that changes the most often (the least significant digit) is first. This effectively
randomizes row keys, but sacrifices row ordering properties.
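For a fixed-width key, the reversal itself is trivial; for example:

```java
public class ReversedKeys {
    // Reverse a fixed-width key so the fastest-changing characters lead,
    // spreading sequential keys across the keyspace at the cost of losing
    // meaningful scan ordering.
    static String reverse(String fixedWidthKey) {
        return new StringBuilder(fixedWidthKey).reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(reverse("foo0001")); // prints 1000oof
        System.out.println(reverse("foo0002")); // prints 2000oof
    }
}
```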
See https://communities.intel.com/community/itpeernetwork/datastack/blog/2013/11/10/discussion-on-designing-hbase-tables, the article on Salted Tables from the Phoenix project, and the discussion
in the comments of HBASE-11682 for more information about avoiding hotspotting.
36.2. Monotonically Increasing Row Keys/Timeseries
Data
In the HBase chapter of Tom White’s book Hadoop: The Definitive Guide (O’Reilly) there is an
optimization note on watching out for a phenomenon where an import process walks in lock-step
with all clients in concert pounding one of the table’s regions (and thus, a single node), then moving
onto the next region, etc. With monotonically increasing row-keys (i.e., using a timestamp), this will
happen. See this comic by IKai Lan on why monotonically increasing row keys are problematic in
BigTable-like datastores: monotonically increasing values are bad. The pile-up on a single region
brought on by monotonically increasing keys can be mitigated by randomizing the input records to
not be in sorted order, but in general it’s best to avoid using a timestamp or a sequence (e.g. 1, 2, 3)
as the row-key.
If you do need to upload time series data into HBase, you should study OpenTSDB as a successful
example. It has a page describing the schema it uses in HBase. The key format in OpenTSDB is
effectively [metric_type][event_timestamp], which would appear at first glance to contradict the
previous advice about not using a timestamp as the key. However, the difference is that the
timestamp is not in the lead position of the key, and the design assumption is that there are dozens
or hundreds (or more) of different metric types. Thus, even with a continual stream of input data
with a mix of metric types, the Puts are distributed across various points of regions in the table.
See schema.casestudies for some rowkey design examples.
36.3. Try to minimize row and column sizes
In HBase, values are always freighted with their coordinates; as a cell value passes through the
system, it’ll be accompanied by its row, column name, and timestamp - always. If your rows and
column names are large, especially compared to the size of the cell value, then you may run up
against some interesting scenarios. One such is the case described by Marc Limotte at the tail of
HBASE-3551 (recommended!). Therein, the indices that are kept on HBase storefiles (StoreFile
(HFile)) to facilitate random access may end up occupying large chunks of the HBase allotted RAM
because the cell value coordinates are large. Marc, in the above-cited comment, suggests upping the
block size so entries in the store file index happen at a larger interval, or modifying the table schema
so it makes for smaller rows and column names. Compression will also make for larger indices. See
the thread a question storefileIndexSize up on the user mailing list.
Most of the time small inefficiencies don’t matter all that much. Unfortunately, this is a case where
they do. Whatever patterns are selected for ColumnFamilies, attributes, and rowkeys, they could be
repeated several billion times in your data.
See keyvalue for more information on how HBase stores data internally to see why this is important.
36.3.1. Column Families
Try to keep the ColumnFamily names as small as possible, preferably one character (e.g. "d" for
data/default).
See KeyValue for more information on how HBase stores data internally to see why this is important.
36.3.2. Attributes
Although verbose attribute names (e.g., "myVeryImportantAttribute") are easier to read, prefer
shorter attribute names (e.g., "via") to store in HBase.
See keyvalue for more information on how HBase stores data internally to see why this is important.
36.3.3. Rowkey Length
Keep them as short as is reasonable such that they can still be useful for required data access (e.g.
Get vs. Scan). A short key that is useless for data access is not better than a longer key with better
get/scan properties. Expect tradeoffs when designing rowkeys.
36.3.4. Byte Patterns
A long is 8 bytes. You can store an unsigned number up to 18,446,744,073,709,551,615 in those eight
bytes. If you stored this number as a String — presuming a byte per character — you need nearly 3x
the bytes.
Not convinced? Below is some sample code that you can run on your own.
// long
//
long l = 1234567890L;
byte[] lb = Bytes.toBytes(l);
System.out.println("long bytes length: " + lb.length); // returns 8
String s = String.valueOf(l);
byte[] sb = Bytes.toBytes(s);
System.out.println("long as string length: " + sb.length); // returns 10
// hash
//
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] digest = md.digest(Bytes.toBytes(s));
System.out.println("md5 digest bytes length: " + digest.length); // returns 16
String sDigest = new String(digest);
byte[] sbDigest = Bytes.toBytes(sDigest);
System.out.println("md5 digest as string length: " + sbDigest.length); // returns 26
Unfortunately, using a binary representation of a type will make your data harder to read outside
of your code. For example, this is what you will see in the shell when you increment a value:
hbase(main):001:0> incr 't', 'r', 'f:q', 1
COUNTER VALUE = 1
hbase(main):002:0> get 't', 'r'
COLUMN CELL
 f:q                      timestamp=1369163040570, value=\x00\x00\x00\x00\x00\x00\x00\x01
1 row(s) in 0.0310 seconds
The shell makes a best effort to print a string, and in this case it decided to just print the hex. The
same will happen to your row keys inside the region names. It can be okay if you know what’s
being stored, but it might also be unreadable if arbitrary data can be put in the same cells. This is
the main trade-off.
36.4. Reverse Timestamps
Reverse Scan API
HBASE-4811 implements an API to scan a table or a range within a table in reverse,
reducing the need to optimize your schema for forward or reverse scanning. This
feature is available in HBase 0.98 and later. See Scan.setReversed() for more
information.
A common problem in database processing is quickly finding the most recent version of a value. A
technique using reverse timestamps as a part of the key can help greatly with a special case of this
problem. Also found in the HBase chapter of Tom White’s book Hadoop: The Definitive Guide
(O’Reilly), the technique involves appending (Long.MAX_VALUE - timestamp) to the end of any key, e.g.
[key][reverse_timestamp].
The most recent value for [key] in a table can be found by performing a Scan for [key] and
obtaining the first record. Since HBase keys are in sorted order, this key sorts before any older row-
keys for [key] and thus is first.
This technique would be used instead of using Number of Versions where the intent is to hold onto
all versions "forever" (or a very long time) and at the same time quickly obtain access to any other
version by using the same Scan technique.
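A sketch of the key construction in Java (here the reverse timestamp is zero-padded decimal so that String order matches the sort order; with real HBase keys you would typically append the raw bytes of Long.MAX_VALUE - timestamp instead):

```java
public class ReverseTimestampKeys {
    // Append (Long.MAX_VALUE - timestamp), zero-padded to a fixed width,
    // producing [key][reverse_timestamp]. The newest timestamp yields the
    // smallest suffix, so the most recent version sorts (and scans) first.
    static String versionedKey(String key, long timestampMs) {
        return key + String.format("%019d", Long.MAX_VALUE - timestampMs);
    }

    public static void main(String[] args) {
        String older = versionedKey("user42", 1000L);
        String newer = versionedKey("user42", 2000L);
        // The newer key sorts before the older one:
        System.out.println(newer.compareTo(older) < 0); // prints true
    }
}
```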
36.5. Rowkeys and ColumnFamilies
Rowkeys are scoped to ColumnFamilies. Thus, the same rowkey could exist in each ColumnFamily
that exists in a table without collision.
36.6. Immutability of Rowkeys
Rowkeys cannot be changed. The only way they can be "changed" in a table is if the row is deleted
and then re-inserted. This is a fairly common question on the HBase dist-list so it pays to get the
rowkeys right the first time (and/or before you’ve inserted a lot of data).
36.7. Relationship Between RowKeys and Region Splits
If you pre-split your table, it is critical to understand how your rowkey will be distributed across
the region boundaries. As an example of why this is important, consider the example of using
displayable hex characters as the lead position of the key (e.g., "0000000000000000" to
"ffffffffffffffff"). Running those key ranges through Bytes.split (which is the split strategy used
when creating regions in Admin.createTable(byte[] startKey, byte[] endKey, numRegions)) for 10
regions will generate the following splits…
48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 // 0
54 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10 // 6
61 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -67 -68 // =
68 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -124 -126 // D
75 75 75 75 75 75 75 75 75 75 75 75 75 75 75 72 // K
82 18 18 18 18 18 18 18 18 18 18 18 18 18 18 14 // R
88 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -40 -44 // X
95 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -97 -102 // _
102 102 102 102 102 102 102 102 102 102 102 102 102 102 102 102 // f
(note: the lead byte is listed to the right as a comment.) Given that the first split is a '0' and the last
split is an 'f', everything is great, right? Not so fast.
The problem is that all the data is going to pile up in the first 2 regions and the last region thus
creating a "lumpy" (and possibly "hot") region problem. To understand why, refer to an ASCII Table.
'0' is byte 48, and 'f' is byte 102, but there is a huge gap in byte values (bytes 58 to 96) that will never
appear in this keyspace because the only values are [0-9] and [a-f]. Thus, the middle regions will
never be used. To make pre-splitting work with this example keyspace, a custom definition of splits
(i.e., and not relying on the built-in split method) is required.
Lesson #1: Pre-splitting tables is generally a best practice, but you need to pre-split them in such a
way that all the regions are accessible in the keyspace. While this example demonstrated the
problem with a hex-key keyspace, the same problem can happen with any keyspace. Know your
data.
Lesson #2: While generally not advisable, using hex-keys (and more generally, displayable data) can
still work with pre-split tables as long as all the created regions are accessible in the keyspace.
To conclude this example, the following shows how appropriate splits can be pre-created
for hex-keys:
public static boolean createTable(Admin admin, HTableDescriptor table, byte[][] splits)
    throws IOException {
  try {
    admin.createTable(table, splits);
    return true;
  } catch (TableExistsException e) {
    logger.info("table " + table.getNameAsString() + " already exists");
    // the table already exists...
    return false;
  }
}

public static byte[][] getHexSplits(String startKey, String endKey, int numRegions) {
  byte[][] splits = new byte[numRegions-1][];
  BigInteger lowestKey = new BigInteger(startKey, 16);
  BigInteger highestKey = new BigInteger(endKey, 16);
  BigInteger range = highestKey.subtract(lowestKey);
  BigInteger regionIncrement = range.divide(BigInteger.valueOf(numRegions));
  lowestKey = lowestKey.add(regionIncrement);
  for (int i = 0; i < numRegions-1; i++) {
    BigInteger key = lowestKey.add(regionIncrement.multiply(BigInteger.valueOf(i)));
    byte[] b = String.format("%016x", key).getBytes();
    splits[i] = b;
  }
  return splits;
}
Chapter 37. Number of Versions
37.1. Maximum Number of Versions
The maximum number of row versions to store is configured per column family via
HColumnDescriptor. The default for max versions is 1. This is an important parameter because as
described in Data Model section HBase does not overwrite row values, but rather stores different
values per row by time (and qualifier). Excess versions are removed during major compactions.
The number of max versions may need to be increased or decreased depending on application
needs.
It is not recommended to set the number of max versions to an exceedingly high level (e.g.,
hundreds or more) unless those old values are very dear to you, because this will greatly increase
StoreFile size.
37.2. Minimum Number of Versions
Like the maximum number of row versions, the minimum number of row versions to keep is
configured per column family via HColumnDescriptor. The default for min versions is 0, which
means the feature is disabled. The minimum number of row versions parameter is used together
with the time-to-live parameter and can be combined with the number of row versions parameter
to allow configurations such as "keep the last T minutes worth of data, at most N versions, but keep
at least M versions around" (where M is the value for minimum number of row versions, M<N). This
parameter should only be set when time-to-live is enabled for a column family and must be less
than the number of row versions.
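The "last T minutes, at most N, at least M" combination described above can be expressed at table-creation time in the HBase shell; the names and values below are illustrative:

```
hbase> create 't1', {NAME => 'f1', VERSIONS => 5, MIN_VERSIONS => 2, TTL => 600}
```

This keeps the last 10 minutes worth of data, at most 5 versions, but at least 2 versions around even after the TTL expires.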
Chapter 38. Supported Datatypes
HBase supports a "bytes-in/bytes-out" interface via Put and Result, so anything that can be
converted to an array of bytes can be stored as a value. Input could be strings, numbers, complex
objects, or even images, as long as they can be rendered as bytes.
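For instance, plain JDK classes suffice to render common types as the byte arrays handed to a Put (HBase also ships the org.apache.hadoop.hbase.util.Bytes helper for the same purpose); this standalone sketch, with class and method names of our choosing, uses only the standard library:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Standalone sketch: turning typical values into byte[] suitable for a Put.
public class ByteValues {

  // Strings become their UTF-8 encoding.
  static byte[] fromString(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }

  // Numbers become fixed-width big-endian encodings.
  static byte[] fromLong(long v) {
    return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
  }

  public static void main(String[] args) {
    byte[] name = fromString("hbase");
    byte[] count = fromLong(42L);
    System.out.println(name.length + " " + count.length);
  }
}
```

Note that fixed-width big-endian encodings (as the Bytes utility also produces) have the useful property of sorting numerically under HBase's lexicographic byte ordering, at least for non-negative values.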
There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would
probably be too much to ask); search the mailing list for conversations on this topic. All rows in
HBase conform to the Data Model, and that includes versioning. Take that into consideration when
making your design, as well as block size for the ColumnFamily.
38.1. Counters
One supported datatype that deserves special mention is "counters" (i.e., the ability to do atomic
increments of numbers). See Increment in Table.
Synchronization on counters is done on the RegionServer, not in the client.
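In the HBase shell, counters can be exercised with incr and get_counter; the table, row, and column names below are placeholders:

```
hbase> incr 't1', 'r1', 'f1:hits', 1
hbase> get_counter 't1', 'r1', 'f1:hits'
```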
Chapter 40. Time To Live (TTL)
ColumnFamilies can set a TTL length in seconds, and HBase will automatically delete rows once the
expiration time is reached. This applies to all versions of a row, even the current one. The TTL time
encoded in HBase for the row is specified in UTC.
Store files which contain only expired rows are deleted on minor compaction. Setting
hbase.store.delete.expired.storefile to false disables this feature. Setting the minimum number of
versions to a value other than 0 also disables this.
See HColumnDescriptor for more information.
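For example, a ColumnFamily TTL of one day can be set at table-creation time in the shell; the names below are placeholders:

```
hbase> create 't1', {NAME => 'f1', TTL => 86400}
```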
Recent versions of HBase also support setting time to live on a per cell basis. See HBASE-10560 for
more information. Cell TTLs are submitted as an attribute on mutation requests (Appends,
Increments, Puts, etc.) using Mutation#setTTL. If the TTL attribute is set, it will be applied to all cells
updated on the server by the operation. There are two notable differences between cell TTL
handling and ColumnFamily TTLs:
•Cell TTLs are expressed in units of milliseconds instead of seconds.
•A cell TTL cannot extend the effective lifetime of a cell beyond a ColumnFamily level TTL
setting.
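A cell-level TTL might be attached to a Put roughly as follows. This is an untested sketch, not a complete program: it assumes an hbase-client classpath and an already-open Table named table, and the row, family, and qualifier names are illustrative.

```
// Sketch only: assumes an open Table named "table".
Put put = new Put(Bytes.toBytes("row1"));
put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q1"), Bytes.toBytes("value"));
put.setTTL(10000L);  // cell TTL in milliseconds, per Mutation#setTTL
table.put(put);
```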
Chapter 41. Keeping Deleted Cells
By default, delete markers extend back to the beginning of time. Therefore, Get or Scan operations
will not see a deleted cell (row or column), even when the Get or Scan operation indicates a time
range before the delete marker was placed.
ColumnFamilies can optionally keep deleted cells. In this case, deleted cells can still be retrieved, as
long as these operations specify a time range that ends before the timestamp of any delete that
would affect the cells. This allows for point-in-time queries even in the presence of deletes.
Deleted cells are still subject to TTL and there will never be more than "maximum number of
versions" deleted cells. A new "raw" scan option returns all deleted cells and the delete markers.
Change the Value of KEEP_DELETED_CELLS Using HBase Shell
hbase> alter 't1', NAME => 'f1', KEEP_DELETED_CELLS => true
Example 13. Change the Value of KEEP_DELETED_CELLS Using the API
...
HColumnDescriptor.setKeepDeletedCells(true);
...
Let us illustrate the basic effect of setting the KEEP_DELETED_CELLS attribute on a table.
First, without:
create 'test', {NAME=>'e', VERSIONS=>2147483647}
put 'test', 'r1', 'e:c1', 'value', 10
put 'test', 'r1', 'e:c1', 'value', 12
put 'test', 'r1', 'e:c1', 'value', 14
delete 'test', 'r1', 'e:c1', 11
hbase(main):017:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
 r1                column=e:c1, timestamp=10, value=value
1 row(s) in 0.0120 seconds

hbase(main):018:0> flush 'test'
0 row(s) in 0.0350 seconds

hbase(main):019:0> scan 'test', {RAW=>true, VERSIONS=>1000}
ROW                COLUMN+CELL
 r1                column=e:c1, timestamp=14, value=value
 r1                column=e:c1, timestamp=12, value=value
 r1                column=e:c1, timestamp=11, type=DeleteColumn
1 row(s) in 0.0120 seconds
hbase(main):020:0> major_compact 'test'