Hibernate Developer Guide
Hibernate.orgCommunity Documentation Hibernate Developer Guide The Hibernate Team The JBoss Visual Design Team 4.1.6.Final Copyright © 2011 Red Hat, Inc. 2012-08-09 Table of Contents Preface 1. Get Involved 2. Getting Started Guide 1. Database access 1.1. Connecting 1.1.1. Configuration 1.1.2. Obtaining a JDBC connection 1.2. Connection pooling 1.2.1. c3p0 connection pool 1.2.2. Proxool connection pool 1.2.3. Obtaining connections from an application server, using JNDI 1.2.4. Other connection-specific configuration 1.2.5. Optional configuration properties 1.3. Dialects 1.3.1. Specifying the Dialect to use 1.3.2. Dialect resolution 1.4. Automatic schema generation with SchemaExport 1.4.1. Customizing the mapping files 1.4.2. Running the SchemaExport tool 2. Transactions and concurrency control 2.1. Defining Transaction 2.2. Physical Transactions 2.2.1. Physical Transactions - JDBC 2.2.2. Physical Transactions - JTA 2.2.3. Physical Transactions - CMT 2.2.4. Physical Transactions - Custom 2.2.5. Physical Transactions - Legacy 2.3. Hibernate Transaction Usage 2.4. Transactional patterns (and anti-patterns) 2.4.1. Session-per-operation anti-pattern 2.4.2. Session-per-request pattern 2.4.3. Conversations 2.4.4. Session-per-application 2.5. Object identity 2.6. Common issues 3. Persistence Contexts 3.1. Making entities persistent 3.2. Deleting entities 3.3. Obtain an entity reference without initializing its data 3.4. Obtain an entity with its data initialized 3.5. Obtain an entity by natural-id 3.6. Refresh entity state 3.7. Modifying managed/persistent state 3.8. Working with detached data 3.8.1. Reattaching detached data 3.8.2. Merging detached data 3.9. Checking persistent state 3.10. Accessing Hibernate APIs from JPA 4. Batch Processing 4.1. Batch inserts 4.2. Batch updates 4.3. StatelessSession 4.4. Hibernate Query Language for DML 4.4.1. HQL for UPDATE and DELETE 4.4.2. HQL syntax for INSERT 4.4.3. More information on HQL 5. Locking 5.1. Optimistic 5.1.1. Dedicated version number 5.1.2. Timestamp 5.2. Pessimistic 5.2.1. The LockMode class 6. Caching 6.1. The query cache 6.1.1. Query cache regions 6.2. Second-level cache providers 6.2.1. Configuring your cache providers 6.2.2. Caching strategies 6.2.3. Second-level cache providers for Hibernate 6.3. Managing the cache 6.3.1. Moving items into and out of the cache 7. Services 7.1. What are services? 7.2. Service contracts 7.3. Service dependencies 7.3.1. @org.hibernate.service.spi.InjectService 7.3.2. org.hibernate.service.spi.ServiceRegistryAwareService 7.4. ServiceRegistry 7.5. Standard services 7.5.1. org.hibernate.engine.jdbc.batch.spi.BatchBuilder 7.5.2. org.hibernate.service.config.spi.ConfigurationService 7.5.3. org.hibernate.service.jdbc.connections.spi.ConnectionProvider 7.5.4. org.hibernate.service.jdbc.dialect.spi.DialectFactory 7.5.5. org.hibernate.service.jdbc.dialect.spi.DialectResolver 7.5.6. org.hibernate.engine.jdbc.spi.JdbcServices 7.5.7. org.hibernate.service.jmx.spi.JmxService 7.5.8. org.hibernate.service.jndi.spi.JndiService 7.5.9. org.hibernate.service.jta.platform.spi.JtaPlatform 7.5.10. org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider 7.5.11. org.hibernate.persister.spi.PersisterClassResolver 7.5.12. org.hibernate.persister.spi.PersisterFactory 7.5.13. org.hibernate.cache.spi.RegionFactory 7.5.14. org.hibernate.service.spi.SessionFactoryServiceRegistryFactory 7.5.15. org.hibernate.stat.Statistics 7.5.16. org.hibernate.engine.transaction.spi.TransactionFactory 7.5.17. 
org.hibernate.tool.hbm2ddl.ImportSqlCommandExtractor 7.6. Custom services 7.7. Special service registries 7.7.1. Boot-strap registry 7.7.2. SessionFactory registry 7.8. Using services and registries 7.9. Integrators 7.9.1. Integrator use-cases 8. Data categorizations 8.1. Value types 8.1.1. Basic types 8.1.2. Composite types 8.1.3. Collection types 8.2. Entity Types 8.3. Implications of different data categorizations 9. Mapping entities 9.1. Hierarchies 10. Mapping associations 11. HQL and JPQL 11.1. Case Sensitivity 11.2. Statement types 11.2.1. Select statements 11.2.2. Update statements 11.2.3. Delete statements 11.2.4. Insert statements 11.3. The FROM clause 11.3.1. Identification variables 11.3.2. Root entity references 11.3.3. Explicit joins 11.3.4. Implicit joins (path expressions) 11.3.5. Collection member references 11.3.6. Polymorphism 11.4. Expressions 11.4.1. Identification variable 11.4.2. Path expressions 11.4.3. Literals 11.4.4. Parameters 11.4.5. Arithmetic 11.4.6. Concatenation (operation) 11.4.7. Aggregate functions 11.4.8. Scalar functions 11.4.9. Collection-related expressions 11.4.10. Entity type 11.4.11. CASE expressions 11.5. The SELECT clause 11.6. Predicates 11.6.1. Relational comparisons 11.6.2. Nullness predicate 11.6.3. Like predicate 11.6.4. Between predicate 11.6.5. In predicate 11.6.6. Exists predicate 11.6.7. Empty collection predicate 11.6.8. Member-of collection predicate 11.6.9. NOT predicate operator 11.6.10. AND predicate operator 11.6.11. OR predicate operator 11.7. The WHERE clause 11.8. Grouping 11.9. Ordering 11.10. Query API 12. Criteria 12.1. Typed criteria queries 12.1.1. Selecting an entity 12.1.2. Selecting an expression 12.1.3. Selecting multiple values 12.1.4. Selecting a wrapper 12.2. Tuple criteria queries 12.3. FROM clause 12.3.1. Roots 12.3.2. Joins 12.3.3. Fetches 12.4. Path expressions 12.5. Using parameters 13. Native SQL Queries 13.1. Using a SQLQuery 13.1.1. Scalar queries 13.1.2. Entity queries 13.1.3. Handling associations and collections 13.1.4. Returning multiple entities 13.1.5. Returning non-managed entities 13.1.6. Handling inheritance 13.1.7. Parameters 13.2. Named SQL queries 13.2.1. Using return-property to explicitly specify column/alias names 13.2.2. Using stored procedures for querying 13.3. Custom SQL for create, update and delete 13.4. Custom SQL for loading 14. JMX 15. Envers 15.1. Basics 15.2. Configuration 15.3. Additional mapping annotations 15.4. Choosing an audit strategy 15.5. Revision Log 15.5.1. Tracking entity names modified during revisions 15.6. Tracking entity changes at property level 15.7. Queries 15.7.1. Querying for entities of a class at a given revision 15.7.2. Querying for revisions, at which entities of a given class changed 15.7.3. Querying for revisions of entity that modified given property 15.7.4. Querying for entities modified in a given revision 15.8. Conditional auditing 15.9. Understanding the Envers Schema 15.10. Generating schema with Ant 15.11. Mapping exceptions 15.11.1. What isn't and will not be supported 15.11.2. What isn't and will be supported 15.11.3. @OneToMany+@JoinColumn 15.12. Advanced: Audit table partitioning 15.12.1. Benefits of audit table partitioning 15.12.2. Suitable columns for audit table partitioning 15.12.3. Audit table partitioning example 15.13. Envers links 16. Multi-tenancy 16.1. What is multi-tenancy? 16.2. Multi-tenant data approaches 16.2.1. Separate database 16.2.2. Separate schema 16.2.3. Partitioned (discriminator) data 16.3. 
Multi-tenancy in Hibernate 16.3.1. MultiTenantConnectionProvider 16.3.2. CurrentTenantIdentifierResolver 16.3.3. Caching 16.3.4. Odds and ends 16.4. Strategies for MultiTenantConnectionProvider implementors A. Configuration properties A.1. General Configuration A.2. Database configuration A.3. Connection pool properties B. Legacy Hibernate Criteria Queries B.1. Creating a Criteria instance B.2. Narrowing the result set B.3. Ordering the results B.4. Associations B.5. Dynamic association fetching B.6. Components B.7. Collections B.8. Example queries B.9. Projections, aggregation and grouping B.10. Detached queries and subqueries B.11. Queries by natural identifier List of Tables 1.1. Important configuration properties for the Proxool connection pool 1.2. Supported database dialects 1.3. Elements and attributes provided for customizing mapping files 1.4. SchemaExport Options 6.1. Possible values for Shared Cache Mode 8.1. Basic Type Mappings 13.1. Alias injection names 15.1. Envers Configuration Properties 15.2. Salaries table 15.3. Salaries - audit table A.1. JDBC properties A.2. Cache Properties A.3. Transactions properties A.4. Miscellaneous properties A.5. Proxool connection pool properties List of Examples 1.1. hibernate.properties for a c3p0 connection pool 1.2. hibernate.cfg.xml for a connection to the bundled HSQL database 1.3. Specifying the mapping files directly 1.4. Letting Hibernate find the mapping files for you 1.5. Specifying configuration properties 1.6. Specifying configuration properties 1.7. SchemaExport syntax 1.8. Embedding SchemaExport into your application 2.1. Database identity 2.2. JVM identity 3.1. Example of making an entity persistent 3.2. Example of deleting an entity 3.3. Example of obtaining an entity reference without initializing its data 3.4. Example of obtaining an entity reference with its data initialized 3.5. Example of simple natural-id access 3.6. Example of natural-id access 3.7. Example of refreshing entity state 3.8. Example of modifying managed state 3.9. Example of reattaching a detached entity 3.10. Visualizing merge 3.11. Example of merging a detached entity 3.12. Examples of verifying managed state 3.13. Examples of verifying laziness 3.14. Alternative JPA means to verify laziness 3.15. Usage of EntityManager.unwrap 4.1. Naive way to insert 100000 lines with Hibernate 4.2. Flushing and clearing the Session 4.3. Using scroll() 4.4. Using a StatelessSession 4.5. Psuedo-syntax for UPDATE and DELETE statements using HQL 4.6. Executing an HQL UPDATE, using the Query.executeUpdate() method 4.7. Updating the version of timestamp 4.8. A HQL DELETE statement 4.9. Pseudo-syntax for INSERT statements 4.10. HQL INSERT statement 5.1. The @Version annotation 5.2. Declaring a version property in hbm.xml 5.3. Using timestamps for optimistic locking 5.4. The timestamp element in hbm.xml 6.1. Method setCacheRegion 6.2. Configuring cache providers using annotations 6.3. Configuring cache providers using mapping files 6.4. Evicting an item from the first-level cache 6.5. Second-level cache eviction 6.6. Browsing the second-level cache entries via the Statistics API 7.1. Using BootstrapServiceRegistryBuilder 7.2. Registering event listeners 11.1. Example UPDATE query statements 11.2. Example INSERT query statements 11.3. Simple query example 11.4. Simple query using entity name for root entity reference 11.5. Simple query using multiple root entity references 11.6. Explicit inner join examples 11.7. Explicit left (outer) join examples 11.8. 
Fetch join example 11.9. with-clause join example 11.10. Simple implicit join example 11.11. Reused implicit join 11.12. Collection references example 11.13. Qualified collection references example 11.14. String literal examples 11.15. Numeric literal examples 11.16. Named parameter examples 11.17. Positional (JPQL) parameter examples 11.18. Numeric arithmetic examples 11.19. Concatenation operation example 11.20. Aggregate function examples 11.21. Collection-related expressions examples 11.22. Index operator examples 11.23. Entity type expression examples 11.24. Simple case expression example 11.25. Searched case expression example 11.26. NULLIF example 11.27. Dynamic instantiation example - constructor 11.28. Dynamic instantiation example - list 11.29. Dynamic instantiation example - map 11.30. Relational comparison examples 11.31. ALL subquery comparison qualifier example 11.32. Nullness checking examples 11.33. Like predicate examples 11.34. Between predicate examples 11.35. In predicate examples 11.36. Empty collection expression examples 11.37. Member-of collection expression examples 11.38. Group-by illustration 11.39. Having illustration 11.40. Order-by examples 12.1. Selecting the root entity 12.2. Selecting an attribute 12.3. Selecting an array 12.4. Selecting an array (2) 12.5. Selecting an wrapper 12.6. Selecting a tuple 12.7. Adding a root 12.8. Adding multiple roots 12.9. Example with Embedded and ManyToOne 12.10. Example with Collections 12.11. Example with Embedded and ManyToOne 12.12. Example with Collections 12.13. Using parameters 13.1. Named sql query using themaping element 13.2. Execution of a named query 13.3. Named sql query with association 13.4. Named query returning a scalar 13.5. mapping used to externalize mapping information 13.6. Programmatically specifying the result mapping information 13.7. Named SQL query using @NamedNativeQuery together with @SqlResultSetMapping 13.8. Implicit result set mapping 13.9. Using dot notation in @FieldResult for specifying associations 13.10. Scalar values via @ColumnResult 13.11. Custom CRUD via annotations 13.12. Custom CRUD XML 13.13. Overriding SQL statements for collections using annotations 13.14. Overriding SQL statements for secondary tables 13.15. Stored procedures and their return value 15.1. Example of storing username with revision 15.2. Custom implementation of tracking entity classes modified during revisions 16.1. Specifying tenant identifier from SessionFactory 16.2. Implementing MultiTenantConnectionProvider using different connection pools 16.3. Implementing MultiTenantConnectionProvider using single connection pool Preface Table of Contents 1. Get Involved 2. Getting Started Guide Working with both Object-Oriented software and Relational Databases can be cumbersome and time consuming. Development costs are significantly higher due to a paradigm mismatch between how data is represented in objects versus relational databases. Hibernate is an Object/Relational Mapping solution for Java environments. The term Object/Relational Mapping refers to the technique of mapping data from an object model representation to a relational data model representation (and visa versa). See http://en.wikipedia.org/wiki/Object-relational_mapping for a good high-level discussion. Note While having a strong background in SQL is not required to use Hibernate, having a basic understanding of the concepts can greatly help you understand Hibernate more fully and quickly. 
Probably the single best background is an understanding of data modeling principles. You might want to consider these resources as a good starting point:
• http://www.agiledata.org/essays/dataModeling101.html
• http://en.wikipedia.org/wiki/Data_modeling

Hibernate not only takes care of the mapping from Java classes to database tables (and from Java data types to SQL data types), but also provides data query and retrieval facilities. It can significantly reduce development time otherwise spent with manual data handling in SQL and JDBC. Hibernate's design goal is to relieve the developer from 95% of common data persistence-related programming tasks by eliminating the need for manual, hand-crafted data processing using SQL and JDBC. However, unlike many other persistence solutions, Hibernate does not hide the power of SQL from you and guarantees that your investment in relational technology and knowledge is as valid as always.

Hibernate may not be the best solution for data-centric applications that only use stored procedures to implement the business logic in the database; it is most useful with object-oriented domain models and business logic in the Java-based middle tier. However, Hibernate can certainly help you to remove or encapsulate vendor-specific SQL code and will help with the common task of result set translation from a tabular representation to a graph of objects.

1. Get Involved
• Use Hibernate and report any bugs or issues you find. See http://hibernate.org/issuetracker.html for details.
• Try your hand at fixing some bugs or implementing enhancements. Again, see http://hibernate.org/issuetracker.html.
• Engage with the community using mailing lists, forums, IRC, or other ways listed at http://hibernate.org/community.html.
• Help improve or translate this documentation. Contact us on the developer mailing list if you have interest.
• Spread the word. Let the rest of your organization know about the benefits of Hibernate.

2. Getting Started Guide
New users may want to first look through the Hibernate Getting Started Guide for basic information as well as tutorials. Even seasoned veterans may want to consider perusing the sections pertaining to build artifacts for any changes.

Chapter 1. Database access

Table of Contents 1.1. Connecting 1.1.1. Configuration 1.1.2. Obtaining a JDBC connection 1.2. Connection pooling 1.2.1. c3p0 connection pool 1.2.2. Proxool connection pool 1.2.3. Obtaining connections from an application server, using JNDI 1.2.4. Other connection-specific configuration 1.2.5. Optional configuration properties 1.3. Dialects 1.3.1. Specifying the Dialect to use 1.3.2. Dialect resolution 1.4. Automatic schema generation with SchemaExport 1.4.1. Customizing the mapping files 1.4.2. Running the SchemaExport tool

1.1. Connecting

Hibernate connects to databases on behalf of your application. It can connect through a variety of mechanisms, including:
• Stand-alone built-in connection pool
• javax.sql.DataSource
• Connection pools, including support for two different third-party open source JDBC connection pools:
  o c3p0
  o proxool
• Application-supplied JDBC connections. This is not a recommended approach and exists for legacy reasons.

Note: The built-in connection pool is not intended for production environments.

Hibernate obtains JDBC connections as needed through the org.hibernate.service.jdbc.connections.spi.ConnectionProvider interface, which is a service contract.
Applications may also supply their own org.hibernate.service.jdbc.connections.spi.ConnectionProvider implementation to define a custom approach for supplying connections to Hibernate (from a different connection pool implementation, for example).

1.1.1. Configuration

You can configure database connections using a properties file, an XML deployment descriptor or programmatically.

Example 1.1. hibernate.properties for a c3p0 connection pool

hibernate.connection.driver_class = org.postgresql.Driver
hibernate.connection.url = jdbc:postgresql://localhost/mydatabase
hibernate.connection.username = myuser
hibernate.connection.password = secret
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=1800
hibernate.c3p0.max_statements=50
hibernate.dialect = org.hibernate.dialect.PostgreSQL82Dialect

Example 1.2. hibernate.cfg.xml for a connection to the bundled HSQL database

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
        <property name="connection.url">jdbc:hsqldb:hsql://localhost</property>
        <property name="connection.username">sa</property>
        <property name="connection.password"></property>
        <!-- JDBC connection pool (use the built-in one) -->
        <property name="connection.pool_size">1</property>
        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
        <!-- Enable Hibernate's current session context -->
        <property name="current_session_context_class">thread</property>
        <!-- Disable the second-level cache -->
        <property name="cache.provider_class">org.hibernate.cache.internal.NoCacheProvider</property>
        <!-- Echo all executed SQL to stdout -->
        <property name="show_sql">true</property>
        <!-- Update the database schema on startup -->
        <property name="hbm2ddl.auto">update</property>
    </session-factory>
</hibernate-configuration>

1.1.1.1. Programmatic configuration

An instance of org.hibernate.cfg.Configuration represents an entire set of mappings of an application's Java types to an SQL database. The org.hibernate.cfg.Configuration compiles the mappings from various XML mapping files and is used to build an immutable org.hibernate.SessionFactory. You can specify the mapping files directly, or Hibernate can find them for you.

Example 1.3. Specifying the mapping files directly

You can obtain a org.hibernate.cfg.Configuration instance by instantiating it directly and specifying XML mapping documents. If the mapping files are in the classpath, use method addResource().

Configuration cfg = new Configuration()
    .addResource("Item.hbm.xml")
    .addResource("Bid.hbm.xml");

Example 1.4. Letting Hibernate find the mapping files for you

The addClass() method directs Hibernate to search the CLASSPATH for the mapping files, eliminating hard-coded file names. In the following example, it searches for org/hibernate/auction/Item.hbm.xml and org/hibernate/auction/Bid.hbm.xml.

Configuration cfg = new Configuration()
    .addClass(org.hibernate.auction.Item.class)
    .addClass(org.hibernate.auction.Bid.class);

Example 1.5. Specifying configuration properties

Configuration cfg = new Configuration()
    .addClass(org.hibernate.auction.Item.class)
    .addClass(org.hibernate.auction.Bid.class)
    .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLInnoDBDialect")
    .setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/test")
    .setProperty("hibernate.order_updates", "true");

Other ways to configure Hibernate programmatically
• Pass an instance of java.util.Properties to Configuration.setProperties().
• Set System properties using java -Dproperty=value

1.1.2. Obtaining a JDBC connection

After you configure the most important Hibernate JDBC properties (listed below), you can use the openSession method of org.hibernate.SessionFactory to open sessions. Sessions will obtain JDBC connections as needed based on the provided configuration.

Example 1.6. Specifying configuration properties

Session session = sessions.openSession();

Most important Hibernate JDBC properties
• hibernate.connection.driver_class
• hibernate.connection.url
• hibernate.connection.username
• hibernate.connection.password
• hibernate.connection.pool_size

All available Hibernate settings are defined as constants and discussed on the org.hibernate.cfg.AvailableSettings interface. See its source code or JavaDoc for details.
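To tie the configuration and session-opening steps together, here is a minimal, hedged bootstrap sketch. The hibernate.cfg.xml on the classpath and the work placed inside a transaction are assumptions for illustration; Example 1.6 above shows only the openSession() call. In 4.x, the Configuration.buildSessionFactory(ServiceRegistry) overload is the preferred form, but the shorter no-argument form is used here to keep the sketch compact.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class SessionFactoryBootstrap {
    public static void main(String[] args) {
        // Compile mappings and settings from hibernate.cfg.xml into an immutable SessionFactory.
        SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();

        // Sessions obtain JDBC connections lazily, as needed.
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        // ... work with persistent objects here (placeholder) ...
        tx.commit();
        session.close();

        sessionFactory.close();
    }
}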
1.2. Connection pooling

Hibernate's internal connection pooling algorithm is rudimentary, and is provided for development and testing purposes. Use a third-party pool for best performance and stability. To use a third-party pool, replace the hibernate.connection.pool_size property with settings specific to your connection pool of choice. This disables Hibernate's internal connection pool.

1.2.1. c3p0 connection pool

C3P0 is an open source JDBC connection pool distributed along with Hibernate in the lib/ directory. Hibernate uses its org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider for connection pooling if you set the hibernate.c3p0.* properties.

Important configuration properties for the c3p0 connection pool
• hibernate.c3p0.min_size
• hibernate.c3p0.max_size
• hibernate.c3p0.timeout
• hibernate.c3p0.max_statements

1.2.2. Proxool connection pool

Proxool is another open source JDBC connection pool distributed along with Hibernate in the lib/ directory. Hibernate uses its org.hibernate.service.jdbc.connections.internal.ProxoolConnectionProvider for connection pooling if you set the hibernate.proxool.* properties. Unlike c3p0, proxool requires some additional configuration parameters, as described by the Proxool documentation available at http://proxool.sourceforge.net/configure.html.

Table 1.1. Important configuration properties for the Proxool connection pool
hibernate.proxool.xml: Configure the Proxool provider using an XML file (.xml is appended automatically)
hibernate.proxool.properties: Configure the Proxool provider using a properties file (.properties is appended automatically)
hibernate.proxool.existing_pool: Whether to configure the Proxool provider from an existing pool
hibernate.proxool.pool_alias: Proxool pool alias to use. Required.

1.2.3. Obtaining connections from an application server, using JNDI

To use Hibernate inside an application server, configure Hibernate to obtain connections from an application server javax.sql.DataSource registered in JNDI, by setting at least one of the following properties:

Important Hibernate properties for JNDI datasources
• hibernate.connection.datasource (required)
• hibernate.jndi.url
• hibernate.jndi.class
• hibernate.connection.username
• hibernate.connection.password

JDBC connections obtained from a JNDI datasource automatically participate in the container-managed transactions of the application server.

1.2.4. Other connection-specific configuration

You can pass arbitrary connection properties by prepending hibernate.connection to the connection property name. For example, specify a charSet connection property as hibernate.connection.charSet. You can define your own plugin strategy for obtaining JDBC connections by implementing the interface org.hibernate.service.jdbc.connections.spi.ConnectionProvider and specifying your custom implementation with the hibernate.connection.provider_class property.

1.2.5. Optional configuration properties

In addition to the properties mentioned in the previous sections, Hibernate includes many other optional properties. See Appendix A, Configuration properties, for a more complete list.
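A hedged hibernate.properties sketch tying the JNDI and custom-provider options above together. The JNDI name java:comp/env/jdbc/MyDB and the com.example.MyConnectionProvider class are hypothetical placeholders, not names defined by Hibernate.

hibernate.connection.datasource = java:comp/env/jdbc/MyDB
hibernate.dialect = org.hibernate.dialect.PostgreSQL82Dialect
# Only needed if the datasource does not already carry credentials
hibernate.connection.username = myuser
hibernate.connection.password = secret
# Alternative: plug in a custom ConnectionProvider instead of a datasource
# hibernate.connection.provider_class = com.example.MyConnectionProvider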
1.3. Dialects

Although SQL is relatively standardized, each database vendor uses a subset of supported syntax. This is referred to as a dialect. Hibernate handles variations across these dialects through its org.hibernate.dialect.Dialect class and the various subclasses for each vendor dialect.

Table 1.2. Supported database dialects
DB2: org.hibernate.dialect.DB2Dialect
DB2 AS/400: org.hibernate.dialect.DB2400Dialect
DB2 OS390: org.hibernate.dialect.DB2390Dialect
Firebird: org.hibernate.dialect.FirebirdDialect
FrontBase: org.hibernate.dialect.FrontbaseDialect
HypersonicSQL: org.hibernate.dialect.HSQLDialect
Informix: org.hibernate.dialect.InformixDialect
Interbase: org.hibernate.dialect.InterbaseDialect
Ingres: org.hibernate.dialect.IngresDialect
Microsoft SQL Server 2005: org.hibernate.dialect.SQLServer2005Dialect
Microsoft SQL Server 2008: org.hibernate.dialect.SQLServer2008Dialect
Mckoi SQL: org.hibernate.dialect.MckoiDialect
MySQL: org.hibernate.dialect.MySQLDialect
MySQL with InnoDB: org.hibernate.dialect.MySQL5InnoDBDialect
MySQL with MyISAM: org.hibernate.dialect.MySQLMyISAMDialect
Oracle 8i: org.hibernate.dialect.Oracle8iDialect
Oracle 9i: org.hibernate.dialect.Oracle9iDialect
Oracle 10g: org.hibernate.dialect.Oracle10gDialect
Pointbase: org.hibernate.dialect.PointbaseDialect
PostgreSQL 8.1: org.hibernate.dialect.PostgreSQL81Dialect
PostgreSQL 8.2 and later: org.hibernate.dialect.PostgreSQL82Dialect
Progress: org.hibernate.dialect.ProgressDialect
SAP DB: org.hibernate.dialect.SAPDBDialect
Sybase ASE 15.5: org.hibernate.dialect.SybaseASE15Dialect
Sybase ASE 15.7: org.hibernate.dialect.SybaseASE157Dialect
Sybase Anywhere: org.hibernate.dialect.SybaseAnywhereDialect

1.3.1. Specifying the Dialect to use

The developer may manually specify the Dialect to use by setting the hibernate.dialect configuration property to the name of a specific org.hibernate.dialect.Dialect class to use.

1.3.2. Dialect resolution

Assuming a org.hibernate.service.jdbc.connections.spi.ConnectionProvider has been set up, Hibernate will attempt to automatically determine the Dialect to use based on the java.sql.DatabaseMetaData reported by a java.sql.Connection obtained from that org.hibernate.service.jdbc.connections.spi.ConnectionProvider. This functionality is provided by a series of org.hibernate.service.jdbc.dialect.spi.DialectResolver instances registered with Hibernate internally. Hibernate comes with a standard set of recognitions. If your application requires extra Dialect resolution capabilities, it can simply register a custom implementation of org.hibernate.service.jdbc.dialect.spi.DialectResolver, as sketched below. Registered org.hibernate.service.jdbc.dialect.spi.DialectResolver instances are prepended to an internal list of resolvers, so they take precedence over any already-registered resolvers, including the standard ones.
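A minimal sketch of such a custom resolver. The "MyCustomDatabase" product-name check and the dialect returned are illustrative assumptions only, and the contract should be verified against the org.hibernate.service.jdbc.dialect.spi.DialectResolver interface shipped with your Hibernate version.

import java.sql.DatabaseMetaData;
import java.sql.SQLException;

import org.hibernate.dialect.Dialect;
import org.hibernate.exception.JDBCConnectionException;
import org.hibernate.service.jdbc.dialect.spi.DialectResolver;

public class CustomDialectResolver implements DialectResolver {
    @Override
    public Dialect resolveDialect(DatabaseMetaData metaData) throws JDBCConnectionException {
        try {
            // Return a Dialect for databases this resolver recognizes,
            // or null to let the remaining resolvers have a try.
            if ( "MyCustomDatabase".equals( metaData.getDatabaseProductName() ) ) {
                return new org.hibernate.dialect.PostgreSQL82Dialect(); // illustrative choice only
            }
            return null;
        }
        catch ( SQLException e ) {
            throw new JDBCConnectionException( "Could not read database metadata", e );
        }
    }
}

Registering such a class is typically done by listing it under the hibernate.dialect_resolvers setting; this key is an assumption drawn from org.hibernate.cfg.AvailableSettings rather than something stated in this chapter, so verify it for your version.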
1.4. Automatic schema generation with SchemaExport

SchemaExport is a Hibernate utility which generates DDL from your mapping files. The generated schema includes referential integrity constraints, primary and foreign keys, for entity and collection tables. It also creates tables and sequences for mapped identifier generators.

Note: You must specify a SQL Dialect via the hibernate.dialect property when using this tool, because DDL is highly vendor-specific. See Section 1.3, “Dialects” for information.

Before Hibernate can generate your schema, you must customize your mapping files.

1.4.1. Customizing the mapping files

Hibernate provides several elements and attributes to customize your mapping files. They are listed in Table 1.3, “Elements and attributes provided for customizing mapping files”, and a logical order of customization is presented in Procedure 1.1, “Customizing the schema”.

Table 1.3. Elements and attributes provided for customizing mapping files
length (number): Column length
precision (number): Decimal precision of column
scale (number): Decimal scale of column
not-null (true or false): Whether a column is allowed to hold null values
unique (true or false): Whether values in the column must be unique
index (string): The name of a multi-column index
unique-key (string): The name of a multi-column unique constraint
foreign-key (string): The name of the foreign key constraint generated for an association. This applies to the <one-to-one>, <many-to-one>, <key>, and <many-to-many> mapping elements. inverse="true" sides are skipped by SchemaExport.
sql-type (string): Overrides the default column type. This applies to the <column> element only.
default (string): Default value for the column
check (string): An SQL check constraint on either a column or a table

Procedure 1.1. Customizing the schema
1. Set the length, precision, and scale of mapping elements. Many Hibernate mapping elements define optional attributes named length, precision, and scale.
2. Set the not-null, UNIQUE, and unique-key attributes. The not-null and UNIQUE attributes generate constraints on table columns. The unique-key attribute groups columns in a single, unique key constraint. Currently, the specified value of the unique-key attribute does not name the constraint in the generated DDL. It only groups the columns in the mapping file.
3. Set the index and foreign-key attributes. The index attribute specifies the name of an index for Hibernate to create using the mapped column or columns. You can group multiple columns into the same index by assigning them the same index name. A foreign-key attribute overrides the name of any generated foreign key constraint.
4. Set child <column> elements. Many mapping elements accept one or more child <column> elements. This is particularly useful for mapping types involving multiple columns.
5. Set the default attribute. The default attribute represents a default value for a column. Assign the same value to the mapped property before saving a new instance of the mapped class.
6. Set the sql-type attribute. Use the sql-type attribute to override the default mapping of a Hibernate type to an SQL datatype.
7. Set the check attribute. Use the check attribute to specify a check constraint.
8. Add <comment> elements to your schema. Use the <comment> element to specify comments for the generated schema.

1.4.2. Running the SchemaExport tool

The SchemaExport tool writes a DDL script to standard output, executes the DDL statements, or both.

Example 1.7. SchemaExport syntax
java -cp hibernate_classpaths org.hibernate.tool.hbm2ddl.SchemaExport options mapping_files

Table 1.4. SchemaExport Options
--quiet: do not output the script to standard output
--drop: only drop the tables
--create: only create the tables
--text: do not export to the database
--output=my_schema.ddl: output the ddl script to a file
--naming=eg.MyNamingStrategy: select a NamingStrategy
--config=hibernate.cfg.xml: read Hibernate configuration from an XML file
--properties=hibernate.properties: read database properties from a file
--format: format the generated SQL nicely in the script
--delimiter=;: set an end-of-line delimiter for the script

Example 1.8. Embedding SchemaExport into your application
Configuration cfg = ....;
new SchemaExport(cfg).create(false, true);
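As a command-line counterpart to Example 1.8, the following hypothetical invocation combines several of the options from Table 1.4 to write a formatted script to a file without touching the database. The classpath entries and file names are placeholders, and the delimiter option is quoted so the shell does not interpret the semicolon.

java -cp hibernate-core.jar:myapp.jar \
    org.hibernate.tool.hbm2ddl.SchemaExport \
    --text --format "--delimiter=;" \
    --output=my_schema.ddl \
    --config=hibernate.cfg.xml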
Chapter 2. Transactions and concurrency control

Table of Contents 2.1. Defining Transaction 2.2. Physical Transactions 2.2.1. Physical Transactions - JDBC 2.2.2. Physical Transactions - JTA 2.2.3. Physical Transactions - CMT 2.2.4. Physical Transactions - Custom 2.2.5. Physical Transactions - Legacy 2.3. Hibernate Transaction Usage 2.4. Transactional patterns (and anti-patterns) 2.4.1. Session-per-operation anti-pattern 2.4.2. Session-per-request pattern 2.4.3. Conversations 2.4.4. Session-per-application 2.5. Object identity 2.6. Common issues

2.1. Defining Transaction

It is important to understand that the term transaction has many different yet related meanings in regards to persistence and Object/Relational Mapping. In most use-cases these definitions align, but that is not always the case.
• Might refer to the physical transaction with the database.
• Might refer to the logical notion of a transaction as related to a persistence context.
• Might refer to the application notion of a Unit-of-Work, as defined by the archetypal pattern.

Note: This documentation largely treats the physical and logical notions of transaction as one and the same.

2.2. Physical Transactions

Hibernate uses the JDBC API for persistence. In the world of Java there are two well-defined mechanisms for dealing with transactions in JDBC: JDBC itself and JTA. Hibernate supports both mechanisms for integrating with transactions and allowing applications to manage physical transactions.

The first concept in understanding Hibernate transaction support is the org.hibernate.engine.transaction.spi.TransactionFactory interface, which serves two main functions:
• It allows Hibernate to understand the transaction semantics of the environment. Are we operating in a JTA environment? Is a physical transaction already currently active? etc.
• It acts as a factory for org.hibernate.Transaction instances, which are used to allow applications to manage and check the state of transactions. org.hibernate.Transaction is Hibernate's notion of a logical transaction. JPA has a similar notion in the javax.persistence.EntityTransaction interface.

Note: javax.persistence.EntityTransaction is only available when using resource-local transactions. Hibernate allows access to org.hibernate.Transaction regardless of environment.

org.hibernate.engine.transaction.spi.TransactionFactory is a standard Hibernate service. See Section 7.5.16, “org.hibernate.engine.transaction.spi.TransactionFactory” for details.

2.2.1. Physical Transactions - JDBC

JDBC-based transaction management leverages the JDBC-defined methods java.sql.Connection.commit() and java.sql.Connection.rollback() (JDBC does not define an explicit method of beginning a transaction). In Hibernate, this approach is represented by the org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory class.

2.2.2. Physical Transactions - JTA

A JTA-based transaction approach which leverages the javax.transaction.UserTransaction interface as obtained from the org.hibernate.service.jta.platform.spi.JtaPlatform API. This approach is represented by the org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory class. See Section 7.5.9, “org.hibernate.service.jta.platform.spi.JtaPlatform” for information on integration with the underlying JTA system.
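A hedged hibernate.properties sketch for selecting the JTA-based approach. The second property key and the JBoss platform class shown are assumptions drawn from the 4.1-era API and should be verified against org.hibernate.cfg.AvailableSettings and your application server's documentation.

hibernate.transaction.factory_class = org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory
hibernate.transaction.jta.platform = org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform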
2.2.3. Physical Transactions - CMT

Another JTA-based transaction approach which leverages the JTA javax.transaction.TransactionManager interface as obtained from the org.hibernate.service.jta.platform.spi.JtaPlatform API. This approach is represented by the org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory class. In an actual JEE CMT environment, access to the javax.transaction.UserTransaction is restricted.

Note: The term CMT is potentially misleading here. The important point is simply that the physical JTA transactions are being managed by something other than the Hibernate transaction API.

See Section 7.5.9, “org.hibernate.service.jta.platform.spi.JtaPlatform” for information on integration with the underlying JTA system.

2.2.4. Physical Transactions - Custom

It is also possible to plug in a custom transaction approach by implementing the org.hibernate.engine.transaction.spi.TransactionFactory contract. The default service initiator has built-in support for understanding custom transaction approaches via the hibernate.transaction.factory_class setting, which can name either:
• The instance of org.hibernate.engine.transaction.spi.TransactionFactory to use.
• The name of a class implementing org.hibernate.engine.transaction.spi.TransactionFactory to use. The expectation is that the implementation class have a no-argument constructor.

2.2.5. Physical Transactions - Legacy

During development of 4.0, most of the classes named here were moved to new packages. To help facilitate upgrading, Hibernate will also recognize the legacy names here for a short period of time.
• org.hibernate.transaction.JDBCTransactionFactory is mapped to org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
• org.hibernate.transaction.JTATransactionFactory is mapped to org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory
• org.hibernate.transaction.CMTTransactionFactory is mapped to org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory

2.3. Hibernate Transaction Usage

Hibernate uses JDBC connections and JTA resources directly, without adding any additional locking behavior. It is important for you to become familiar with the JDBC, ANSI SQL, and transaction isolation specifics of your database management system. Hibernate does not lock objects in memory. The behavior defined by the isolation level of your database transactions does not change when you use Hibernate. The Hibernate org.hibernate.Session acts as a transaction-scoped cache providing repeatable reads for lookup by identifier and queries that result in loading entities.

Important: To reduce lock contention in the database, the physical database transaction needs to be as short as possible. Long database transactions prevent your application from scaling to a highly-concurrent load. Do not hold a database transaction open during end-user-level work, but open it after the end-user-level work is finished. This concept is referred to as transactional write-behind.

2.4. Transactional patterns (and anti-patterns)

2.4.1. Session-per-operation anti-pattern

This is an anti-pattern of opening and closing a Session for each database call in a single thread. It is also an anti-pattern in terms of database transactions. Group your database calls into a planned sequence. In the same way, do not auto-commit after every SQL statement in your application. Hibernate disables, or expects the application server to disable, auto-commit mode immediately. Database transactions are never optional. All communication with a database must be encapsulated by a transaction.
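To make that advice concrete, here is a minimal sketch of one clearly-defined unit of work wrapped in a single transaction; the sessionFactory variable and the work inside the try block are assumed placeholders.

Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    // ... a planned sequence of database calls on this session ...
    tx.commit();
}
catch ( RuntimeException e ) {
    if ( tx != null ) {
        tx.rollback(); // an exception means the transaction must be rolled back
    }
    throw e;
}
finally {
    session.close(); // and the Session closed immediately
}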
Avoid auto-commit behavior for reading data, because many small transactions are unlikely to perform better than one clearly-defined unit of work, and are more difficult to maintain and extend. Note Using auto-commit does not circumvent database transactions. Instead, when in auto-commit mode, JDBC drivers simply perform each call in an implicit transaction call. It is as if your application called commit after each and every JDBC call. 2.4.2. Session-per-request pattern This is the most common transaction pattern. The term request here relates to the concept of a system that reacts to a series of requests from a client/user. Web applications are a prime example of this type of system, though certainly not the only one. At the beginning of handling such a request, the application opens a Hibernate Session, starts a transaction, performs all data related work, ends the transaction and closes the Session. The crux of the pattern is the one-to-one relationship between the transaction and the Session. Within this pattern there is a common technique of defining a current session to simplify the need of passing this Session around to all the application components that may need access to it. Hibernate provides support for this technique through the getCurrentSession method of the SessionFactory. The concept of a "current" session has to have a scope that defines the bounds in which the notion of "current" is valid. This is purpose of the org.hibernate.context.spi.CurrentSessionContext contract. There are 2 reliable defining scopes: • • First is a JTA transaction because it allows a callback hook to know when it is ending which gives Hibernate a chance to close the Session and clean up. This is represented by the org.hibernate.context.internal.JTASessionContext implementation of the org.hibernate.context.spi.CurrentSessionContext contract. Using this implementation, a Session will be opened the first time getCurrentSession is called within that transaction. Secondly is this application request cycle itself. This is best represented with the org.hibernate.context.internal.ManagedSessionContext implementation of the org.hibernate.context.spi.CurrentSessionContext contract. Here an external component is responsible for managing the lifecycle and scoping of a "current" session. At the start of such a scope, ManagedSessionContext's bind method is called passing in the Session. At the end, its unbind method is called. Some common examples of such "external components" include: o o o implementation AOP interceptor with a pointcut on the service methods A proxy/interception container javax.servlet.Filter Important The getCurrentSession() method has one downside in a JTA environment. If you use it, after_statement connection release mode is also used by default. Due to a limitation of the JTA specification, Hibernate cannot automatically clean up any unclosed ScrollableResults or Iterator instances returned by scroll() or iterate(). Release the underlying database cursor by calling ScrollableResults.close() or Hibernate.close(Iterator) explicitly from a finally block. 2.4.3. Conversations The session-per-request pattern is not the only valid way of designing units of work. Many business processes require a whole series of interactions with the user that are interleaved with database accesses. In web and enterprise applications, it is not acceptable for a database transaction to span a user interaction. Consider the following example: Procedure 2.1. An example of a long-running conversation 1. 
The first screen of a dialog opens. The data seen by the user is loaded in a particular Session and database transaction. The user is free to modify the objects. 2. The user uses a UI element to save their work after five minutes of editing. The modifications are made persistent. The user also expects to have exclusive access to the data during the edit session. Even though we have multiple databases access here, from the point of view of the user, this series of steps represents a single unit of work. There are many ways to implement this in your application. A first naive implementation might keep the Session and database transaction open while the user is editing, using database-level locks to prevent other users from modifying the same data and to guarantee isolation and atomicity. This is an anti-pattern, because lock contention is a bottleneck which will prevent scalability in the future. Several database transactions are used to implement the conversation. In this case, maintaining isolation of business processes becomes the partial responsibility of the application tier. A single conversation usually spans several database transactions. These multiple database accesses can only be atomic as a whole if only one of these database transactions (typically the last one) stores the updated data. All others only read data. A common way to receive this data is through a wizard-style dialog spanning several request/response cycles. Hibernate includes some features which make this easy to implement. Automatic Versioning Hibernate can perform automatic optimistic concurrency control for you. It can automatically detect if a concurrent modification occurred during user think time. Check for this at the end of the conversation. Detached Objects If you decide to use the session-per-request pattern, all loaded instances will be in the detached state during user think time. Hibernate allows you to reattach the objects and persist the modifications. The pattern is called session-per-request-with-detached-objects. Automatic versioning is used to isolate concurrent modifications. Extended Session The Hibernate Session can be disconnected from the underlying JDBC connection after the database transaction has been committed and reconnected when a new client request occurs. This pattern is known as session-per-conversation and makes even reattachment unnecessary. Automatic versioning is used to isolate concurrent modifications and the Session will not be allowed to flush automatically, only explicitly. Session-per-request-with-detached-objects and session-per-conversation each have advantages and disadvantages. 2.4.4. Session-per-application Discussion coming soon.. 2.5. Object identity An application can concurrently access the same persistent state (database row) in two different Sessions. However, an instance of a persistent class is never shared between two Session instances. Two different notions of identity exist and come into play here: Database identity and JVM identity. Example 2.1. Database identity foo.getId().equals( bar.getId() ) Example 2.2. JVM identity foo==bar For objects attached to a particular Session, the two notions are equivalent, and JVM identity for database identity is guaranteed by Hibernate. The application might concurrently access a business object with the same identity in two different sessions, the two instances are actually different, in terms of JVM identity. Conflicts are resolved using an optimistic approach and automatic versioning at flush/commit time. 
This approach places responsibility for concurrency on Hibernate and the database. It also provides the best scalability, since expensive locking is not needed to guarantee identity in single-threaded units of work. The application does not need to synchronize on any business object, as long as it maintains a single thread per Session. While not recommended, within a Session the application could safely use the == operator to compare objects. However, an application that uses the == operator outside of a Session may introduce problems. If you put two detached instances into the same Set, they might use the same database identity, which means they represent the same row in the database. They would not be guaranteed to have the same JVM identity if they are in a detached state. Override the equals and hashCode methods in persistent classes, so that they have their own notion of object equality. Never use the database identifier to implement equality. Instead, use a business key that is a combination of unique, typically immutable, attributes. The database identifier changes if a transient object is made persistent. If the transient instance, together with detached instances, is held in a Set, changing the hash-code breaks the contract of the Set. Attributes for business keys can be less stable than database primary keys. You only need to guarantee stability as long as the objects are in the same Set. This is not a Hibernate issue, but relates to Java's implementation of object identity and equality.

2.6. Common issues

Both the session-per-user-session and session-per-application anti-patterns are susceptible to the following issues. Some of the issues might also arise within the recommended patterns, so ensure that you understand the implications before making a design decision:
• A Session is not thread-safe. Things that work concurrently, like HTTP requests, session beans, or Swing workers, will cause race conditions if a Session instance is shared. If you keep your Hibernate Session in your javax.servlet.http.HttpSession (this is discussed later in the chapter), you should consider synchronizing access to your HttpSession; otherwise, a user that clicks reload fast enough can use the same Session in two concurrently running threads.
• An exception thrown by Hibernate means you have to rollback your database transaction and close the Session immediately (this is discussed in more detail later in the chapter). If your Session is bound to the application, you have to stop the application. Rolling back the database transaction does not put your business objects back into the state they were at the start of the transaction. This means that the database state and the business objects will be out of sync. Usually this is not a problem, because exceptions are not recoverable and you will have to start over after rollback anyway.
• The Session caches every object that is in a persistent state (watched and checked for changes by Hibernate). If you keep it open for a long time or simply load too much data, it will grow endlessly until you get an OutOfMemoryException. One solution is to call clear() and evict() to manage the Session cache, but you should consider an alternate means of dealing with large amounts of data such as a stored procedure. Java is simply not the right tool for these kinds of operations. Some solutions are shown in Chapter 4, Batch Processing. Keeping a Session open for the duration of a user session also means a higher probability of stale data.
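Returning to the business-key advice in Section 2.5, here is a minimal sketch for a hypothetical User class whose unique, effectively immutable userName serves as the business key; getters, setters, and mapping metadata are omitted for brevity.

public class User {

    private Long id;          // database identifier: never used for equality
    private String userName;  // business key: unique and effectively immutable

    @Override
    public boolean equals(Object other) {
        if ( this == other ) return true;
        if ( !( other instanceof User ) ) return false;
        User that = (User) other;
        return userName.equals( that.userName );
    }

    @Override
    public int hashCode() {
        return userName.hashCode();
    }
}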
Chapter 3. Persistence Contexts

Table of Contents 3.1. Making entities persistent 3.2. Deleting entities 3.3. Obtain an entity reference without initializing its data 3.4. Obtain an entity with its data initialized 3.5. Obtain an entity by natural-id 3.6. Refresh entity state 3.7. Modifying managed/persistent state 3.8. Working with detached data 3.8.1. Reattaching detached data 3.8.2. Merging detached data 3.9. Checking persistent state 3.10. Accessing Hibernate APIs from JPA

Both the org.hibernate.Session API and the javax.persistence.EntityManager API represent a context for dealing with persistent data. This concept is called a persistence context. Persistent data has a state in relation to both a persistence context and the underlying database.

Entity states
• new, or transient - the entity has just been instantiated and is not associated with a persistence context. It has no persistent representation in the database and no identifier value has been assigned.
• managed, or persistent - the entity has an associated identifier and is associated with a persistence context.
• detached - the entity has an associated identifier, but is no longer associated with a persistence context (usually because the persistence context was closed or the instance was evicted from the context)
• removed - the entity has an associated identifier and is associated with a persistence context, however it is scheduled for removal from the database.

In Hibernate native APIs, the persistence context is defined as the org.hibernate.Session. In JPA, the persistence context is defined by the javax.persistence.EntityManager. Much of the org.hibernate.Session and javax.persistence.EntityManager methods deal with moving entities between these states.

3.1. Making entities persistent

Once you've created a new entity instance (using the standard new operator) it is in new state. You can make it persistent by associating it to either an org.hibernate.Session or a javax.persistence.EntityManager.

Example 3.1. Example of making an entity persistent
DomesticCat fritz = new DomesticCat();
fritz.setColor( Color.GINGER );
fritz.setSex( 'M' );
fritz.setName( "Fritz" );
session.save( fritz );

DomesticCat fritz = new DomesticCat();
fritz.setColor( Color.GINGER );
fritz.setSex( 'M' );
fritz.setName( "Fritz" );
entityManager.persist( fritz );

org.hibernate.Session also has a method named persist which follows the exact semantic defined in the JPA specification for the persist method. It is this method on org.hibernate.Session to which the Hibernate javax.persistence.EntityManager implementation delegates. If the DomesticCat entity type has a generated identifier, the value is associated to the instance when the save or persist is called. If the identifier is not automatically generated, the application-assigned (usually natural) key value has to be set on the instance before save or persist is called.

3.2. Deleting entities

Entities can also be deleted.

Example 3.2. Example of deleting an entity
session.delete( fritz );
entityManager.remove( fritz );

It is important to note that Hibernate itself can handle deleting detached state. JPA, however, disallows it. The implication here is that the entity instance passed to the org.hibernate.Session delete method can be either in managed or detached state, while the entity instance passed to remove on javax.persistence.EntityManager must be in managed state.
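A short sketch of that difference, assuming someDetachedCat was loaded in an earlier, now-closed persistence context:

// Hibernate native API: a detached instance can be deleted directly
session.delete( someDetachedCat );

// JPA: the instance must be managed before it can be removed
Cat managed = entityManager.merge( someDetachedCat );
entityManager.remove( managed );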
3.3. Obtain an entity reference without initializing its data

Sometimes referred to as lazy loading, the ability to obtain a reference to an entity without having to load its data is hugely important. The most common case being the need to create an association between an entity and another, existing entity.

Example 3.3. Example of obtaining an entity reference without initializing its data
Book book = new Book();
book.setAuthor( session.byId( Author.class ).getReference( authorId ) );

Book book = new Book();
book.setAuthor( entityManager.getReference( Author.class, authorId ) );

The above works on the assumption that the entity is defined to allow lazy loading, generally through use of runtime proxies. For more information see ???. In both cases an exception will be thrown later if the given entity does not refer to actual database state if and when the application attempts to use the returned proxy in any way that requires access to its data.

3.4. Obtain an entity with its data initialized

It is also quite common to want to obtain an entity along with its data, for display for example.

Example 3.4. Example of obtaining an entity reference with its data initialized
session.byId( Author.class ).load( authorId );
entityManager.find( Author.class, authorId );

In both cases null is returned if no matching database row was found.

3.5. Obtain an entity by natural-id

In addition to allowing to load by identifier, Hibernate allows applications to load by declared natural identifier.

Example 3.5. Example of simple natural-id access
@Entity
public class User {
    @Id @GeneratedValue Long id;
    @NaturalId String userName;
    ...
}

// use getReference() to create associations...
Resource aResource = (Resource) session.byId( Resource.class ).getReference( 123 );
User aUser = (User) session.bySimpleNaturalId( User.class ).getReference( "steve" );
aResource.assignTo( aUser );

// use load() to pull initialized data
return session.bySimpleNaturalId( User.class ).load( "steve" );

Example 3.6. Example of natural-id access
import java.lang.String;

@Entity
public class User {
    @Id @GeneratedValue Long id;
    @NaturalId String system;
    @NaturalId String userName;
    ...
}

// use getReference() to create associations...
Resource aResource = (Resource) session.byId( Resource.class ).getReference( 123 );
User aUser = (User) session.byNaturalId( User.class )
        .using( "system", "prod" )
        .using( "userName", "steve" )
        .getReference();
aResource.assignTo( aUser );

// use load() to pull initialized data
return session.byNaturalId( User.class )
        .using( "system", "prod" )
        .using( "userName", "steve" )
        .load();

Just like we saw above, accessing entity data by natural-id allows both the load and getReference forms, with the same semantics. Accessing persistent data by identifier and by natural-id is consistent in the Hibernate API. Each defines the same two data access methods:

getReference
Should be used in cases where the identifier is assumed to exist, where non-existence would be an actual error. Should never be used to test existence. That is because this method will prefer to create and return a proxy if the data is not already associated with the Session rather than hit the database. The quintessential use-case for using this method is to create foreign-key based associations.

load
Will return the persistent data associated with the given identifier value or null if that identifier does not exist.

In addition to those two methods, each also defines a form accepting an org.hibernate.LockOptions argument.
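For example, a hedged sketch of the lock-aware form; the pessimistic lock mode and the userId variable are illustrative assumptions only.

User aUser = (User) session.byId( User.class )
        .with( new LockOptions( LockMode.PESSIMISTIC_WRITE ) )
        .load( userId );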
Locking is discussed in a separate chapter. 3.6. Refresh entity state You can reload an entity instance and it's collections at any time. Example 3.7. Example of refreshing entity state Cat cat = session.get( Cat.class, catId ); ... session.refresh( cat ); Cat cat = entityManager.find( Cat.class, catId ); ... entityManager.refresh( cat ); One case where this is useful is when it is known that the database state has changed since the data was read. Refreshing allows the current database state to be pulled into the entity instance and the persistence context. Another case where this might be useful is when database triggers are used to initialize some of the properties of the entity. Note that only the entity instance and its collections are refreshed unless you specify REFRESH as a cascade style of any associations. However, please note that Hibernate has the capability to handle this automatically through its notion of generated properties. See ??? for information. 3.7. Modifying managed/persistent state Entities in managed/persistent state may be manipulated by the application and any changes will be automatically detected and persisted when the persistence context is flushed. There is no need to call a particular method to make your modifications persistent. Example 3.8. Example of modifying managed state Cat cat = session.get( Cat.class, catId ); cat.setName( "Garfield" ); session.flush(); // generally this is not explicitly needed Cat cat = entityManager.find( Cat.class, catId ); cat.setName( "Garfield" ); entityManager.flush(); // generally this is not explicitly needed 3.8. Working with detached data Detachment is the process of working with data outside the scope of any persistence context. Data becomes detached in a number of ways. Once the persistence context is closed, all data that was associated with it becomes detached. Clearing the persistence context has the same effect. Evicting a particular entity from the persistence context makes it detached. And finally, serialization will make the deserialized form be detached (the original instance is still managed). Detached data can still be manipulated, however the persistence context will no longer automatically know about these modification and the application will need to intervene to make the changes persistent. 3.8.1. Reattaching detached data Reattachment is the process of taking an incoming entity instance that is in detached state and re-associating it with the current persistence context. Important JPA does not provide for this model. This is only available through Hibernate org.hibernate.Session. Example 3.9. Example of reattaching a detached entity session.saveOrUpdate( someDetachedCat ); The method name update is a bit misleading here. It does not mean that an SQL UPDATE is immediately performed. It does, however, mean that an SQL UPDATE will be performed when the persistence context is flushed since Hibernate does not know its previous state against which to compare for changes. Unless the entity is mapped with select-beforeupdate, in which case Hibernate will pull the current state from the database and see if an update is needed. Provided the entity is detached, update and saveOrUpdate operate exactly the same. 3.8.2. Merging detached data Merging is the process of taking an incoming entity instance that is in detached state and copying its data over onto a new instance that is in managed state. Example 3.10. 
Example 3.10. Visualizing merge

Object detached = ...;
Object managed = entityManager.find( detached.getClass(), detached.getId() );
managed.setXyz( detached.getXyz() );
...
return managed;

That is not exactly what happens, but it is a good visualization.

Example 3.11. Example of merging a detached entity

Cat theManagedInstance = session.merge( someDetachedCat );

Cat theManagedInstance = entityManager.merge( someDetachedCat );

3.9. Checking persistent state

An application can verify the state of entities and collections in relation to the persistence context.

Example 3.12. Examples of verifying managed state

assert session.contains( cat );

assert entityManager.contains( cat );

Example 3.13. Examples of verifying laziness

if ( Hibernate.isInitialized( customer.getAddress() ) ) {
    //display address if loaded
}
if ( Hibernate.isInitialized( customer.getOrders() ) ) {
    //display orders if loaded
}
if ( Hibernate.isPropertyInitialized( customer, "detailedBio" ) ) {
    //display property detailedBio if loaded
}

javax.persistence.PersistenceUnitUtil jpaUtil = entityManager.getEntityManagerFactory().getPersistenceUnitUtil();
if ( jpaUtil.isLoaded( customer.getAddress() ) ) {
    //display address if loaded
}
if ( jpaUtil.isLoaded( customer.getOrders() ) ) {
    //display orders if loaded
}
if ( jpaUtil.isLoaded( customer, "detailedBio" ) ) {
    //display property detailedBio if loaded
}

In JPA there is an alternative means to check laziness using the following javax.persistence.PersistenceUtil pattern. However, javax.persistence.PersistenceUnitUtil is recommended wherever possible.

Example 3.14. Alternative JPA means to verify laziness

javax.persistence.PersistenceUtil jpaUtil = javax.persistence.Persistence.getPersistenceUtil();
if ( jpaUtil.isLoaded( customer.getAddress() ) ) {
    //display address if loaded
}
if ( jpaUtil.isLoaded( customer.getOrders() ) ) {
    //display orders if loaded
}
if ( jpaUtil.isLoaded( customer, "detailedBio" ) ) {
    //display property detailedBio if loaded
}

3.10. Accessing Hibernate APIs from JPA

JPA defines an incredibly useful method to allow applications access to the APIs of the underlying provider.

Example 3.15. Usage of EntityManager.unwrap

Session session = entityManager.unwrap( Session.class );
SessionImplementor sessionImplementor = entityManager.unwrap( SessionImplementor.class );

Chapter 4. Batch Processing

Table of Contents
4.1. Batch inserts
4.2. Batch updates
4.3. StatelessSession
4.4. Hibernate Query Language for DML
4.4.1. HQL for UPDATE and DELETE
4.4.2. HQL syntax for INSERT
4.4.3. More information on HQL

The following example shows an antipattern for batch inserts.

Example 4.1. Naive way to insert 100000 lines with Hibernate

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ ) {
    Customer customer = new Customer(.....);
    session.save(customer);
}
tx.commit();
session.close();

This fails with an OutOfMemoryError after around 50000 rows on most systems. The reason is that Hibernate caches all the newly inserted Customer instances in the session-level cache. There are several ways to avoid this problem.

Before batch processing, enable JDBC batching. To enable JDBC batching, set the property hibernate.jdbc.batch_size to an integer between 10 and 50.

Note
Hibernate disables insert batching at the JDBC level transparently if you use an identity identifier generator.

If the above approach is not appropriate, you can disable the second-level cache by setting hibernate.cache.use_second_level_cache to false.
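As a minimal sketch (not part of the original examples) of the JDBC batching setting mentioned above, applied programmatically via org.hibernate.cfg.Configuration; the same property can equally be set in hibernate.properties or hibernate.cfg.xml, and the value 20 is only illustrative:

Configuration configuration = new Configuration().configure();
// group up to 20 SQL statements per JDBC batch; keep this in sync with the flush/clear interval used below
configuration.setProperty( "hibernate.jdbc.batch_size", "20" );
SessionFactory sessionFactory = configuration.buildSessionFactory();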
4.1. Batch inserts

When you make new objects persistent, flush() and then clear() the session regularly in order to control the size of the first-level cache.

Example 4.2. Flushing and clearing the Session

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ ) {
    Customer customer = new Customer(.....);
    session.save(customer);
    if ( i % 20 == 0 ) { //20, same as the JDBC batch size
        //flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();

4.2. Batch updates

When you retrieve and update data, flush() and clear() the session regularly. In addition, use method scroll() to take advantage of server-side cursors for queries that return many rows of data.

Example 4.3. Using scroll()

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
ScrollableResults customers = session.getNamedQuery("GetCustomers")
    .setCacheMode(CacheMode.IGNORE)
    .scroll(ScrollMode.FORWARD_ONLY);
int count=0;
while ( customers.next() ) {
    Customer customer = (Customer) customers.get(0);
    customer.updateStuff(...);
    if ( ++count % 20 == 0 ) {
        //flush a batch of updates and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();

4.3. StatelessSession

StatelessSession is a command-oriented API provided by Hibernate. Use it to stream data to and from the database in the form of detached objects. A StatelessSession has no persistence context associated with it and does not provide many of the higher-level life cycle semantics.

Features and behaviors not provided by StatelessSession:
• a first-level cache
• interaction with any second-level or query cache
• transactional write-behind or automatic dirty checking

Limitations of StatelessSession:
• Operations performed using a stateless session never cascade to associated instances.
• Collections are ignored by a stateless session.
• Operations performed via a stateless session bypass Hibernate's event model and interceptors.
• Due to the lack of a first-level cache, stateless sessions are vulnerable to data aliasing effects.
• A stateless session is a lower-level abstraction that is much closer to the underlying JDBC.

Example 4.4. Using a StatelessSession

StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
ScrollableResults customers = session.getNamedQuery("GetCustomers")
    .scroll(ScrollMode.FORWARD_ONLY);
while ( customers.next() ) {
    Customer customer = (Customer) customers.get(0);
    customer.updateStuff(...);
    session.update(customer);
}
tx.commit();
session.close();

The Customer instances returned by the query are immediately detached. They are never associated with any persistence context.

The insert(), update(), and delete() operations defined by the StatelessSession interface operate directly on database rows. They cause the corresponding SQL operations to be executed immediately. They have different semantics from the save(), saveOrUpdate(), and delete() operations defined by the Session interface.
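Complementing the update-oriented example above, here is a minimal sketch (not from the original text) of using StatelessSession for the bulk-insert scenario of Example 4.1; because there is no persistence context, nothing accumulates in memory and no flush()/clear() calls are needed:

StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ ) {
    Customer customer = new Customer(.....);
    session.insert(customer); // executes the corresponding SQL INSERT immediately
}
tx.commit();
session.close();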
4.4. Hibernate Query Language for DML

DML, or Data Manipulation Language, refers to SQL statements such as INSERT, UPDATE, and DELETE. Hibernate provides methods for bulk SQL-style DML statement execution, in the form of the Hibernate Query Language (HQL).

4.4.1. HQL for UPDATE and DELETE

Example 4.5. Pseudo-syntax for UPDATE and DELETE statements using HQL

( UPDATE | DELETE ) FROM? EntityName (WHERE where_conditions)?

The ? suffix indicates an optional element. The FROM and WHERE clauses are each optional.

The FROM clause can only refer to a single entity, which can be aliased. If the entity name is aliased, any property references must be qualified using that alias. If the entity name is not aliased, then it is illegal for any property references to be qualified.

Joins, either implicit or explicit, are prohibited in a bulk HQL query. You can use sub-queries in the WHERE clause, and the sub-queries themselves can contain joins.

Example 4.6. Executing an HQL UPDATE, using the Query.executeUpdate() method

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlUpdate = "update Customer c set c.name = :newName where c.name = :oldName";
// or String hqlUpdate = "update Customer set name = :newName where name = :oldName";
int updatedEntities = session.createQuery( hqlUpdate )
    .setString( "newName", newName )
    .setString( "oldName", oldName )
    .executeUpdate();
tx.commit();
session.close();

In keeping with the EJB3 specification, HQL UPDATE statements, by default, do not affect the version or the timestamp property values for the affected entities. You can use a versioned update to force Hibernate to reset the version or timestamp property values, by adding the VERSIONED keyword after the UPDATE keyword.

Example 4.7. Updating the version or timestamp

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlVersionedUpdate = "update versioned Customer set name = :newName where name = :oldName";
int updatedEntities = session.createQuery( hqlVersionedUpdate )
    .setString( "newName", newName )
    .setString( "oldName", oldName )
    .executeUpdate();
tx.commit();
session.close();

Note
If you use the VERSIONED statement, you cannot use custom version types, which use class org.hibernate.usertype.UserVersionType.

Example 4.8. A HQL DELETE statement

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlDelete = "delete Customer c where c.name = :oldName";
// or String hqlDelete = "delete Customer where name = :oldName";
int deletedEntities = session.createQuery( hqlDelete )
    .setString( "oldName", oldName )
    .executeUpdate();
tx.commit();
session.close();

Method Query.executeUpdate() returns an int value, which indicates the number of entities affected by the operation. This may or may not correlate to the number of rows affected in the database. An HQL bulk operation might result in multiple SQL statements being executed, such as for joined-subclass. In the example of joined-subclass, a DELETE against one of the subclasses may actually result in deletes in the tables underlying the join, or further down the inheritance hierarchy.

4.4.2. HQL syntax for INSERT

Example 4.9. Pseudo-syntax for INSERT statements

INSERT INTO EntityName properties_list select_statement

Only the INSERT INTO ... SELECT ... form is supported. You cannot specify explicit values to insert.

The properties_list is analogous to the column specification in the SQL INSERT statement. For entities involved in mapped inheritance, you can only use properties directly defined on that given class-level in the properties_list. Superclass properties are not allowed and subclass properties are irrelevant. In other words, INSERT statements are inherently non-polymorphic.

The select_statement can be any valid HQL select query, but the return types must match the types expected by the INSERT.
Hibernate verifies the return types during query compilation, instead of expecting the database to check it. Problems might result from Hibernate types which are equivalent, rather than equal. One such example is a mismatch between a property defined as an org.hibernate.type.DateType and a property defined as an org.hibernate.type.TimestampType, even though the database may not make a distinction, or may be capable of handling the conversion.

If the id property is not specified in the properties_list, Hibernate generates a value automatically. Automatic generation is only available if you use ID generators which operate on the database. Otherwise, Hibernate throws an exception during parsing. Available in-database generators are org.hibernate.id.SequenceGenerator and its subclasses, and objects which implement org.hibernate.id.PostInsertIdentifierGenerator. The most notable exception is org.hibernate.id.TableHiLoGenerator, which does not expose a selectable way to get its values.

For properties mapped as either version or timestamp, the insert statement gives you two options. You can either specify the property in the properties_list, in which case its value is taken from the corresponding select expressions, or omit it from the properties_list, in which case the seed value defined by the org.hibernate.type.VersionType is used.

Example 4.10. HQL INSERT statement

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlInsert = "insert into DelinquentAccount (id, name) select c.id, c.name from Customer c where ...";
int createdEntities = session.createQuery( hqlInsert )
    .executeUpdate();
tx.commit();
session.close();

4.4.3. More information on HQL

This section is only a brief overview of HQL. For more information, see ???.

Chapter 5. Locking

Table of Contents
5.1. Optimistic
5.1.1. Dedicated version number
5.1.2. Timestamp
5.2. Pessimistic
5.2.1. The LockMode class

Locking refers to actions taken to prevent data in a relational database from changing between the time it is read and the time that it is used. Your locking strategy can be either optimistic or pessimistic.

Locking strategies

Optimistic
Optimistic locking assumes that multiple transactions can complete without affecting each other, and that therefore transactions can proceed without locking the data resources that they affect. Before committing, each transaction verifies that no other transaction has modified its data. If the check reveals conflicting modifications, the committing transaction rolls back [1].

Pessimistic
Pessimistic locking assumes that concurrent transactions will conflict with each other, and requires resources to be locked after they are read and only unlocked after the application has finished using the data.

Hibernate provides mechanisms for implementing both types of locking in your applications.

5.1. Optimistic

When your application uses long transactions or conversations that span several database transactions, you can store versioning data, so that if the same entity is updated by two conversations, the last to commit changes is informed of the conflict, and does not override the other conversation's work. This approach guarantees some isolation, but scales well and works particularly well in Read-Often Write-Sometimes situations.

Hibernate provides two different mechanisms for storing versioning information: a dedicated version number or a timestamp.

Note
A version or timestamp property can never be null for a detached instance.
Hibernate detects any instance with a null version or timestamp as transient, regardless of other unsaved-value strategies that you specify. Declaring a nullable version or timestamp property is an easy way to avoid problems with transitive reattachment in Hibernate, especially useful if you use assigned identifiers or composite keys.

5.1.1. Dedicated version number

The version number mechanism for optimistic locking is provided through a @Version annotation.

Example 5.1. The @Version annotation

@Entity
public class Flight implements Serializable {
    ...
    @Version
    @Column(name="OPTLOCK")
    public Integer getVersion() { ... }
}

Here, the version property is mapped to the OPTLOCK column, and the entity manager uses it to detect conflicting updates, preventing the loss of updates that would otherwise be overwritten by a last-commit-wins strategy.

The version column can be any kind of type, as long as you define and implement the appropriate UserVersionType.

Your application is forbidden from altering the version number set by Hibernate. To artificially increase the version number, see the documentation for LockModeType.OPTIMISTIC_FORCE_INCREMENT or LockModeType.PESSIMISTIC_FORCE_INCREMENT in the Hibernate Entity Manager reference documentation.

Database-generated version numbers
If the version number is generated by the database, such as by a trigger, use the annotation @org.hibernate.annotations.Generated(GenerationTime.ALWAYS).

Example 5.2. Declaring a version property in hbm.xml

The version element of hbm.xml accepts the following attributes:

• column: The name of the column holding the version number. Optional, defaults to the property name.
• name: The name of a property of the persistent class.
• type: The type of the version number. Optional, defaults to integer.
• access: Hibernate's strategy for accessing the property value. Optional, defaults to property.
• unsaved-value: Indicates that an instance is newly instantiated and thus unsaved. This distinguishes it from detached instances that were saved or loaded in a previous session. The default value, undefined, indicates that the identifier property value should be used. Optional.
• generated: Indicates that the version property value is generated by the database. Optional, defaults to never.
• insert: Whether or not to include the version column in SQL insert statements. Defaults to true, but you can set it to false if the database column is defined with a default value of 0.

5.1.2. Timestamp

Timestamps are a less reliable form of optimistic locking than version numbers, but can be used by applications for other purposes as well. Timestamping is automatically used if you use the @Version annotation on a Date or Calendar property.

Example 5.3. Using timestamps for optimistic locking

@Entity
public class Flight implements Serializable {
    ...
    @Version
    public Date getLastUpdate() { ... }
}

Hibernate can retrieve the timestamp value from the database or the JVM, by reading the value you specify for the @org.hibernate.annotations.Source annotation. The value can be either org.hibernate.annotations.SourceType.DB or org.hibernate.annotations.SourceType.VM. The default behavior is to use the database, and is also used if you don't specify the annotation at all.

The timestamp can also be generated by the database instead of Hibernate, if you use the @org.hibernate.annotations.Generated(GenerationTime.ALWAYS) annotation.
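As a minimal sketch (not in the original text) of making the timestamp source explicit with the @Source annotation just described; it builds on Example 5.3 and simply assumes you want the database, rather than the JVM clock, to supply the value:

@Entity
public class Flight implements Serializable {
    ...
    @Version
    // read the timestamp from the database rather than the JVM on each update
    @org.hibernate.annotations.Source( org.hibernate.annotations.SourceType.DB )
    public Date getLastUpdate() { ... }
}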
Example 5.4. The timestamp element in hbm.xml

The timestamp element of hbm.xml accepts the following attributes:

• column: The name of the column which holds the timestamp. Optional, defaults to the property name.
• name: The name of a JavaBeans-style property of Java type Date or Timestamp of the persistent class.
• access: The strategy Hibernate uses to access the property value. Optional, defaults to property.
• unsaved-value: A version property which indicates that an instance is newly instantiated, and unsaved. This distinguishes it from detached instances that were saved or loaded in a previous session. The default value of undefined indicates that Hibernate uses the identifier property value.
• source: Whether Hibernate retrieves the timestamp from the database or the current JVM. Database-based timestamps incur an overhead because Hibernate needs to query the database each time to determine the incremental next value. However, database-derived timestamps are safer to use in a clustered environment. Not all database dialects are known to support the retrieval of the database's current timestamp. Others may also be unsafe for locking, because of lack of precision.
• generated: Whether the timestamp property value is generated by the database. Optional, defaults to never.

5.2. Pessimistic

Typically, you only need to specify an isolation level for the JDBC connections and let the database handle locking issues. If you do need to obtain exclusive pessimistic locks or re-obtain locks at the start of a new transaction, Hibernate gives you the tools you need.

Note
Hibernate always uses the locking mechanism of the database, and never locks objects in memory.

5.2.1. The LockMode class

The LockMode class defines the different lock levels that Hibernate can acquire.

• LockMode.WRITE: acquired automatically when Hibernate updates or inserts a row.
• LockMode.UPGRADE: acquired upon explicit user request using SELECT ... FOR UPDATE on databases which support that syntax.
• LockMode.UPGRADE_NOWAIT: acquired upon explicit user request using a SELECT ... FOR UPDATE NOWAIT in Oracle.
• LockMode.READ: acquired automatically when Hibernate reads data under Repeatable Read or Serializable isolation level. It can be re-acquired by explicit user request.
• LockMode.NONE: The absence of a lock. All objects switch to this lock mode at the end of a Transaction. Objects associated with the session via a call to update() or saveOrUpdate() also start out in this lock mode.

The explicit user request mentioned above occurs as a consequence of any of the following actions:
• A call to Session.load(), specifying a LockMode.
• A call to Session.lock().
• A call to Query.setLockMode().

If you call Session.load() with option UPGRADE or UPGRADE_NOWAIT, and the requested object is not already loaded by the session, the object is loaded using SELECT ... FOR UPDATE. If you call load() for an object that is already loaded with a less restrictive lock than the one you request, Hibernate calls lock() for that object.

Session.lock() performs a version number check if the specified lock mode is READ, UPGRADE, or UPGRADE_NOWAIT. In the case of UPGRADE or UPGRADE_NOWAIT, SELECT ... FOR UPDATE syntax is used.

If the requested lock mode is not supported by the database, Hibernate uses an appropriate alternate mode instead of throwing an exception. This ensures that applications are portable.
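A minimal sketch (not from the original text) of the explicit lock requests listed above; Cat, catId and minWeight are placeholders:

// load the row and lock it, using SELECT ... FOR UPDATE where the database supports it
Cat cat = (Cat) session.load( Cat.class, catId, LockMode.UPGRADE );

// request a lock mode for the rows matched by a particular query alias
List cats = session.createQuery( "from Cat as cat where cat.weight > :minWeight" )
    .setParameter( "minWeight", minWeight )
    .setLockMode( "cat", LockMode.UPGRADE )
    .list();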
[1] http://en.wikipedia.org/wiki/Optimistic_locking

Chapter 6. Caching

Table of Contents
6.1. The query cache
6.1.1. Query cache regions
6.2. Second-level cache providers
6.2.1. Configuring your cache providers
6.2.2. Caching strategies
6.2.3. Second-level cache providers for Hibernate
6.3. Managing the cache
6.3.1. Moving items into and out of the cache

6.1. The query cache

If you have queries that run over and over, with the same parameters, query caching provides performance gains.

Caching introduces overhead in the area of transactional processing. For example, if you cache results of a query against an object, Hibernate needs to keep track of whether any changes have been committed against the object, and invalidate the cache accordingly. In addition, the benefit from caching query results is limited, and highly dependent on the usage patterns of your application. For these reasons, Hibernate disables the query cache by default.

Procedure 6.1. Enabling the query cache

1. Set the hibernate.cache.use_query_cache property to true. This setting creates two new cache regions:
• org.hibernate.cache.internal.StandardQueryCache holds the cached query results.
• org.hibernate.cache.spi.UpdateTimestampsCache holds timestamps of the most recent updates to queryable tables. These timestamps validate results served from the query cache.

2. Adjust the cache timeout of the underlying cache region. If you configure your underlying cache implementation to use expiry or timeouts, set the cache timeout of the underlying cache region for the UpdateTimestampsCache to a higher value than the timeouts of any of the query caches. It is possible, and recommended, to set the UpdateTimestampsCache region never to expire. To be specific, a LRU (Least Recently Used) cache expiry policy is never appropriate.

3. Enable results caching for specific queries. Since most queries do not benefit from caching of their results, you need to enable caching for individual queries, even after enabling query caching overall. To enable results caching for a particular query, call org.hibernate.Query.setCacheable(true). This call allows the query to look for existing cache results or add its results to the cache when it is executed.

The query cache does not cache the state of the actual entities in the cache. It caches identifier values and results of value type. Therefore, always use the query cache in conjunction with the second-level cache for those entities which should be cached as part of a query result cache.

6.1.1. Query cache regions

For fine-grained control over query cache expiration policies, specify a named cache region for a particular query by calling Query.setCacheRegion().

Example 6.1. Method setCacheRegion

List blogs = sess.createQuery("from Blog blog where blog.blogger = :blogger")
    .setEntity("blogger", blogger)
    .setMaxResults(15)
    .setCacheable(true)
    .setCacheRegion("frontpages")
    .list();

To force the query cache to refresh one of its regions and disregard any cached results in the region, call org.hibernate.Query.setCacheMode(CacheMode.REFRESH). In conjunction with the region defined for the given query, Hibernate selectively refreshes the results cached in that particular region. This is much more efficient than bulk eviction of the region via org.hibernate.SessionFactory.evictQueries().
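A minimal sketch (not from the original text) that combines Example 6.1 with the refresh behavior just described; it reuses the same hypothetical Blog query and the frontpages region:

List blogs = sess.createQuery("from Blog blog where blog.blogger = :blogger")
    .setEntity("blogger", blogger)
    .setMaxResults(15)
    .setCacheable(true)
    .setCacheRegion("frontpages")
    .setCacheMode(CacheMode.REFRESH) // ignore anything already cached in "frontpages" and re-cache fresh results
    .list();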
6.2. Second-level cache providers

Hibernate is compatible with several second-level cache providers. None of the providers support all of Hibernate's possible caching strategies. Section 6.2.3, “Second-level cache providers for Hibernate” lists the providers, along with their interfaces and supported caching strategies. For definitions of caching strategies, see Section 6.2.2, “Caching strategies”.

6.2.1. Configuring your cache providers

You can configure your cache providers using either annotations or mapping files.

Entities. By default, entities are not part of the second-level cache, and it is recommended that you keep this default. If you absolutely must cache entities, set the shared-cache-mode element in persistence.xml, or use the property javax.persistence.sharedCache.mode in your configuration. Use one of the values in Table 6.1, “Possible values for Shared Cache Mode”.

Table 6.1. Possible values for Shared Cache Mode

• ENABLE_SELECTIVE: Entities are not cached unless you explicitly mark them as cacheable. This is the default and recommended value.
• DISABLE_SELECTIVE: Entities are cached unless you explicitly mark them as not cacheable.
• ALL: All entities are always cached even if you mark them as not cacheable.
• NONE: No entities are cached even if you mark them as cacheable. This option basically disables second-level caching.

Set the global default cache concurrency strategy with the hibernate.cache.default_cache_concurrency_strategy configuration property. See Section 6.2.2, “Caching strategies” for possible values.

Note
When possible, define the cache concurrency strategy per entity rather than globally. Use the @org.hibernate.annotations.Cache annotation.

Example 6.2. Configuring cache providers using annotations

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Forest { ... }

You can cache the content of a collection or the identifiers, if the collection contains other entities. Use the @Cache annotation on the Collection property, as sketched at the end of this section.

@Cache can take several attributes.

Attributes of the @Cache annotation:
• usage: The given cache concurrency strategy, which may be NONE, READ_ONLY, NONSTRICT_READ_WRITE, READ_WRITE, or TRANSACTIONAL.
• region: The cache region. This attribute is optional, and defaults to the fully-qualified class name of the class, or the fully-qualified role name of the collection.
• include: Whether or not to include all properties. Optional, and can take one of two possible values. A value of all includes all properties; this is the default. A value of non-lazy only includes non-lazy properties.

Example 6.3. Configuring cache providers using mapping files

Just as in Example 6.2, “Configuring cache providers using annotations”, you can provide the attributes in the mapping file. There are some specific differences in the syntax for the attributes in a mapping file.

• usage: The caching strategy. This attribute is required, and can be any of the following values: transactional, read-write, nonstrict-read-write, or read-only.
• region: The name of the second-level cache region. This optional attribute defaults to the class or collection role name.
• include: Whether properties of the entity mapped with lazy=true can be cached when attribute-level lazy fetching is enabled. Defaults to all and can also be non-lazy.

Instead of the cache element, you can use class-cache and collection-cache elements in hibernate.cfg.xml.
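A minimal sketch (not part of the original examples) of the collection caching described above; it extends Example 6.2 with a hypothetical Forest-to-Tree association. Note that only the identifiers of the associated Trees are cached by this annotation, so cache the Tree entity itself as well if its data should live in the second-level cache:

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Forest {
    @Id @GeneratedValue
    private Long id;

    // caches the collection of Tree identifiers under the collection's role
    @OneToMany
    @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
    private Set<Tree> trees;
    ...
}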
6.2.2. Caching strategies

read-only
A read-only cache is good for data that needs to be read often but not modified. It is simple, performs well, and is safe to use in a clustered environment.

nonstrict read-write
Some applications only rarely need to modify data. This is the case if two transactions are unlikely to try to update the same item simultaneously. In this case, you do not need strict transaction isolation, and a nonstrict-read-write cache might be appropriate. If the cache is used in a JTA environment, you must specify hibernate.transaction.manager_lookup_class. In other environments, ensure that the transaction is complete before you call Session.close() or Session.disconnect().

read-write
A read-write cache is appropriate for an application which needs to update data regularly. Do not use a read-write strategy if you need serializable transaction isolation. In a JTA environment, specify a strategy for obtaining the JTA TransactionManager by setting the property hibernate.transaction.manager_lookup_class. In non-JTA environments, be sure the transaction is complete before you call Session.close() or Session.disconnect().

Note
To use the read-write strategy in a clustered environment, the underlying cache implementation must support locking. The built-in cache providers do not support locking.

transactional
The transactional cache strategy provides support for transactional cache providers such as JBoss TreeCache. You can only use such a cache in a JTA environment, and you must first specify hibernate.transaction.manager_lookup_class.

6.2.3. Second-level cache providers for Hibernate

Cache: Supported strategies
• HashTable (testing only): read-only, nonstrict read-write, read-write
• EHCache: read-only, nonstrict read-write, read-write
• OSCache: read-only, nonstrict read-write, read-write
• SwarmCache: read-only, nonstrict read-write
• JBoss Cache 1.x: read-only, transactional
• JBoss Cache 2.x: read-only, transactional

6.3. Managing the cache

6.3.1. Moving items into and out of the cache

Actions that add an item to the internal cache of the Session:

Saving or updating an item:
• save()
• update()
• saveOrUpdate()

Retrieving an item:
• load()
• get()
• list()
• iterate()
• scroll()

Syncing or removing a cached item. The state of an object is synchronized with the database when you call method flush(). To avoid this synchronization, you can remove the object and all collections from the first-level cache with the evict() method. To remove all items from the Session cache, use method Session.clear().

Example 6.4. Evicting an item from the first-level cache

ScrollableResults cats = sess.createQuery("from Cat as cat").scroll(); //a huge result set
while ( cats.next() ) {
    Cat cat = (Cat) cats.get(0);
    doSomethingWithACat(cat);
    sess.evict(cat);
}

Determining whether an item belongs to the Session cache. The Session provides a contains() method to determine if an instance belongs to the session cache.

Example 6.5. Second-level cache eviction

You can evict the cached state of an instance, entire class, collection instance or entire collection role, using methods of SessionFactory.

sessionFactory.getCache().containsEntity(Cat.class, catId); // is this particular Cat currently in the cache
sessionFactory.getCache().evictEntity(Cat.class, catId); // evict a particular Cat
sessionFactory.getCache().evictEntityRegion(Cat.class); // evict all Cats
sessionFactory.getCache().evictEntityRegions(); // evict all entity data
sessionFactory.getCache().containsCollection("Cat.kittens", catId); // is this particular collection currently in the cache
sessionFactory.getCache().evictCollection("Cat.kittens", catId); // evict a particular collection of kittens
sessionFactory.getCache().evictCollectionRegion("Cat.kittens"); // evict all kitten collections
sessionFactory.getCache().evictCollectionRegions(); // evict all collection data

6.3.1.1. Interactions between a Session and the second-level cache

The CacheMode controls how a particular session interacts with the second-level cache.

CacheMode.NORMAL: reads items from and writes them to the second-level cache.
CacheMode.GET: reads items from the second-level cache, but does not write to the second-level cache except to update data.

CacheMode.PUT: writes items to the second-level cache. It does not read from the second-level cache.

CacheMode.REFRESH: writes items to the second-level cache and does not read from it, but bypasses the effect of hibernate.cache.use_minimal_puts and forces a refresh of the second-level cache for all items read from the database.

6.3.1.2. Browsing the contents of a second-level or query cache region

After enabling statistics, you can browse the contents of a second-level cache or query cache region.

Procedure 6.2. Enabling Statistics

1. Set hibernate.generate_statistics to true.
2. Optionally, set hibernate.cache.use_structured_entries to true, to cause Hibernate to store the cache entries in a human-readable format.

Example 6.6. Browsing the second-level cache entries via the Statistics API

Map cacheEntries = sessionFactory.getStatistics()
    .getSecondLevelCacheStatistics(regionName)
    .getEntries();

Chapter 7. Services

Table of Contents
7.1. What are services?
7.2. Service contracts
7.3. Service dependencies
7.3.1. @org.hibernate.service.spi.InjectService
7.3.2. org.hibernate.service.spi.ServiceRegistryAwareService
7.4. ServiceRegistry
7.5. Standard services
7.5.1. org.hibernate.engine.jdbc.batch.spi.BatchBuilder
7.5.2. org.hibernate.service.config.spi.ConfigurationService
7.5.3. org.hibernate.service.jdbc.connections.spi.ConnectionProvider
7.5.4. org.hibernate.service.jdbc.dialect.spi.DialectFactory
7.5.5. org.hibernate.service.jdbc.dialect.spi.DialectResolver
7.5.6. org.hibernate.engine.jdbc.spi.JdbcServices
7.5.7. org.hibernate.service.jmx.spi.JmxService
7.5.8. org.hibernate.service.jndi.spi.JndiService
7.5.9. org.hibernate.service.jta.platform.spi.JtaPlatform
7.5.10. org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider
7.5.11. org.hibernate.persister.spi.PersisterClassResolver
7.5.12. org.hibernate.persister.spi.PersisterFactory
7.5.13. org.hibernate.cache.spi.RegionFactory
7.5.14. org.hibernate.service.spi.SessionFactoryServiceRegistryFactory
7.5.15. org.hibernate.stat.Statistics
7.5.16. org.hibernate.engine.transaction.spi.TransactionFactory
7.5.17. org.hibernate.tool.hbm2ddl.ImportSqlCommandExtractor
7.6. Custom services
7.7. Special service registries
7.7.1. Boot-strap registry
7.7.2. SessionFactory registry
7.8. Using services and registries
7.9. Integrators
7.9.1. Integrator use-cases

7.1. What are services?

Services are classes that provide Hibernate with pluggable implementations of various types of functionality. Specifically they are implementations of certain service contract interfaces. The interface is known as the service role; the implementation class is known as the service implementation. Generally speaking, users can plug in alternate implementations of all standard service roles (overriding); they can also define additional services beyond the base set of service roles (extending).

7.2. Service contracts

The basic requirement for a service is to implement the marker interface org.hibernate.service.Service. Hibernate uses this internally for some basic type safety. Optionally, the service can also implement the org.hibernate.service.spi.Startable and org.hibernate.service.spi.Stoppable interfaces to receive notifications of being started and stopped. Another optional service contract is org.hibernate.service.spi.Manageable, which marks the service as manageable in JMX provided the JMX integration is enabled.
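As a minimal sketch (entirely hypothetical, not from the original text) of a service implementation that opts into the optional lifecycle contracts mentioned above:

public class FileAuditService implements org.hibernate.service.Service,
        org.hibernate.service.spi.Startable,
        org.hibernate.service.spi.Stoppable {

    @Override
    public void start() {
        // invoked once the service is ready to be used, e.g. open a log file here
    }

    @Override
    public void stop() {
        // invoked when the service is being released, e.g. close the log file here
    }
}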
7.3. Service dependencies

Services are allowed to declare dependencies on other services using either of two approaches.

7.3.1. @org.hibernate.service.spi.InjectService

Any method on the service implementation class accepting a single parameter and annotated with @InjectService is considered to be requesting injection of another service. By default the type of the method parameter is expected to be the service role to be injected. If the parameter type is different than the service role, the serviceRole attribute of the InjectService annotation should be used to explicitly name the role.

By default injected services are considered required, that is, start-up will fail if a named dependent service is missing. If the service to be injected is optional, the required attribute of the InjectService annotation should be declared as false (the default is true).

7.3.2. org.hibernate.service.spi.ServiceRegistryAwareService

The second approach is a pull approach where the service implements the optional service interface org.hibernate.service.spi.ServiceRegistryAwareService, which declares a single injectServices method. During startup, Hibernate will inject the org.hibernate.service.ServiceRegistry itself into services which implement this interface. The service can then use the ServiceRegistry reference to locate any additional services it needs.

7.4. ServiceRegistry

The central service API, aside from the services themselves, is the org.hibernate.service.ServiceRegistry interface. The main purpose of a service registry is to hold, manage and provide access to services.

Service registries are hierarchical. Services in one registry can depend on and utilize services in that same registry as well as any parent registries.

Use org.hibernate.service.ServiceRegistryBuilder to build a org.hibernate.service.ServiceRegistry instance.

7.5. Standard services

7.5.1. org.hibernate.engine.jdbc.batch.spi.BatchBuilder

Notes: Defines the strategy for how Hibernate manages JDBC statement batching.
Initiator: org.hibernate.engine.jdbc.batch.internal.BatchBuilderInitiator
Implementations: org.hibernate.engine.jdbc.batch.internal.BatchBuilderImpl

7.5.2. org.hibernate.service.config.spi.ConfigurationService

Notes: Provides access to the configuration settings, combining those explicitly provided as well as those contributed by any registered org.hibernate.integrator.spi.Integrator implementations.
Initiator: org.hibernate.service.config.internal.ConfigurationServiceInitiator
Implementations: org.hibernate.service.config.internal.ConfigurationServiceImpl

7.5.3. org.hibernate.service.jdbc.connections.spi.ConnectionProvider

Notes: Defines the means in which Hibernate can obtain and release java.sql.Connection instances for its use.
Initiator: org.hibernate.service.jdbc.connections.internal.ConnectionProviderInitiator
Implementations:
• org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider - provides connection pooling based on integration with the c3p0 connection pooling library
• org.hibernate.service.jdbc.connections.internal.DatasourceConnectionProviderImpl - provides connection management delegated to a javax.sql.DataSource
• org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl - provides rudimentary connection pooling based on a simple custom pool. Not intended for production use!
• org.hibernate.service.jdbc.connections.internal.ProxoolConnectionProvider - provides connection pooling based on integration with the Proxool connection pooling library
• org.hibernate.service.jdbc.connections.internal.UserSuppliedConnectionProviderImpl - provides no connection support; indicates the user will supply connections to Hibernate directly. Not recommended for use.

7.5.4. org.hibernate.service.jdbc.dialect.spi.DialectFactory

Notes: Contract for Hibernate to obtain the org.hibernate.dialect.Dialect instance to use. This is either explicitly defined by the hibernate.dialect property or determined by the Section 7.5.5, “org.hibernate.service.jdbc.dialect.spi.DialectResolver” service, which is a delegate to this service.
Initiator: org.hibernate.service.jdbc.dialect.internal.DialectFactoryInitiator
Implementations: org.hibernate.service.jdbc.dialect.internal.DialectFactoryImpl

7.5.5. org.hibernate.service.jdbc.dialect.spi.DialectResolver

Notes: Provides resolution of the org.hibernate.dialect.Dialect to use based on information extracted from JDBC metadata. The standard resolver implementation acts as a chain, delegating to a series of individual resolvers. The standard Hibernate resolution behavior is contained in org.hibernate.service.jdbc.dialect.internal.StandardDialectResolver. org.hibernate.service.jdbc.dialect.internal.DialectResolverInitiator also consults the hibernate.dialect_resolvers setting for any custom resolvers.
Initiator: org.hibernate.service.jdbc.dialect.internal.DialectResolverInitiator
Implementations: org.hibernate.service.jdbc.dialect.internal.DialectResolverSet

7.5.6. org.hibernate.engine.jdbc.spi.JdbcServices

Notes: Special type of service that aggregates together a number of other services and provides a higher-level set of functionality.
Initiator: org.hibernate.engine.jdbc.internal.JdbcServicesInitiator
Implementations: org.hibernate.engine.jdbc.internal.JdbcServicesImpl

7.5.7. org.hibernate.service.jmx.spi.JmxService

Notes: Provides simplified access to JMX related features needed by Hibernate.
Initiator: org.hibernate.service.jmx.internal.JmxServiceInitiator
Implementations:
• org.hibernate.service.jmx.internal.DisabledJmxServiceImpl - a no-op implementation used when JMX functionality is disabled.
• org.hibernate.service.jmx.internal.JmxServiceImpl - standard implementation of JMX handling.

7.5.8. org.hibernate.service.jndi.spi.JndiService

Notes: Provides simplified access to JNDI related features needed by Hibernate.
Initiator: org.hibernate.service.jndi.internal.JndiServiceInitiator
Implementations: org.hibernate.service.jndi.internal.JndiServiceImpl

7.5.9. org.hibernate.service.jta.platform.spi.JtaPlatform

Notes: Provides an abstraction from the underlying JTA platform when JTA features are used.
Initiator: org.hibernate.service.jta.platform.internal.JtaPlatformInitiator

Important
JtaPlatformInitiator provides mapping against the legacy, now-deprecated org.hibernate.transaction.TransactionManagerLookup names internally for the Hibernate-provided org.hibernate.transaction.TransactionManagerLookup implementations.

Implementations:
• org.hibernate.service.jta.platform.internal.BitronixJtaPlatform - Integration with the Bitronix stand-alone transaction manager.
• org.hibernate.service.jta.platform.internal.BorlandEnterpriseServerJtaPlatform - Integration with the transaction manager as deployed within a Borland Enterprise Server.
• org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform - Integration with the transaction manager as deployed within a JBoss Application Server.
• org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform - Integration with the JBoss Transactions stand-alone transaction manager.
• org.hibernate.service.jta.platform.internal.JOTMJtaPlatform - Integration with the JOTM stand-alone transaction manager.
• org.hibernate.service.jta.platform.internal.JOnASJtaPlatform - Integration with the JOnAS transaction manager.
• org.hibernate.service.jta.platform.internal.JRun4JtaPlatform - Integration with the transaction manager as deployed in a JRun 4 application server.
• org.hibernate.service.jta.platform.internal.NoJtaPlatform - No-op version when no JTA set-up is configured.
• org.hibernate.service.jta.platform.internal.OC4JJtaPlatform - Integration with the transaction manager as deployed in an OC4J (Oracle) application server.
• org.hibernate.service.jta.platform.internal.OrionJtaPlatform - Integration with the transaction manager as deployed in an Orion application server.
• org.hibernate.service.jta.platform.internal.ResinJtaPlatform - Integration with the transaction manager as deployed in a Resin application server.
• org.hibernate.service.jta.platform.internal.SunOneJtaPlatform - Integration with the transaction manager as deployed in a Sun ONE (7 and above) application server.
• org.hibernate.service.jta.platform.internal.TransactionManagerLookupBridge - Provides a bridge to legacy (and deprecated) org.hibernate.transaction.TransactionManagerLookup implementations.
• org.hibernate.service.jta.platform.internal.WebSphereExtendedJtaPlatform - Integration with the transaction manager as deployed in a WebSphere Application Server (6 and above).
• org.hibernate.service.jta.platform.internal.WebSphereJtaPlatform - Integration with the transaction manager as deployed in a WebSphere Application Server (4, 5.0 and 5.1).
• org.hibernate.service.jta.platform.internal.WeblogicJtaPlatform - Integration with the transaction manager as deployed in a WebLogic application server.

7.5.10. org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider

Notes: A variation of Section 7.5.3, “org.hibernate.service.jdbc.connections.spi.ConnectionProvider” providing access to JDBC connections in multi-tenant environments.
Initiator: N/A
Implementations: It is intended that users provide an appropriate implementation if needed.

7.5.11. org.hibernate.persister.spi.PersisterClassResolver

Notes: Contract for determining the appropriate org.hibernate.persister.entity.EntityPersister or org.hibernate.persister.collection.CollectionPersister implementation class to use given an entity or collection mapping.
Initiator: org.hibernate.persister.internal.PersisterClassResolverInitiator
Implementations: org.hibernate.persister.internal.StandardPersisterClassResolver

7.5.12. org.hibernate.persister.spi.PersisterFactory

Notes: Factory for creating org.hibernate.persister.entity.EntityPersister and org.hibernate.persister.collection.CollectionPersister instances.
Initiator: org.hibernate.persister.internal.PersisterFactoryInitiator
Implementations: org.hibernate.persister.internal.PersisterFactoryImpl

7.5.13. org.hibernate.cache.spi.RegionFactory

Notes: Integration point for Hibernate's second-level cache support.
Initiator: org.hibernate.cache.internal.RegionFactoryInitiator
Implementations:
• org.hibernate.cache.ehcache.EhCacheRegionFactory
• org.hibernate.cache.infinispan.InfinispanRegionFactory
• org.hibernate.cache.infinispan.JndiInfinispanRegionFactory
• org.hibernate.cache.internal.NoCachingRegionFactory
• org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory

7.5.14. org.hibernate.service.spi.SessionFactoryServiceRegistryFactory

Notes: Factory for creating org.hibernate.service.spi.SessionFactoryServiceRegistry instances, which act as a specialized org.hibernate.service.ServiceRegistry for org.hibernate.SessionFactory scoped services. See Section 7.7.2, “SessionFactory registry” for more details.
Initiator: org.hibernate.service.internal.SessionFactoryServiceRegistryFactoryInitiator
Implementations: org.hibernate.service.internal.SessionFactoryServiceRegistryFactoryImpl

7.5.15. org.hibernate.stat.Statistics

Notes: Contract for exposing collected statistics. The statistics are collected through the org.hibernate.stat.spi.StatisticsImplementor contract.
Initiator: org.hibernate.stat.internal.StatisticsInitiator. Defines a hibernate.stats.factory setting to allow configuring the org.hibernate.stat.spi.StatisticsFactory to use internally when building the actual org.hibernate.stat.Statistics instance.
Implementations: org.hibernate.stat.internal.ConcurrentStatisticsImpl. The default org.hibernate.stat.spi.StatisticsFactory implementation builds a org.hibernate.stat.internal.ConcurrentStatisticsImpl instance.

7.5.16. org.hibernate.engine.transaction.spi.TransactionFactory

Notes: Strategy defining how Hibernate's org.hibernate.Transaction API maps to the underlying transaction approach.
Initiator: org.hibernate.engine.transaction.internal.TransactionFactoryInitiator
Implementations:
• org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory - A JTA-based strategy in which Hibernate is not controlling the transactions. An important distinction here is that interaction with the underlying JTA implementation is done through the javax.transaction.TransactionManager.
• org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory - A non-JTA strategy in which the transactions are managed using the JDBC java.sql.Connection.
• org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory - A JTA-based strategy in which Hibernate *may* be controlling the transactions. An important distinction here is that interaction with the underlying JTA implementation is done through the javax.transaction.UserTransaction.

7.5.17. org.hibernate.tool.hbm2ddl.ImportSqlCommandExtractor

Notes: Contract for extracting statements from import.sql scripts.
Initiator: org.hibernate.tool.hbm2ddl.ImportSqlCommandExtractorInitiator
Implementations:
• org.hibernate.tool.hbm2ddl.SingleLineSqlCommandExtractor - treats each line as a complete SQL statement. Comment lines shall start with the --, //, or /* character sequences.
• org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor - supports instructions/comments and quoted strings spread over multiple lines. Each statement must end with a semicolon.

7.6. Custom services

Once a org.hibernate.service.ServiceRegistry is built it is considered immutable; the services themselves might accept re-configuration, but immutability here means adding/replacing services.
So another role provided by the org.hibernate.service.ServiceRegistryBuilder is to allow tweaking of the services that will be contained in the org.hibernate.service.ServiceRegistry generated from it.

There are two ways to tell a org.hibernate.service.ServiceRegistryBuilder about custom services:
• Implement a org.hibernate.service.spi.BasicServiceInitiator class to control on-demand construction of the service class and add it to the org.hibernate.service.ServiceRegistryBuilder via its addInitiator method.
• Just instantiate the service class and add it to the org.hibernate.service.ServiceRegistryBuilder via its addService method.

Both approaches, adding a service and adding an initiator, are valid for extending a registry (adding new service roles) and for overriding services (replacing service implementations).

7.7. Special service registries

7.7.1. Boot-strap registry

The boot-strap registry holds services that absolutely have to be available for most things to work. The main service here is the Section 7.7.1.1.1, “org.hibernate.service.classloading.spi.ClassLoaderService”, which is a perfect example: even resolving configuration files needs access to class loading services (resource look-ups). This is the root registry (no parent) in normal use.

Instances of boot-strap registries are built using the org.hibernate.service.BootstrapServiceRegistryBuilder class.

Example 7.1. Using BootstrapServiceRegistryBuilder

BootstrapServiceRegistry bootstrapServiceRegistry = new BootstrapServiceRegistryBuilder()
    // pass in org.hibernate.integrator.spi.Integrator instances which are not
    // auto-discovered (for whatever reason) but which should be included
    .with( anExplicitIntegrator )
    // pass in a class-loader Hibernate should use to load application classes
    .withApplicationClassLoader( anExplicitClassLoaderForApplicationClasses )
    // pass in a class-loader Hibernate should use to load resources
    .withResourceClassLoader( anExplicitClassLoaderForResources )
    // see BootstrapServiceRegistryBuilder for the rest of the available methods
    ...
    // finally, build the bootstrap registry with all the above options
    .build();

7.7.1.1. Bootstrap registry services

7.7.1.1.1. org.hibernate.service.classloading.spi.ClassLoaderService

Hibernate needs to interact with ClassLoaders. However, the manner in which Hibernate (or any library) should interact with ClassLoaders varies based on the runtime environment which is hosting the application. Application servers, OSGi containers, and other modular class loading systems impose very specific class-loading requirements. This service provides Hibernate with an abstraction from this environmental complexity. And just as importantly, it does so in a single-swappable-component manner.

In terms of interacting with a ClassLoader, Hibernate needs the following capabilities:
• the ability to locate application classes
• the ability to locate integration classes
• the ability to locate resources (properties files, xml files, etc.)
• the ability to load java.util.ServiceLoader

Note
Currently, the ability to load application classes and the ability to load integration classes are combined into a single "load class" capability on the service. That may change in a later release.

7.7.1.1.2. org.hibernate.integrator.spi.IntegratorService

Applications, add-ons and other modules all need to integrate with Hibernate, which used to require something, usually the application, to coordinate registering the pieces of each integration needed on behalf of each integrator.
The intent of this service is to allow those integrators to be discovered and to have them integrate themselves with Hibernate. This service focuses on the discovery aspect. It leverages the standard Java java.util.ServiceLoader capability provided by the org.hibernate.service.classloading.spi.ClassLoaderService in order to discover implementations of the org.hibernate.integrator.spi.Integrator contract. Integrators would simply define a file named /META-INF/services/org.hibernate.integrator.spi.Integrator and make it available on the classpath. java.util.ServiceLoader covers the format of this file in detail, but essentially it lists, one per line, the fully-qualified names of classes which implement the org.hibernate.integrator.spi.Integrator contract. See Section 7.9, “Integrators”.

7.7.2. SessionFactory registry

While it is best practice to treat instances of all the registry types as targeting a given org.hibernate.SessionFactory, the instances of services in this group explicitly belong to a single org.hibernate.SessionFactory. The difference is a matter of timing in when they need to be initiated: generally they need access to the org.hibernate.SessionFactory to be initiated. This special registry is org.hibernate.service.spi.SessionFactoryServiceRegistry.

7.7.2.1. org.hibernate.event.service.spi.EventListenerRegistry

Notes: Service for managing event listeners.
Initiator: org.hibernate.event.service.internal.EventListenerServiceInitiator
Implementations: org.hibernate.event.service.internal.EventListenerRegistryImpl

7.8. Using services and registries

Coming soon...

7.9. Integrators

The org.hibernate.integrator.spi.Integrator contract is intended to provide a simple means for allowing developers to hook into the process of building a functioning SessionFactory. The org.hibernate.integrator.spi.Integrator interface defines two methods of interest: integrate allows us to hook into the building process; disintegrate allows us to hook into a SessionFactory shutting down.

Note
There is a third method defined on org.hibernate.integrator.spi.Integrator, an overloaded form of integrate accepting a org.hibernate.metamodel.source.MetadataImplementor instead of org.hibernate.cfg.Configuration. This form is intended for use with the new metamodel code scheduled for completion in 5.0.

See Section 7.7.1.1.2, “org.hibernate.integrator.spi.IntegratorService”. In addition to the discovery approach provided by the IntegratorService, applications can manually register Integrator implementations when building the BootstrapServiceRegistry. See Example 7.1, “Using BootstrapServiceRegistryBuilder”.

7.9.1. Integrator use-cases

The main use cases for an org.hibernate.integrator.spi.Integrator right now are registering event listeners and providing services (see org.hibernate.integrator.spi.ServiceContributingIntegrator). With 5.0 we plan on expanding that to allow altering the metamodel describing the mapping between object and relational models.
Example 7.2. Registering event listeners

public class MyIntegrator implements org.hibernate.integrator.spi.Integrator {

    public void integrate(
            Configuration configuration,
            SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
        // As you might expect, an EventListenerRegistry is the thing with which event listeners are
        // registered. It is a service, so we look it up using the service registry
        final EventListenerRegistry eventListenerRegistry = serviceRegistry.getService( EventListenerRegistry.class );

        // If you wish to have custom determination and handling of "duplicate" listeners, you would have to add an
        // implementation of the org.hibernate.event.service.spi.DuplicationStrategy contract like this
        eventListenerRegistry.addDuplicationStrategy( myDuplicationStrategy );

        // EventListenerRegistry defines 3 ways to register listeners:

        // 1) This form overrides any existing registrations with the specified listener(s)
        eventListenerRegistry.setListeners( EventType.AUTO_FLUSH, myCompleteSetOfListeners );

        // 2) This form adds the specified listener(s) to the beginning of the listener chain
        eventListenerRegistry.prependListeners( EventType.AUTO_FLUSH, myListenersToBeCalledFirst );

        // 3) This form adds the specified listener(s) to the end of the listener chain
        eventListenerRegistry.appendListeners( EventType.AUTO_FLUSH, myListenersToBeCalledLast );
    }
}

Chapter 8. Data categorizations

Table of Contents
8.1. Value types
8.1.1. Basic types
8.1.2. Composite types
8.1.3. Collection types
8.2. Entity Types
8.3. Implications of different data categorizations

Hibernate understands both the Java and JDBC representations of application data. The ability to read and write object data to a database is called marshalling, and is the function of a Hibernate type. A type is an implementation of the org.hibernate.type.Type interface. A Hibernate type describes various aspects of behavior of the Java type, such as how to check for equality and how to clone values.

Usage of the word type
A Hibernate type is neither a Java type nor a SQL datatype. It provides information about both of these. When you encounter the term type in regards to Hibernate, it may refer to the Java type, the JDBC type, or the Hibernate type, depending on context.

Hibernate categorizes types into two high-level groups: Section 8.1, “Value types” and Section 8.2, “Entity Types”.

8.1. Value types

A value type does not define its own lifecycle. It is, in effect, owned by an entity (Section 8.2, “Entity Types”), which defines its lifecycle.

Value types are further classified into three subcategories:
• Section 8.1.1, “Basic types”
• Section 8.1.2, “Composite types”
• Section 8.1.3, “Collection types”

8.1.1. Basic types

Basic value types usually map a single database value, or column, to a single, non-aggregated Java type. Hibernate provides a number of built-in basic types, which follow the natural mappings recommended in the JDBC specifications. You can override these mappings and provide and use alternative mappings. These topics are discussed further on.
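Where the built-in mapping is not what you want, a property-level override can name one of the registry keys from Table 8.1 below. This is a minimal sketch (not from the original text; the Account entity is hypothetical) using the yes_no key:

@Entity
public class Account {
    @Id @GeneratedValue
    private Long id;

    // store this boolean in a CHAR column as 'Y'/'N' instead of the default BIT mapping
    @org.hibernate.annotations.Type( type = "yes_no" )
    private boolean active;
    ...
}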
Table 8.1. Basic Type Mappings

Hibernate type | Database type | JDBC type | Type registry
org.hibernate.type.StringType | string | VARCHAR | string, java.lang.String
org.hibernate.type.MaterializedClobType | string | CLOB | materialized_clob
org.hibernate.type.TextType | string | LONGVARCHAR | text
org.hibernate.type.CharacterType | char, java.lang.Character | CHAR | char, java.lang.Character
org.hibernate.type.BooleanType | boolean, java.lang.Boolean | BIT | boolean, java.lang.Boolean
org.hibernate.type.NumericBooleanType | boolean | INTEGER, 0 is false, 1 is true | numeric_boolean
org.hibernate.type.YesNoType | boolean | CHAR, 'N'/'n' is false, 'Y'/'y' is true. The uppercase value is written to the database. | yes_no
org.hibernate.type.TrueFalseType | boolean | CHAR, 'F'/'f' is false, 'T'/'t' is true. The uppercase value is written to the database. | true_false
org.hibernate.type.ByteType | byte, java.lang.Byte | TINYINT | byte, java.lang.Byte
org.hibernate.type.ShortType | short, java.lang.Short | SMALLINT | short, java.lang.Short
org.hibernate.type.IntegerType | int, java.lang.Integer | INTEGER | int, java.lang.Integer
org.hibernate.type.LongType | long, java.lang.Long | BIGINT | long, java.lang.Long
org.hibernate.type.FloatType | float, java.lang.Float | FLOAT | float, java.lang.Float
org.hibernate.type.DoubleType | double, java.lang.Double | DOUBLE | double, java.lang.Double
org.hibernate.type.BigIntegerType | java.math.BigInteger | NUMERIC | big_integer
org.hibernate.type.BigDecimalType | java.math.BigDecimal | NUMERIC | big_decimal, java.math.BigDecimal
org.hibernate.type.TimestampType | java.sql.Timestamp | TIMESTAMP | timestamp, java.sql.Timestamp
org.hibernate.type.TimeType | java.sql.Time | TIME | time, java.sql.Time
org.hibernate.type.DateType | java.sql.Date | DATE | date, java.sql.Date
org.hibernate.type.CalendarType | java.util.Calendar | TIMESTAMP | calendar, java.util.Calendar
org.hibernate.type.CalendarDateType | java.util.Calendar | DATE | calendar_date
org.hibernate.type.CurrencyType | java.util.Currency | VARCHAR | currency, java.util.Currency
org.hibernate.type.LocaleType | java.util.Locale | VARCHAR | locale, java.util.Locale
org.hibernate.type.TimeZoneType | java.util.TimeZone | VARCHAR, using the TimeZone ID | timezone, java.util.TimeZone
org.hibernate.type.UrlType | java.net.URL | VARCHAR | url, java.net.URL
org.hibernate.type.ClassType | java.lang.Class | VARCHAR, using the class name | class, java.lang.Class
org.hibernate.type.BlobType | java.sql.Blob | BLOB | blob, java.sql.Blob
org.hibernate.type.ClobType | java.sql.Clob | CLOB | clob, java.sql.Clob
org.hibernate.type.BinaryType | primitive byte[] | VARBINARY | binary, byte[]
org.hibernate.type.MaterializedBlobType | primitive byte[] | BLOB | materialized_blob
org.hibernate.type.ImageType | primitive byte[] | LONGVARBINARY | image
org.hibernate.type.BinaryType | java.lang.Byte[] | VARBINARY | wrapper-binary
org.hibernate.type.CharArrayType | char[] | VARCHAR | characters, char[]
org.hibernate.type.CharacterArrayType | java.lang.Character[] | VARCHAR | wrapper-characters, Character[], java.lang.Character[]
org.hibernate.type.UUIDBinaryType | java.util.UUID | BINARY | uuid-binary, java.util.UUID
org.hibernate.type.UUIDCharType | java.util.UUID | CHAR, can also read VARCHAR | uuid-char
org.hibernate.type.PostgresUUIDType | java.util.UUID | PostgreSQL UUID, through Types#OTHER, which complies to the PostgreSQL JDBC driver definition | pg-uuid
org.hibernate.type.SerializableType | implementors of java.io.Serializable | VARBINARY | Unlike the other value types, multiple instances of this type are registered. It is registered once under java.io.Serializable, and registered under the specific java.io.Serializable implementation class names.
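The names in the Type registry column can be used to select a specific mapping for an attribute. As a minimal sketch (the Account entity and its active attribute are hypothetical), a boolean could be stored as 'Y'/'N' by choosing the yes_no type through the Hibernate-specific org.hibernate.annotations.Type annotation:

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Account {
    @Id
    private Long id;

    // use the yes_no basic type from the registry instead of the default boolean mapping
    @org.hibernate.annotations.Type( type = "yes_no" )
    private boolean active;

    // constructors, getters and setters omitted
}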
8.1.2. Composite types

Composite types, or embedded types, as they are called by the Java Persistence API, have traditionally been called components in Hibernate. All of these terms mean the same thing. Components represent aggregations of values into a single Java type. An example is an Address class, which aggregates street, city, state, and postal code. A composite type behaves in a similar way to an entity. They are each classes written specifically for an application. They may both include references to other application-specific classes, as well as to collections and simple JDK types. The only distinguishing factors are that a component does not have its own lifecycle or define an identifier.

8.1.3. Collection types

A collection type refers to the data type itself, not its contents. A Collection denotes a one-to-one or one-to-many relationship between tables of a database. Refer to the chapter on Collections for more information on collections.

8.2. Entity Types

Entities are application-specific classes which correlate to rows in a table, using a unique identifier. Because of the requirement for a unique identifier, entities exist independently and define their own lifecycle. As an example, deleting a Membership should not delete the User or the Group. For more information, see the chapter on Persistent Classes.

8.3. Implications of different data categorizations

NEEDS TO BE WRITTEN

Chapter 9. Mapping entities

Table of Contents 9.1. Hierarchies

9.1. Hierarchies

Chapter 10. Mapping associations

The most basic form of mapping in Hibernate is mapping a persistent entity class to a database table. You can expand on this concept by mapping associated classes together. ??? shows a Person class with a

Chapter 11. HQL and JPQL

Table of Contents 11.1. Case Sensitivity 11.2. Statement types 11.2.1. Select statements 11.2.2. Update statements 11.2.3. Delete statements 11.2.4. Insert statements 11.3. The FROM clause 11.3.1. Identification variables 11.3.2. Root entity references 11.3.3. Explicit joins 11.3.4. Implicit joins (path expressions) 11.3.5. Collection member references 11.3.6. Polymorphism 11.4. Expressions 11.4.1. Identification variable 11.4.2. Path expressions 11.4.3. Literals 11.4.4. Parameters 11.4.5. Arithmetic 11.4.6. Concatenation (operation) 11.4.7. Aggregate functions 11.4.8. Scalar functions 11.4.9. Collection-related expressions 11.4.10. Entity type 11.4.11. CASE expressions 11.5. The SELECT clause 11.6. Predicates 11.6.1. Relational comparisons 11.6.2. Nullness predicate 11.6.3. Like predicate 11.6.4. Between predicate 11.6.5. In predicate 11.6.6. Exists predicate 11.6.7. Empty collection predicate 11.6.8. Member-of collection predicate 11.6.9. NOT predicate operator 11.6.10. AND predicate operator 11.6.11. OR predicate operator 11.7. The WHERE clause 11.8. Grouping 11.9. Ordering 11.10. Query API

The Hibernate Query Language (HQL) and Java Persistence Query Language (JPQL) are both object model focused query languages similar in nature to SQL. JPQL is a heavily-inspired-by subset of HQL. A JPQL query is always a valid HQL query; the reverse is not true, however. Both HQL and JPQL are non-type-safe ways to perform query operations. Criteria queries offer a type-safe approach to querying. See ??? for more information.

11.1. Case Sensitivity

With the exception of names of Java classes and properties, queries are case-insensitive.
So SeLeCT is the same as sELEct is the same as SELECT, but org.hibernate.eg.FOO and org.hibernate.eg.Foo are different, as are foo.barSet and foo.BARSET.

Note This documentation uses lowercase keywords as convention in examples.

11.2. Statement types

Both HQL and JPQL allow SELECT, UPDATE and DELETE statements to be performed. HQL additionally allows INSERT statements, in a form similar to a SQL INSERT-SELECT.

Important Care should be taken as to when an UPDATE or DELETE statement is executed. Caution should be used when executing bulk update or delete operations because they may result in inconsistencies between the database and the entities in the active persistence context. In general, bulk update and delete operations should only be performed within a transaction in a new persistence context or before fetching or accessing entities whose state might be affected by such operations. --Section 4.10 of the JPA 2.0 Specification

11.2.1. Select statements

The BNF for SELECT statements in HQL is:

select_statement ::= [select_clause] from_clause [where_clause] [groupby_clause] [having_clause] [orderby_clause]

The simplest possible HQL SELECT statement is of the form:

from com.acme.Cat

The select statement in JPQL is exactly the same as for HQL except that JPQL requires a select_clause, whereas HQL does not. Even though HQL does not require the presence of a select_clause, it is generally good practice to include one. For simple queries the intent is clear and so the intended result of the select_clause is easy to infer. But on more complex queries that is not always the case. It is usually better to explicitly specify intent. Hibernate does not actually enforce that a select_clause be present even when parsing JPQL queries, however applications interested in JPA portability should take heed of this.

11.2.2. Update statements

The BNF for UPDATE statements is the same in HQL and JPQL:

update_statement ::= update_clause [where_clause]
update_clause ::= UPDATE entity_name [[AS] identification_variable] SET update_item {, update_item}*
update_item ::= [identification_variable.]{state_field | single_valued_object_field} = new_value
new_value ::= scalar_expression | simple_entity_expression | NULL

UPDATE statements, by default, do not affect the version or the timestamp attribute values for the affected entities. However, you can force Hibernate to set the version or timestamp attribute values through the use of a versioned update. This is achieved by adding the VERSIONED keyword after the UPDATE keyword. Note, however, that this is a Hibernate-specific feature and will not work in a portable manner. Custom version types, org.hibernate.usertype.UserVersionType, are not allowed in conjunction with an update versioned statement.

An UPDATE statement is executed using the executeUpdate method of either org.hibernate.Query or javax.persistence.Query. The method is named for those familiar with the JDBC executeUpdate on java.sql.PreparedStatement. The int value returned by the executeUpdate() method indicates the number of entities affected by the operation. This may or may not correlate to the number of rows affected in the database. An HQL bulk operation might result in multiple actual SQL statements being executed (for joined-subclass, for example). The returned number indicates the number of actual entities affected by the statement.
Using a JOINED inheritance hierarchy, a delete against one of the subclasses may actually result in deletes against not just the table to which that subclass is mapped, but also the "root" table and tables "in between".

Example 11.1. Example UPDATE query statements

String hqlUpdate = "update Customer c " +
        "set c.name = :newName " +
        "where c.name = :oldName";
int updatedEntities = session.createQuery( hqlUpdate )
        .setString( "newName", newName )
        .setString( "oldName", oldName )
        .executeUpdate();

String jpqlUpdate = "update Customer c " +
        "set c.name = :newName " +
        "where c.name = :oldName";
int updatedEntities = entityManager.createQuery( jpqlUpdate )
        .setParameter( "newName", newName )
        .setParameter( "oldName", oldName )
        .executeUpdate();

String hqlVersionedUpdate = "update versioned Customer c " +
        "set c.name = :newName " +
        "where c.name = :oldName";
int updatedEntities = s.createQuery( hqlVersionedUpdate )
        .setString( "newName", newName )
        .setString( "oldName", oldName )
        .executeUpdate();

Important Neither UPDATE nor DELETE statements are allowed to result in what is called an implicit join. Their form already disallows explicit joins.

11.2.3. Delete statements

The BNF for DELETE statements is the same in HQL and JPQL:

delete_statement ::= delete_clause [where_clause]
delete_clause ::= DELETE FROM entity_name [[AS] identification_variable]

A DELETE statement is also executed using the executeUpdate method of either org.hibernate.Query or javax.persistence.Query.

11.2.4. Insert statements

HQL adds the ability to define INSERT statements as well. There is no JPQL equivalent to this. The BNF for an HQL INSERT statement is:

insert_statement ::= insert_clause select_statement
insert_clause ::= INSERT INTO entity_name (attribute_list)
attribute_list ::= state_field[, state_field ]*

The attribute_list is analogous to the column specification in the SQL INSERT statement. For entities involved in mapped inheritance, only attributes directly defined on the named entity can be used in the attribute_list. Superclass properties are not allowed and subclass properties do not make sense. In other words, INSERT statements are inherently non-polymorphic.

The select_statement can be any valid HQL select query, with the caveat that the return types must match the types expected by the insert. Currently, this is checked during query compilation rather than relegating the check to the database. This may cause problems between Hibernate Types which are equivalent as opposed to equal. For example, this might lead to issues with mismatches between an attribute mapped as an org.hibernate.type.DateType and an attribute defined as an org.hibernate.type.TimestampType, even though the database might not make a distinction or might be able to handle the conversion.

For the id attribute, the insert statement gives you two options. You can either explicitly specify the id property in the attribute_list, in which case its value is taken from the corresponding select expression, or omit it from the attribute_list in which case a generated value is used. This latter option is only available when using id generators that operate "in the database"; attempting to use this option with any "in memory" type generators will cause an exception during parsing.

For optimistic locking attributes, the insert statement again gives you two options. You can either specify the attribute in the attribute_list, in which case its value is taken from the corresponding select expressions, or omit it from the attribute_list, in which case the seed value defined by the corresponding org.hibernate.type.VersionType is used.
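For the identifier options just described, the two forms might be sketched like this (DelinquentAccount as used in Example 11.2 below; the second form assumes DelinquentAccount maps its id with an in-database generator such as a sequence or identity column):

// id listed explicitly: its value is taken from the select expression
insert into DelinquentAccount (id, name) select c.id, c.name from Customer c where ...
// id omitted: a value is generated in the database
insert into DelinquentAccount (name) select c.name from Customer c where ...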
Example 11.2. Example INSERT query statements

String hqlInsert = "insert into DelinquentAccount (id, name) select c.id, c.name from Customer c where ...";
int createdEntities = s.createQuery( hqlInsert ).executeUpdate();

11.3. The FROM clause

The FROM clause is responsible for defining the scope of object model types available to the rest of the query. It also is responsible for defining all the "identification variables" available to the rest of the query.

11.3.1. Identification variables

Identification variables are often referred to as aliases. References to object model classes in the FROM clause can be associated with an identification variable that can then be used to refer to that type throughout the rest of the query. In most cases declaring an identification variable is optional, though it is usually good practice to declare them. An identification variable must follow the rules for Java identifier validity. According to JPQL, identification variables must be treated as case insensitive. Good practice says you should use the same case throughout a query to refer to a given identification variable. In other words, JPQL says they can be case insensitive and so Hibernate must be able to treat them as such, but this does not make it good practice.

11.3.2. Root entity references

A root entity reference, or what JPA calls a range variable declaration, is specifically a reference to a mapped entity type from the application. It cannot name component/embeddable types. And associations, including collections, are handled in a different manner discussed later. The BNF for a root entity reference is:

root_entity_reference ::= entity_name [AS] identification_variable

Example 11.3. Simple query example

select c from com.acme.Cat c

We see that the query is defining a root entity reference to the com.acme.Cat object model type. Additionally, it declares an alias of c to that com.acme.Cat reference; this is the identification variable. Usually the root entity reference just names the entity name rather than the entity class FQN. By default the entity name is the unqualified entity class name, here Cat.

Example 11.4. Simple query using entity name for root entity reference

select c from Cat c

Multiple root entity references can also be specified. Even naming the same entity!

Example 11.5. Simple query using multiple root entity references

// build a product between customers and active mailing campaigns so we can spam!
select distinct cust, camp from Customer cust, Campaign camp where camp.type = 'mail' and current_timestamp() between camp.activeRange.start and camp.activeRange.end
// retrieve all customers with headquarters in the same state as Acme's headquarters
select distinct c1 from Customer c1, Customer c2 where c1.address.state = c2.address.state and c2.name = 'Acme'

11.3.3. Explicit joins

The FROM clause can also contain explicit relationship joins using the join keyword. These joins can be either inner or left outer style joins.

Example 11.6. Explicit inner join examples

select c from Customer c join c.chiefExecutive ceo where ceo.age < 25
// same query but specifying join type as 'inner' explicitly
select c from Customer c inner join c.chiefExecutive ceo where ceo.age < 25

Example 11.7.
Explicit left (outer) join examples

// get customers who have orders worth more than $5000
// or who are in "preferred" status
select distinct c from Customer c left join c.orders o where o.value > 5000.00 or c.status = 'preferred'
// functionally the same query but using the
// 'left outer' phrase
select distinct c from Customer c left outer join c.orders o where o.value > 5000.00 or c.status = 'preferred'

An important use case for explicit joins is to define FETCH JOINS which override the laziness of the joined association. As an example, given an entity named Customer with a collection-valued association named orders:

Example 11.8. Fetch join example

select c from Customer c left join fetch c.orders o

As you can see from the example, a fetch join is specified by injecting the keyword fetch after the keyword join. In the example, we used a left outer join because we want to return customers who have no orders also. Inner joins can also be fetched. But inner joins still filter. In the example, using an inner join instead would have resulted in customers without any orders being filtered out of the result.

Important Fetch joins are not valid in sub-queries. Care should be taken when fetch joining a collection-valued association which is in any way further restricted; the fetched collection will be restricted too! For this reason it is usually considered best practice to not assign an identification variable to fetched joins except for the purpose of specifying nested fetch joins. Fetch joins should not be used in paged queries (aka, setFirstResult/setMaxResults). Nor should they be used with the HQL scroll or iterate features.

HQL also defines a WITH clause to qualify the join conditions. Again, this is specific to HQL; JPQL does not define this feature.

Example 11.9. with-clause join example

select distinct c from Customer c left join c.orders o with o.value > 5000.00

The important distinction is that the conditions of the with clause are made part of the on clause in the generated SQL, as opposed to the other queries in this section, where the HQL/JPQL conditions are made part of the where clause in the generated SQL. The distinction in this specific example is probably not that significant. The with clause is sometimes necessary in more complicated queries.

Explicit joins may reference association or component/embedded attributes. For further information about collection-valued association references, see Section 11.3.5, "Collection member references". In the case of component/embedded attributes, the join is simply logical and does not correlate to a physical (SQL) join.

11.3.4. Implicit joins (path expressions)

Another means of adding to the scope of object model types available to the query is through the use of implicit joins, or path expressions.

Example 11.10. Simple implicit join example

select c from Customer c where c.chiefExecutive.age < 25
// same as
select c from Customer c inner join c.chiefExecutive ceo where ceo.age < 25

An implicit join always starts from an identification variable, followed by the navigation operator (.), followed by an attribute for the object model type referenced by the initial identification variable. In the example, the initial identification variable is c which refers to the Customer entity. The c.chiefExecutive reference then refers to the chiefExecutive attribute of the Customer entity. chiefExecutive is an association type so we further navigate to its age attribute.
Important If the attribute represents an entity association (non-collection) or a component/embedded, that reference can be further navigated. Basic values and collection-valued associations cannot be further navigated.

As shown in the example, implicit joins can appear outside the FROM clause. However, they affect the FROM clause. Implicit joins are always treated as inner joins. Multiple references to the same implicit join always refer to the same logical and physical (SQL) join.

Example 11.11. Reused implicit join

select c from Customer c where c.chiefExecutive.age < 25 and c.chiefExecutive.address.state = 'TX'
// same as
select c from Customer c inner join c.chiefExecutive ceo where ceo.age < 25 and ceo.address.state = 'TX'
// same as
select c from Customer c inner join c.chiefExecutive ceo inner join ceo.address a where ceo.age < 25 and a.state = 'TX'

Just as with explicit joins, implicit joins may reference association or component/embedded attributes. For further information about collection-valued association references, see Section 11.3.5, "Collection member references". In the case of component/embedded attributes, the join is simply logical and does not correlate to a physical (SQL) join. Unlike explicit joins, however, implicit joins may also reference basic state fields as long as the path expression ends there.

11.3.5. Collection member references

References to collection-valued associations actually refer to the values of that collection.

Example 11.12. Collection references example

select c from Customer c join c.orders o join o.lineItems l join l.product p where o.status = 'pending' and p.status = 'backorder'
// alternate syntax
select c from Customer c, in(c.orders) o, in(o.lineItems) l join l.product p where o.status = 'pending' and p.status = 'backorder'

In the example, the identification variable o actually refers to the object model type Order which is the type of the elements of the Customer#orders association. The example also shows the alternate syntax for specifying collection association joins using the IN syntax. Both forms are equivalent. Which form an application chooses to use is simply a matter of taste.

11.3.5.1. Special case - qualified path expressions

We said earlier that collection-valued associations actually refer to the values of that collection. Based on the type of collection, a set of explicit qualification expressions is also available.

Example 11.13. Qualified collection references example

// Product.images is a Map : key = a name, value = file path
// select all the image file paths (the map value) for Product#123
select i from Product p join p.images i where p.id = 123
// same as above
select value(i) from Product p join p.images i where p.id = 123
// select all the image names (the map key) for Product#123
select key(i) from Product p join p.images i where p.id = 123
// select all the image names and file paths (the 'Map.Entry') for Product#123
select entry(i) from Product p join p.images i where p.id = 123
// total the value of the initial line items for all orders for a customer
select sum( li.amount ) from Customer c join c.orders o join o.lineItems li where c.id = 123 and index(li) = 1

VALUE Refers to the collection value. Same as not specifying a qualifier. Useful to explicitly show intent. Valid for any type of collection-valued reference.
INDEX According to HQL rules, this is valid for both Maps and Lists which specify a javax.persistence.OrderColumn annotation, to refer to the Map key or the List position (aka the OrderColumn value). JPQL, however, reserves this for use in the List case and adds KEY for the Map case. Applications interested in JPA provider portability should be aware of this distinction.

KEY Valid only for Maps. Refers to the map's key. If the key is itself an entity, it can be further navigated.

ENTRY Valid only for Maps. Refers to the Map's logical java.util.Map.Entry tuple (the combination of its key and value). ENTRY is only valid as a terminal path and only valid in the select clause.

See Section 11.4.9, "Collection-related expressions" for additional details on collection-related expressions.

11.3.6. Polymorphism

HQL and JPQL queries are inherently polymorphic.

select p from Payment p

This query names the Payment entity explicitly. However, all subclasses of Payment are also available to the query. So if the CreditCardPayment entity and WireTransferPayment entity each extend from Payment, all three types would be available to the query. And the query would return instances of all three.

The logical extreme The HQL query from java.lang.Object is totally valid! It returns every object of every type defined in your application.

This can be altered by using the org.hibernate.annotations.Polymorphism annotation (global, and Hibernate-specific) or by limiting the types in the query itself using an entity type expression.

11.4. Expressions

Essentially expressions are references that resolve to basic or tuple values.

11.4.1. Identification variable

See Section 11.3, "The FROM clause".

11.4.2. Path expressions

Again, see Section 11.3, "The FROM clause".

11.4.3. Literals

String literals are enclosed in single-quotes. To escape a single-quote within a string literal, use double single-quotes.

Example 11.14. String literal examples

select c from Customer c where c.name = 'Acme'
select c from Customer c where c.name = 'Acme''s Pretzel Logic'

Numeric literals are allowed in a few different forms.

Example 11.15. Numeric literal examples

// simple integer literal
select o from Order o where o.referenceNumber = 123
// simple integer literal, typed as a long
select o from Order o where o.referenceNumber = 123L
// decimal notation
select o from Order o where o.total > 5000.00
// decimal notation, typed as a float
select o from Order o where o.total > 5000.00F
// scientific notation
select o from Order o where o.total > 5e+3
// scientific notation, typed as a float
select o from Order o where o.total > 5e+3F

In the scientific notation form, the E is case insensitive. Specific typing can be achieved through the use of the same suffix approach specified by Java. So, L denotes a long; D denotes a double; F denotes a float. The actual suffix is case insensitive. The boolean literals are TRUE and FALSE, again case-insensitive. Enums can even be referenced as literals. The fully-qualified enum class name must be used. HQL can also handle constants in the same manner, though JPQL does not define that as supported. Entity names can also be used as literals. See Section 11.4.10, "Entity type". Date/time literals can be specified using the JDBC escape syntax: {d 'yyyy-mm-dd'} for dates, {t 'hh:mm:ss'} for times and {ts 'yyyy-mm-dd hh:mm:ss[.millis]'} (millis optional) for timestamps. These literals only work if your JDBC driver supports them.
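For instance (Order is borrowed from earlier examples in this chapter; the deliveryDate and placedOn attributes are hypothetical), a date literal and a timestamp literal might be written as:

select o from Order o where o.deliveryDate = {d '2012-08-09'}
select o from Order o where o.placedOn > {ts '2012-01-01 00:00:00'}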
11.4.4. Parameters

HQL supports all 3 of the following forms. JPQL does not support the HQL-specific positional parameters notion. It is good practice to not mix forms in a given query.

11.4.4.1. Named parameters

Named parameters are declared using a colon followed by an identifier - :aNamedParameter. The same named parameter can appear multiple times in a query.

Example 11.16. Named parameter examples

String queryString = "select c " +
        "from Customer c " +
        "where c.name = :name " +
        " or c.nickName = :name";

// HQL
List customers = session.createQuery( queryString )
        .setParameter( "name", theNameOfInterest )
        .list();

// JPQL
List customers = entityManager.createQuery( queryString, Customer.class )
        .setParameter( "name", theNameOfInterest )
        .getResultList();

11.4.4.2. Positional (JPQL) parameters

JPQL-style positional parameters are declared using a question mark followed by an ordinal - ?1, ?2. The ordinals start with 1. Just like with named parameters, positional parameters can also appear multiple times in a query.

Example 11.17. Positional (JPQL) parameter examples

String queryString = "select c " +
        "from Customer c " +
        "where c.name = ?1 " +
        " or c.nickName = ?1";

// HQL - as you can see, handled just like named parameters
// in terms of API
List customers = session.createQuery( queryString )
        .setParameter( "1", theNameOfInterest )
        .list();

// JPQL
List customers = entityManager.createQuery( queryString, Customer.class )
        .setParameter( 1, theNameOfInterest )
        .getResultList();

11.4.4.3. Positional (HQL) parameters

HQL-style positional parameters follow JDBC positional parameter syntax. They are declared using ? without a following ordinal. There is no way to relate two such positional parameters as being "the same" aside from binding the same value to each. This form should be considered deprecated and may be removed in the near future.

11.4.5. Arithmetic

Arithmetic operations also represent valid expressions.

Example 11.18. Numeric arithmetic examples

select year( current_date() ) - year( c.dateOfBirth ) from Customer c
select c from Customer c where year( current_date() ) - year( c.dateOfBirth ) < 30
select o.customer, o.total + ( o.total * :salesTax ) from Order o

The following rules apply to the result of arithmetic operations:
• If either of the operands is Double/double, the result is a Double;
• else, if either of the operands is Float/float, the result is a Float;
• else, if either operand is BigDecimal, the result is BigDecimal;
• else, if either operand is BigInteger, the result is BigInteger (except for division, in which case the result type is not further defined);
• else, if either operand is Long/long, the result is Long (except for division, in which case the result type is not further defined);
• else, (the assumption being that both operands are of integral type) the result is Integer (except for division, in which case the result type is not further defined).

Date arithmetic is also supported, albeit in a more limited fashion. This is due partially to differences in database support and partially to the lack of support for INTERVAL definition in the query language itself.

11.4.6. Concatenation (operation)

HQL defines a concatenation operator in addition to supporting the concatenation (CONCAT) function. This is not defined by JPQL, so portable applications should avoid its use. The concatenation operator is taken from the SQL concatenation operator - ||.
Example 11.19. Concatenation operation example

select 'Mr. ' || c.name.first || ' ' || c.name.last from Customer c where c.gender = Gender.MALE

See Section 11.4.8, "Scalar functions" for details on the concat() function.

11.4.7. Aggregate functions

Aggregate functions are also valid expressions in HQL and JPQL. The semantics are the same as their SQL counterparts. The supported aggregate functions are:
• COUNT (including distinct/all qualifiers) - The result type is always Long.
• AVG - The result type is always Double.
• MIN - The result type is the same as the argument type.
• MAX - The result type is the same as the argument type.
• SUM - The result type of the sum() function depends on the type of the values being summed. For integral values (other than BigInteger), the result type is Long. For floating point values (other than BigDecimal) the result type is Double. For BigInteger values, the result type is BigInteger. For BigDecimal values, the result type is BigDecimal.

Example 11.20. Aggregate function examples

select count(*), sum( o.total ), avg( o.total ), min( o.total ), max( o.total ) from Order o
select count( distinct c.name ) from Customer c
select c.id, c.name, sum( o.total ) from Customer c left join c.orders o group by c.id, c.name

Aggregations often appear with grouping. For information on grouping see Section 11.8, "Grouping".

11.4.8. Scalar functions

Both HQL and JPQL define some standard functions that are available regardless of the underlying database in use. HQL can also understand additional functions defined by the Dialect as well as the application.

11.4.8.1. Standardized functions - JPQL

Here is the list of functions defined as supported by JPQL. Applications interested in remaining portable between JPA providers should stick to these functions.
CONCAT String concatenation function. Variable argument length of 2 or more string values to be concatenated together.
SUBSTRING Extracts a portion of a string value. substring( string_expression, numeric_expression [, numeric_expression] ) The second argument denotes the starting position. The third (optional) argument denotes the length.
UPPER Upper cases the specified string.
LOWER Lower cases the specified string.
TRIM Follows the semantics of the SQL trim function.
LENGTH Returns the length of a string.
LOCATE Locates a string within another string. locate( string_expression, string_expression[, numeric_expression] ) The third argument (optional) is used to denote a position from which to start looking.
ABS Calculates the mathematical absolute value of a numeric value.
MOD Calculates the remainder of dividing the first argument by the second.
SQRT Calculates the mathematical square root of a numeric value.
CURRENT_DATE Returns the database current date.
CURRENT_TIME Returns the database current time.
CURRENT_TIMESTAMP Returns the database current timestamp.

11.4.8.2. Standardized functions - HQL

Beyond the JPQL standardized functions, HQL makes some additional functions available regardless of the underlying database in use.
BIT_LENGTH Returns the length of binary data.
CAST Performs a SQL cast. The cast target should name the Hibernate mapping type to use. See the chapter on data types for more information.
EXTRACT Performs a SQL extraction on datetime values. An extraction extracts parts of the datetime (the year, for example). See the abbreviated forms below.
SECOND Abbreviated extract form for extracting the second.
MINUTE Abbreviated extract form for extracting the minute.
HOUR Abbreviated extract form for extracting the hour.
DAY Abbreviated extract form for extracting the day.
MONTH Abbreviated extract form for extracting the month.
YEAR Abbreviated extract form for extracting the year.
STR Abbreviated form for casting a value as character data.
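A few of these functions in use, as a sketch (Customer and its attributes are borrowed from other examples in this chapter):

select upper( c.name ), substring( c.name, 1, 3 ), length( c.name ) from Customer c
select c from Customer c where locate( 'Acme', c.name ) > 0
select c.name, year( c.dateOfBirth ) from Customer c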
11.4.8.3. Non-standardized functions

Hibernate Dialects can register additional functions known to be available for that particular database product. These functions are also available in HQL (and JPQL, though only when using Hibernate as the JPA provider obviously). However, they would only be available when using that database/Dialect. Applications that aim for database portability should avoid using functions in this category.

Application developers can also supply their own set of functions. This would usually represent either custom SQL functions or aliases for snippets of SQL. Such function declarations are made by using the addSqlFunction method of org.hibernate.cfg.Configuration.

11.4.9. Collection-related expressions

There are a few specialized expressions for working with collection-valued associations. Generally these are just abbreviated forms or other expressions for the sake of conciseness.
SIZE Calculates the size of a collection. Equates to a subquery!
MAXELEMENT Available for use on collections of basic type. Refers to the maximum value as determined by applying the max SQL aggregation.
MAXINDEX Available for use on indexed collections. Refers to the maximum index (key/position) as determined by applying the max SQL aggregation.
MINELEMENT Available for use on collections of basic type. Refers to the minimum value as determined by applying the min SQL aggregation.
MININDEX Available for use on indexed collections. Refers to the minimum index (key/position) as determined by applying the min SQL aggregation.
ELEMENTS Used to refer to the elements of a collection as a whole. Only allowed in the where clause. Often used in conjunction with ALL, ANY or SOME restrictions.
INDICES Similar to elements except that indices refers to the collection's indices (keys/positions) as a whole.

Example 11.21. Collection-related expressions examples

select cal from Calendar cal where maxelement(cal.holidays) > current_date()
select o from Order o where maxindex(o.items) > 100
select o from Order o where minelement(o.items) > 10000
select m from Cat as m, Cat as kit where kit in elements(m.kittens)
// the above query can be re-written in the JPQL standard way:
select m from Cat as m, Cat as kit where kit member of m.kittens
select p from NameList l, Person p where p.name = some elements(l.names)
select cat from Cat cat where exists elements(cat.kittens)
select p from Player p where 3 > all elements(p.scores)
select show from Show show where 'fizard' in indices(show.acts)

Elements of indexed collections (arrays, lists, and maps) can be referred to by the index operator.

Example 11.22. Index operator examples

select o from Order o where o.items[0].id = 1234
select p from Person p, Calendar c where c.holidays['national day'] = p.birthDay and p.nationality.calendar = c
select i from Item i, Order o where o.items[ o.deliveredItemIndices[0] ] = i and o.id = 11
select i from Item i, Order o where o.items[ maxindex(o.items) ] = i and o.id = 11
select i from Item i, Order o where o.items[ size(o.items) - 1 ] = i

See also Section 11.3.5.1, "Special case - qualified path expressions" as there is a good deal of overlap.
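For example, size can be used to restrict results by the number of elements in a collection (a sketch reusing the Customer/orders model from earlier examples; remember that it equates to a subquery in the generated SQL):

select c from Customer c where size( c.orders ) > 10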
11.4.10. Entity type

We can also refer to the type of an entity as an expression. This is mainly useful when dealing with entity inheritance hierarchies. The type can be expressed using the TYPE function, which refers to the type of an identification variable representing an entity. The name of the entity also serves as a way to refer to an entity type. Additionally, the entity type can be parameterized, in which case the entity's Java Class reference would be bound as the parameter value.

Example 11.23. Entity type expression examples

select p from Payment p where type(p) = CreditCardPayment
select p from Payment p where type(p) = :aType

HQL also has a legacy form of referring to an entity type, though that legacy form is considered deprecated in favor of TYPE. The legacy form would have used p.class in the examples rather than type(p). It is mentioned only for completeness.

11.4.11. CASE expressions

Both the simple and searched forms are supported, as well as the two SQL-defined abbreviated forms (NULLIF and COALESCE).

11.4.11.1. Simple CASE expressions

The simple form has the following syntax:

CASE {operand} WHEN {test_value} THEN {match_result} ELSE {miss_result} END

Example 11.24. Simple case expression example

select case c.nickName when null then ' ' else c.nickName end from Customer c
// This NULL checking is such a common case that most dbs
// define an abbreviated CASE form. For example:
select nvl( c.nickName, ' ' ) from Customer c
// or:
select isnull( c.nickName, ' ' ) from Customer c
// the standard coalesce abbreviated form can be used
// to achieve the same result:
select coalesce( c.nickName, ' ' ) from Customer c

11.4.11.2. Searched CASE expressions

The searched form has the following syntax:

CASE [ WHEN {test_conditional} THEN {match_result} ]* ELSE {miss_result} END

Example 11.25. Searched case expression example

select case when c.name.first is not null then c.name.first when c.nickName is not null then c.nickName else ' ' end from Customer c
// Again, the abbreviated form coalesce can handle this a
// little more succinctly
select coalesce( c.name.first, c.nickName, ' ' ) from Customer c

11.4.11.3. NULLIF expressions

NULLIF is an abbreviated CASE expression that returns NULL if its operands are considered equal.

Example 11.26. NULLIF example

// return customers who have changed their last name
select nullif( c.previousName.last, c.name.last ) from Customer c
// equivalent CASE expression
select case when c.previousName.last = c.name.last then null else c.previousName.last end from Customer c

11.4.11.4. COALESCE expressions

COALESCE is an abbreviated CASE expression that returns the first non-null operand. We have seen a number of COALESCE examples above.

11.5. The SELECT clause

The SELECT clause identifies which objects and values to return as the query results. The expressions discussed in Section 11.4, "Expressions" are all valid select expressions, except where otherwise noted. See Section 11.10, "Query API" for information on handling the results depending on the types of values specified in the SELECT clause.

There is a particular expression type that is only valid in the select clause. Hibernate calls this "dynamic instantiation". JPQL supports some of that feature and calls it a "constructor expression".

Example 11.27. Dynamic instantiation example - constructor

select new Family( mother, mate, offspr ) from DomesticCat as mother join mother.mate as mate left join mother.kittens as offspr

So rather than dealing with the Object[] (again, see Section 11.10, "Query API") here we are wrapping the values in a type-safe Java object that will be returned as the results of the query.
The class reference must be fully qualified and it must have a matching constructor. The class here need not be mapped. If it does represent an entity, the resulting instances are returned in the NEW state (not managed!). That is the part JPQL supports as well. HQL supports additional "dynamic instantiation" features. First, the query can specify to return a List rather than an Object[] for scalar results:

Example 11.28. Dynamic instantiation example - list

select new list(mother, offspr, mate.name) from DomesticCat as mother inner join mother.mate as mate left outer join mother.kittens as offspr

The results from this query will be a List<List> as opposed to a List<Object[]>.
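Tying this back to Example 11.27, the Family class named there does not have to be a mapped entity; it only needs a constructor matching the select expressions. A minimal, purely hypothetical sketch:

public class Family {
    private final DomesticCat mother;
    private final DomesticCat mate;
    private final DomesticCat offspring;

    // matches: select new Family( mother, mate, offspr ) ...
    public Family(DomesticCat mother, DomesticCat mate, DomesticCat offspring) {
        this.mother = mother;
        this.mate = mate;
        this.offspring = offspring;
    }
}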