Recently I was asked about the getQueryHitCount method of the ViewObjectImpl class. The method is commonly used in various implementations of programmatically populated view objects. JDeveloper generates this method in the VO's implementation class, and it is supposed to be overridden with a custom implementation returning an estimated row count of the VO's row set. The developer was experimenting with getQueryHitCount, returning different fixed values, and sometimes it worked for him and sometimes it didn't. Sometimes a table component rendered exactly the number of rows he needed, but sometimes the return value of getQueryHitCount didn't matter at all: no matter what, the table rendered the wrong number of rows and ignored the value of getQueryHitCount. So I was asked whether I could explain this behavior.
The secret is that in some cases the framework doesn't execute the getQueryHitCount method at all. A table component renders its rows in portions over a series of subsequent requests. The size of a portion depends on the RangeSize attribute of the iterator binding; the default value is 25. When a table is rendering a portion of rows, it asks the view object for an estimated row count. If the view object has already fetched all its rows, it simply returns the number of rows in its query collection, and getQueryHitCount is not invoked. In other words, if the iterator's range size is greater than the total number of rows in the VO, getQueryHitCount is ignored and the table renders the full set of rows.
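This decision logic can be illustrated with a small self-contained sketch. The class and field names below are assumptions for illustration, not the real ADF classes, but the branching mimics the behavior described above: once fetching is complete, the in-memory row count is returned and the custom getQueryHitCount is never consulted.

```java
import java.util.List;

// Hypothetical sketch (not the real oracle.jbo API): models when the framework
// consults getQueryHitCount while estimating a row count for a table.
class RowSetSketch {
    private final List<String> fetchedRows; // rows fetched into the query collection so far
    private final boolean fetchComplete;    // has the VO fetched everything?
    private int hitCountCalls = 0;          // how many times getQueryHitCount was invoked

    RowSetSketch(List<String> fetchedRows, boolean fetchComplete) {
        this.fetchedRows = fetchedRows;
        this.fetchComplete = fetchComplete;
    }

    // Stand-in for a custom getQueryHitCount() override returning a fixed value.
    long getQueryHitCount() {
        hitCountCalls++;
        return 1000;
    }

    // Mimics the estimated-row-count logic: once fetching is complete, the
    // framework simply returns the in-memory count and skips getQueryHitCount.
    long getEstimatedRowCount() {
        if (fetchComplete) {
            return fetchedRows.size();
        }
        return getQueryHitCount();
    }

    int getHitCountCalls() {
        return hitCountCalls;
    }
}

public class Demo {
    public static void main(String[] args) {
        // All 3 rows fetched (total rows < RangeSize): the fixed value is ignored.
        RowSetSketch small = new RowSetSketch(List.of("a", "b", "c"), true);
        System.out.println(small.getEstimatedRowCount()); // prints 3
        System.out.println(small.getHitCountCalls());     // prints 0

        // Fetching incomplete: the custom hit count is consulted.
        RowSetSketch big = new RowSetSketch(List.of("a", "b", "c"), false);
        System.out.println(big.getEstimatedRowCount());   // prints 1000
        System.out.println(big.getHitCountCalls());       // prints 1
    }
}
```

This is exactly why the developer's fixed return values sometimes seemed to have no effect: whenever his test data fit within a single range, the first branch won.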
But if fetching is not complete when the VO is asked for an estimated row count, the VO really does try to estimate it by invoking the getQueryHitCount method. The default implementation generates a SQL query like "select count(*) ..." to force the database to compute the exact value. For programmatically populated view objects we have to provide a custom implementation of the method, depending on the data source we use, and sometimes this gets quite complicated. Alternatively, we can simply return -1 from getQueryHitCount. The question is: which is better, computing the real number of rows or just returning -1? In most cases the difference shows up only in table rendering, specifically in scroller rendering. When a table knows the real number of rows in the collection, it renders its scroller from the very beginning with the size and position matching that number. Otherwise, the size and position of the scroller are computed from the number of rows the table has fetched so far, and they are recomputed every time the number of fetched rows grows. As a result, the scroller keeps getting smaller while the user scrolls the table, fetching more and more rows.
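The trade-off between the two strategies can be sketched as follows. The class, method names, and the thumb-sizing formula here are illustrative assumptions, not ADF internals, but they capture the point: with a known total, the scroller thumb is stable from the start; with -1, its size is derived from the rows fetched so far and shrinks as more arrive.

```java
import java.util.List;

// Illustrative sketch (assumed names, not the ADF API): contrasts the two
// getQueryHitCount strategies for a programmatic VO backed by an in-memory
// list, and models how the estimated count drives scroller thumb sizing.
public class ScrollerModel {
    // Exact strategy: we know the backing collection, so report its true size.
    static long exactHitCount(List<?> backingData) {
        return backingData.size();
    }

    // "Unknown" strategy: -1 tells the framework the total is not known.
    static long unknownHitCount() {
        return -1;
    }

    // Rough model of scroller thumb sizing: with a known total the thumb
    // fraction is stable; with -1 it is based on rows fetched so far and
    // shrinks each time more rows are fetched.
    static double thumbFraction(int rangeSize, long estimatedTotal, int fetchedSoFar) {
        long total = (estimatedTotal >= 0) ? estimatedTotal : fetchedSoFar;
        return (double) rangeSize / Math.max(total, rangeSize);
    }

    public static void main(String[] args) {
        // Known total of 100 rows: thumb is 25/100 from the first request on.
        System.out.println(ScrollerModel.thumbFraction(25, 100, 25)); // prints 0.25

        // Unknown total (-1): thumb starts full-size, then shrinks as the
        // user scrolls and the table fetches more rows.
        System.out.println(ScrollerModel.thumbFraction(25, -1, 25));  // prints 1.0
        System.out.println(ScrollerModel.thumbFraction(25, -1, 100)); // prints 0.25
    }
}
```

So returning -1 is perfectly workable when computing an exact count is expensive; the cost is purely cosmetic, in the form of the shrinking scroller described above.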
That's it!