As long as the virtualization of the grid elements is sufficient to make the grid's performance scale well with huge datasets, I see no point in virtualizing the data. In fact, this might even prove counterproductive, as certain data sources could potentially introduce a whole different set of performance issues when retrieving data.
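To make the distinction concrete, here is a minimal sketch of element virtualization: only the rows inside the visible viewport are ever rendered, so render cost is independent of dataset size. The function name and parameters are hypothetical, not from any actual grid API.

```python
# Hypothetical sketch of element (row) virtualization: given the scroll
# position and viewport size, compute which rows need to be rendered.
# Everything outside this window is never materialized as a UI element.
def visible_rows(total_rows, row_height, scroll_top, viewport_height):
    first = scroll_top // row_height
    # ceil(viewport / row_height) plus one overscan row for partial rows
    count = -(-viewport_height // row_height) + 1
    return range(first, min(first + count, total_rows))

# Even with a million rows, only ~11 are rendered at a time here.
rows_to_render = list(visible_rows(1_000_000, 20, 400, 200))
```

The point is that this windowing alone keeps rendering cheap; the full dataset can still live in ordinary memory without the grid slowing down.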
I do see your point about sorting and grouping. In that case, I can't immediately think of any way to do this efficiently without first retrieving ALL the data as a base for those operations, and then somehow fetching only the updates made to the data source and applying them to that base, letting you re-sort, re-group, or re-whatever-you-want the grid with that information. However, this essentially means keeping a copy of the data source in memory (keeping it on disk would be silly), which is pretty much the same as not virtualizing any data in the first place.
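The approach described above can be sketched roughly as follows: keep a full in-memory copy of the data, apply incremental updates to it, then sort locally. All names here (the update record shape, `apply_updates`, etc.) are hypothetical illustrations, not any actual grid or database API.

```python
# Hypothetical sketch: a cached copy of the data source, kept current
# by applying incremental update records, then re-sorted locally.
base = [
    {"id": 1, "name": "carol", "age": 34},
    {"id": 2, "name": "alice", "age": 29},
    {"id": 3, "name": "bob", "age": 41},
]

def apply_updates(rows, updates):
    """Apply a batch of insert/update/delete records to the cached copy."""
    by_id = {row["id"]: row for row in rows}
    for upd in updates:
        if upd["op"] == "update":
            by_id[upd["id"]].update(upd["changes"])
        elif upd["op"] == "delete":
            by_id.pop(upd["id"], None)
        elif upd["op"] == "insert":
            by_id[upd["row"]["id"]] = upd["row"]
    return list(by_id.values())

updates = [
    {"op": "update", "id": 2, "changes": {"age": 30}},
    {"op": "insert", "row": {"id": 4, "name": "dave", "age": 25}},
]

base = apply_updates(base, updates)
# Re-sorting happens entirely on the local copy, with no full re-fetch.
sorted_rows = sorted(base, key=lambda r: r["name"])
```

Note that `base` here is exactly the in-memory copy the paragraph above warns about: it defeats the purpose of virtualizing the data, even though it makes re-sorting cheap.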
I'm wondering, what would this feature be useful for? I haven't been in a situation where I would want or need to request subsets of subsets, and so on and so forth. Also, this feature would be more or less useless on datasets, since they are in fact an in-memory, offline version of a database or a subset of one. I therefore presume this would only be useful when working with a direct connection to the database?
I'm sorry if all this ranting seems like incoherent banter; it probably is. It just seems to me like premature optimization, especially if virtualizing the grid elements provides sufficient optimization in itself.
Imported from legacy forums. Posted by macke (had 487 views)