Make CachedUpdates work on master-detail datasets without requiring LocalMasterDetail
Scenario: 100K customer (master) rows with an average of 10 contact (detail) rows each, totaling about 1M detail rows.
Without CachedUpdates I set LocalMasterDetail to false, and each time I scroll on the master dataset UniDAC fetches the 10 or so detail rows from the database. I can insert, modify, and delete rows, and everything is saved immediately (on Delete or Post).
With CachedUpdates I have to set LocalMasterDetail to true, which means UniDAC fetches all 1M detail rows and filters them locally. With large networked databases this is simply not feasible: it generates far too much network traffic and puts too much load on the client application.
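To make the difference concrete, here is a rough sketch of how I wire up the two configurations (component, table, and field names such as CustomersQ, ContactsQ and customer_id are just placeholders):

  // Server-side master-detail, no cached updates: the detail query is
  // re-executed on every master scroll and fetches only the matching rows.
  CustomersQ.SQL.Text := 'SELECT * FROM customers';
  ContactsQ.SQL.Text  := 'SELECT * FROM contacts WHERE customer_id = :customer_id';
  ContactsQ.MasterSource := CustomersDS;          // TDataSource pointing at CustomersQ
  ContactsQ.Options.LocalMasterDetail := False;   // parameter is refreshed from the master row
  ContactsQ.CachedUpdates := False;

  // Cached updates: local master-detail is required, so the detail query has
  // to select the whole table and filter it on the client side.
  ContactsQ.SQL.Text := 'SELECT * FROM contacts';
  ContactsQ.MasterSource := CustomersDS;
  ContactsQ.MasterFields := 'customer_id';
  ContactsQ.DetailFields := 'customer_id';
  ContactsQ.Options.LocalMasterDetail := True;    // about 1M rows fetched and filtered locally
  ContactsQ.CachedUpdates := True;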
Someone suggested using transactions instead: start a transaction, work without CachedUpdates, and then commit it (instead of applying the updates) or roll it back (instead of canceling the updates). But I think this opens a whole new can of worms, with generators and autoinc values on the one hand and nested transactions on the other.
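For reference, that workaround would look roughly like this (UniConnection1 and SaveChanges are placeholder names); the generator/autoinc and nesting problems above are exactly why I'd rather not go this way:

  // Workaround sketch: emulate cached edits with one long-running transaction.
  UniConnection1.StartTransaction;
  try
    // ... user edits master and detail rows; every Post/Delete goes straight
    // to the server, but inside the open transaction ...
    if SaveChanges then
      UniConnection1.Commit     // plays the role of ApplyUpdates
    else
      UniConnection1.Rollback;  // plays the role of CancelUpdates
  except
    UniConnection1.Rollback;
    raise;
  end;
  // Drawbacks: generator/autoinc values are consumed even if we roll back,
  // and nesting this inside other application transactions gets messy.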
To fix this properly, I think the master-detail datasets should be set up just as they are without CachedUpdates, with LocalMasterDetail set to False. Scrolling on the master dataset should re-execute the detail query and fetch the data from the server, but the detail dataset itself should keep track of pending updates and merge them back into the result set whenever the same master row is selected again. So the update cache should hold detail rows that may well refer to more than one master row.
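Conceptually, I picture something like the following, written in Delphi syntax purely as an illustration of the proposed behavior, not as existing UniDAC API (all type and member names are hypothetical):

  uses
    System.Generics.Collections, Data.DB, Uni;

  type
    TEditKind = (ekInsert, ekModify, ekDelete);

    // One pending edit on a detail row (changed field values omitted for brevity).
    TPendingEdit = record
      Kind: TEditKind;
      DetailKey: Variant;   // primary key of the detail row
    end;

    // Hypothetical update cache: pending detail edits grouped by master key,
    // so it can hold rows that belong to many different master rows at once.
    TPendingDetailCache = class
    private
      FEdits: TObjectDictionary<string, TList<TPendingEdit>>;
    public
      // Record an edit made while the given master row was active.
      procedure Add(const MasterKey: string; const Edit: TPendingEdit);
      // After the detail query is re-executed for MasterKey, replay the
      // stored edits on the freshly fetched rows.
      procedure MergeInto(Detail: TDataSet; const MasterKey: string);
      // ApplyUpdates equivalent: post every pending edit to the server.
      procedure ApplyAll(Connection: TUniConnection);
    end;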