Table of Contents
...
postgresClient.selectSingle(sql.toString(), Tuple.of(type.getSequenceName()))
passing in a custom query that calls the nextval() function, which has to be executed on the DB write node. In fact, the RMB methods selectSingle(), selectStream(), and select(), along with their variants, all accept custom SQL statements, so a client could pass in an UPDATE or a SELECT nextval() call. Consequently, a new set of "read"-only methods (selectSingleRead(), selectStreamRead(), selectRead()) was created so that clients' future calls can take advantage of querying the read-only DB node.
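To make the intended split concrete, here is a minimal sketch of when a caller should stay on selectSingle() versus move to the new selectSingleRead(). The DAO wrapper and SQL are illustrative, and the exact Future-returning signatures are assumptions based on RMB's Vert.x-based PostgresClient; only the method names come from the description above.

```java
// Minimal sketch; the DAO class and SQL are illustrative, and the exact
// RMB signatures (Future-returning variants) are assumptions.
import io.vertx.core.Future;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.Tuple;
import org.folio.rest.persist.PostgresClient;

public class SequenceDao {
  private final PostgresClient postgresClient;

  public SequenceDao(PostgresClient postgresClient) {
    this.postgresClient = postgresClient;
  }

  // nextval() advances the sequence (a write), so it must run on the
  // DB write node: keep using selectSingle().
  public Future<Row> nextSequenceValue(String sequenceName) {
    return postgresClient.selectSingle("SELECT nextval($1)",
        Tuple.of(sequenceName));
  }

  // A side-effect-free lookup can use the new selectSingleRead(),
  // which routes the query to the read-only DB node.
  public Future<Row> lastSequenceValue(String sequenceName) {
    return postgresClient.selectSingleRead(
        "SELECT last_value FROM pg_sequences WHERE sequencename = $1",
        Tuple.of(sequenceName));
  }
}
```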
After the fixes were made, rolling back the selectSingle() method so that it can query the write node again, DI jobs of 25K records were rerun; here are their results:
...
A few things to observe:
- The read/write split in RMB does have some impact on DI, but little, if any, positive impact.
- Duration of the tests: the 1K and 25K imports are long and drawn out. The 1K import's CPU % showed 3 spikes, whereas the 25K import had 2. When the CPU was not spiking, there was a lull in the import and the import's completion percentage did not increase. Perhaps some DI code is waiting for the reader to catch up?
- There is little CPU activity on the DB read node; most of the spikes are on the DB write node. This means that the current implementation of Data Import does not use the RMB methods that have already been converted to query the DB read node. In the future, it would be great if DI could use the new read-only or existing "read" methods in RMB that point to the DB read node for efficiency and performance gains.
...
- Not all modules in the RTAC workflow had this RMB change
- One crucial RMB call that RTAC/mod-inventory-storage makes, selectStream(), was not using the read-only DB node. It is not suitable to make selectStream() itself use the read-only DB node because, as in the case of DI, a custom query (one that may require the DB write node) may be passed in. Therefore a new method, selectStreamRead(), was created for RTAC to use; see the sketch after this list.
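The sketch below shows how RTAC-style streaming code might opt into the read node via selectStreamRead(). The handler-based signature is an assumption modeled on Vert.x's RowStream API, and the table and column names are illustrative; only the method name selectStreamRead() comes from the text above.

```java
// Hedged sketch: streaming holdings for an RTAC-style response. The
// handler signature and read-node routing are assumptions based on the
// description above; the SQL is illustrative.
import io.vertx.core.AsyncResult;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.RowStream;
import io.vertx.sqlclient.Tuple;
import org.folio.rest.persist.PostgresClient;

public class HoldingsStreamer {

  public void streamHoldings(PostgresClient postgresClient, String instanceId) {
    // A plain SELECT with no side effects, so it is safe to route to the
    // read-only DB node via the new selectStreamRead() variant.
    postgresClient.selectStreamRead(
        "SELECT jsonb FROM holdings_record WHERE instanceId = $1",
        Tuple.of(instanceId),
        (AsyncResult<RowStream<Row>> ar) -> {
          if (ar.failed()) {
            // propagate the failure (e.g., fail the HTTP response)
            return;
          }
          RowStream<Row> stream = ar.result();
          stream.handler(row -> {
            // emit each holdings record to the response as it arrives
          });
          stream.endHandler(end -> {
            // all rows consumed; complete the response
          });
        });
  }
}
```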
Failover Testing
In a High Availability environment with at least one write DB node and one read DB node, a failover promotes the read DB node to become the write node, and a new read node is spun up. This is all managed by AWS (when hosted on the AWS cloud). The PTF environment is hosted on AWS and has one DB write node and one read node. For this testing, a CICO (check-in/check-out) test was executed and a failover was triggered. In a normal situation, here is what the outcome of the failover looks like:
...
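For reference, a failover like the one above can also be initiated on demand against an Aurora/RDS cluster. The sketch below uses the AWS SDK for Java v2 FailoverDBCluster operation; the cluster identifier is a hypothetical placeholder, and the report does not say which tooling PTF actually used to trigger the failover.

```java
// Hedged sketch, not from the report: manually triggering an Aurora cluster
// failover with the AWS SDK for Java v2. The cluster identifier below is a
// hypothetical placeholder.
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.FailoverDbClusterRequest;

public class FailoverTrigger {
  public static void main(String[] args) {
    try (RdsClient rds = RdsClient.create()) {
      // Promotes the read node to writer and spins up a replacement reader,
      // matching the behavior described above.
      rds.failoverDBCluster(FailoverDbClusterRequest.builder()
          .dbClusterIdentifier("ptf-db-cluster") // hypothetical identifier
          .build());
    }
  }
}
```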