I’m currently working on a system which places arbitrary limits on the code in order to try to maintain performance. One such limit is the number of statements that can be executed in a single transaction.
The code must truncate a List. Looking at the list’s methods it appears to be an array-backed list. (If I wanted to provide a list that supported indexing and I was basing my environment on Java, I’d use ArrayList too. It would also be pointless to allow the programmer to pre-allocate space if the underlying storage was a LinkedList.) The list does not provide a truncate method. The logic is as follows:
while(list.size() > REQUIRED) { list.remove(REQUIRED); }
Anyone used to writing efficient code would be screaming at that. Each remove() requires every item after the removal point to be shuffled down one place. If we remove 1000 items from the array then that requires 1000 memmove() operations, assuming the underlying system is using native optimisations. If it is not, then it requires on the order of ∑(1 to 1000) individual copy operations, roughly half a million!
The system has a limit on the number of statements that can be executed. The above code will execute 2000 statements to truncate the array: 1000 iterations, with the size() comparison and the remove() each counted as one statement. If we remove from the end instead then we’ll use 3001 statements, since each iteration picks up a third statement for the decrement, plus one for the initialisation. If we copy then we’ll use more heap space, which is also limited.
The following will work, but it is horrible code. I’d hope that the APEX governor system counts the pre-decrement and compare as a single “Statement”:
for(n = array.length() - 1; --n > REQUIRED;) { array.remove(n); }
It is horrible because, looking at it now, I cannot tell you whether it will actually work!
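For contrast, here is the tail-first version I would actually trust. This is only a sketch against the same list API used above (size() and remove(index)), with a hypothetical List<Integer> named items standing in for the real data; I have not run it against the governor.

// Truncate by removing from the tail: the removed element has nothing
// after it, so no remaining items need to be shuffled down.
for (Integer n = items.size() - 1; n >= REQUIRED; n--) {
    items.remove(n);
}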
The governors I’ve been working with recently have made a lot of sense. They deal with callouts to external systems and enforce what is good practice in a large, scalable system. Some of the results may be less palatable to the business people who want actions to happen immediately, which on this platform would mean hooking into triggers, but being forced to work with asynchronous batch processing does make sense on a large system.
The governors are changing. Now the limit is based on CPU time, not statements. This completely changes the optimisation, admittedly to a more sensible one. The loop above will now be penalised because it is the least efficient way of solving the problem.
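Under a CPU-time limit the cheapest route is probably the copy I dismissed earlier on heap grounds: build a new list containing just the first REQUIRED elements in a single forward pass. Again this is only a sketch using the hypothetical items list, not something I have profiled against the new governor.

// Copy-based truncation: one forward pass, no per-remove shuffling,
// at the cost of temporarily holding the kept elements twice on the heap.
List<Integer> kept = new List<Integer>();
for (Integer i = 0; i < REQUIRED && i < items.size(); i++) {
    kept.add(items[i]);
}
items = kept;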