- adding RAM to improve system memory limitations
- altering network parameters to reduce unnecessary communication overheads
- altering process priorities to increase the amount of time allocated by the CPU to selected processes
- altering specifications to eliminate performance conflicts forced by inappropriate specifications
- altering the data structure to improve the performance of a class without changing its external interface
- anticipating data requirements to transfer extra data together with required data in distributed applications
- applying loop optimizations to speedup runtime processing
- applying many small optimizations to speedup overall execution
- avoiding growing files to reduce operating system overheads
- avoiding ‘new’ to reduce object creation and garbage collection overheads
- avoiding access control to speedup method invocations
- avoiding blocking the paint() method to eliminate interface blocking and maintain responsiveness
- avoiding casts to speedup runtime processing
- avoiding creating copies to reduce object creation and garbage collection overheads
- avoiding decompression to speedup runtime processing
- avoiding dependencies to eliminate blocking and maintain responsiveness
- avoiding garbage collection to reduce garbage collection overheads
- avoiding initialization to eliminate unnecessary overheads
- avoiding locks to reduce synchronization overheads and to eliminate blocking and maintain responsiveness
- avoiding logging to eliminate unnecessary i/o and method call overheads
- avoiding method calls to reduce runtime overheads
- avoiding object creation to reduce object creation and garbage collection overheads
- avoiding serialized execution to support multiple processors and reduce synchronization overheads
- avoiding speculative casts to reduce runtime overheads
- avoiding String creation to reduce object creation and garbage collection overheads
- avoiding synchronization to reduce synchronization overheads
- avoiding system paging to improve runtime performance
- avoiding temporary objects to reduce object creation and garbage collection overheads
- avoiding too much parallelism to prevent excessive runtime overheads
- avoiding tuning: one of the simplest tuning techniques
- avoiding unnecessary assignments to reduce runtime overheads
- avoiding wrapping primitives to reduce runtime overheads, object creation overheads and garbage collection overheads
- basic tuning techniques: the simplest techniques to try first
- batching to reduce unnecessary communication overheads in distributed applications and to improve performance by combining activities
- batching data to reduce unnecessary communication overheads in distributed applications
- buffering i/o to reduce i/o overheads (example sketch below)
- bypassing serialization overheads to eliminate unnecessary runtime overheads
- bypassing shared resources to reduce performance costs
- caching to improve the performance of repeated access and calculations
- caching distributed data to reduce unnecessary communication overheads in distributed applications and to speedup repeated access of distributed data
- caching frequently accessed elements to improve the performance of repeated access and calculations
- caching i/o to reduce unnecessary communication overheads in distributed applications and to speedup repeated access of distributed objects
- caching InetAddress to reduce unnecessary communication overheads in address lookups
- caching intermediate results to improve the performance of calculations
- canonicalizing objects to reduce object creation and garbage collection overheads and to speedup comparisons of objects
- choosing faster collections to improve the performance of collection objects
- clustering files together to reduce i/o and operating system overheads
- clustering objects to reduce i/o and conversion overheads
- combining messages to reduce communication overheads in distributed applications
- comparing Strings by identity to speedup comparisons
- compressing data to speedup network transfers
- converting recursion to iteration to speedup runtime processing (example sketch below)
- cutting dead code to eliminate unnecessary runtime overheads
- decoupling i/o from other activities to eliminate blocking and maintain responsiveness
- designing applets to improve applet download time
- desynchronizing classes to reduce synchronization overheads
- duplicating data to reduce communication overheads in distributed applications
- eliminating common expressions to reduce runtime overheads
- eliminating error checking to reduce runtime overheads
- eliminating logging to reduce i/o and runtime overheads
- eliminating null tests to reduce runtime overheads
- eliminating prints to reduce i/o and runtime overheads
- eliminating unnecessary variables to reduce runtime overheads
- enumerating constants to speedup comparisons of objects and improve memory requirements
- externalizing instead of serializing to speedup serialization
- faster conversions to strings to improve runtime performance
- faster data conversion to improve runtime performance and speedup serialization
- faster formatting to improve runtime performance
- faster hostname translation to reduce unnecessary communication overheads in address lookups
- faster i/o using cached filesystems
- faster manipulation of array elements to improve runtime performance
- faster manipulation of variables to improve runtime performance
- faster startup from cached filesystems
- faster startup from disk sweet spots
- faster tests to improve runtime performance
- flattening objects to reduce object creation and garbage collection overheads
- flexible method entry points to support faster methods
- focusing on object creation to reduce garbage collection and object creation overheads
- focusing on shared resources to eliminate blocking and maintain responsiveness
- identifying performance limitations to eliminate performance conflicts
- improving case-insensitive searches to speedup comparisons of objects
- improving low level connections to reduce unnecessary communication overheads
- improving search strategies to speedup runtime processing
- improving the user interface to improve the user’s perception of application performance
- increasing swap space to improve system memory limitations
- initializing variables once only, to eliminate unnecessary runtime overheads
- inlining to speedup runtime execution
- inlining in bottlenecks for targeted speedup of runtime processing
- inserting delays to stabilize the user’s perception of the performance
- isolating swap to improve system i/o
- journaling to speedup i/o
- keeping files open to reduce i/o overheads
- keeping spare capacity to reduce system overheads
- load balancing to improve runtime performance
- load balancing TCP/IP to improve network server performance
- locking memory to specify the amount of memory allocated by the system to selected processes
- managing threads to reduce runtime overheads
- measuring network speeds to improve the user’s perception of the performance
- minimizing communication to reduce unnecessary communication overheads in distributed applications
- minimizing CPU contention to reduce operating system overheads
- minimizing server down-time to improve the user’s perception of the performance
- minimizing transaction time to reduce blocking and maintain responsiveness and to improve i/o throughput
- monitoring the application to identify performance changes
- monitoring the system to identify performance changes
- monitoring threads to improve performance
- moving loops to native routines to speedup runtime processing
- moving object creation time to speedup runtime processing
- multiplexing to reduce unnecessary communication overheads in distributed applications
- multiplexing i/o using select() to reduce runtime overheads
- multithreading stateful singletons to support multiple processors and reduce synchronization overheads, and to reduce object creation and garbage collection overheads
- optimizing array matching algorithms to speedup comparisons of objects
- optimizing collections to improve the performance of a class without changing its external interface
- optimizing comparisons in sorts to speedup the sort
- optimizing CPU utilization
- optimizing for update or access to improve the performance of a class without changing its external interface
- optimizing load balancing to improve runtime performance
- optimizing loop termination tests to improve runtime performance
- optimizing network packet sizes to reduce unnecessary communication overheads
- optimizing sorting
- overriding default serialization to speedup serialization
- packaging classfiles to reduce i/o overheads
- parallelizing i/o to speedup i/o
- partially reading objects to speedup i/o and data conversions
- partitioning applications to reduce unnecessary communication overheads in distributed applications
- partitioning data to reduce unnecessary communication overheads in distributed applications
- partitioning system resources to allocate determinate resources
- preallocating objects to speedup runtime processing
- predicting performance for analysis and design phases
- pre-sizing collections to speedup runtime processing and to reduce object creation and garbage collection overheads (example sketch below)
- putting i/o in the background to reduce blocking, maintain responsiveness and to improve i/o throughput
- reading forwards to speedup i/o
- recording all changes to identify performance changes
- redesigning for less communications to reduce unnecessary communication overheads in distributed applications
- reducing dropped packets to reduce network retransmissions
- reducing features to reduce performance overheads
- reducing method call frequency to speedup runtime processing
- reducing overheads at the design stage to improve performance
- reducing total transmissions to reduce unnecessary communication overheads in distributed applications
- reducing unnecessary communication overheads in distributed applications
- removing unnecessary transactions to reduce blocking, maintain responsiveness and to improve i/o throughput
- removing unused fields to reduce object creation and garbage collection overheads
- renaming to shorter names to reduce class loading and network transfer times
- replacing classes to eliminate extraneous overheads
- replacing object collections with arrays to reduce object creation and garbage collection overheads
- replacing objects with primitives to reduce object creation and garbage collection overheads
- replacing primitives with ints to speedup runtime processing
- reusable object pools to reduce object creation and garbage collection overheads (example sketch below)
- reusing collections to reduce object creation and garbage collection overheads
- reusing exceptions to speedup runtime processing and to reduce object creation and garbage collection overheads
- reusing linked list nodes to reduce object creation and garbage collection overheads
- reusing objects to reduce object creation and garbage collection overheads
- reusing parameters to reduce object creation and garbage collection overheads
- rewriting switch statements to speedup runtime processing
- scheduling recently used threads to improve runtime performance
- searching compressed data directly to speedup runtime processing and improve memory requirements
- shrinking classfiles to reduce class loading and network transfer times
- sorting approximately to speedup runtime processing
- sorting directly on a field to speedup runtime processing
- sorting linked lists faster
- sorting twice to speedup runtime processing
- specifying and eliminating environments for analysis and design phases
- specifying performance at the analysis and design phases
- speculative optimization by adaptive compilers
- speeding applet downloads to improve applet download time
- speeding network transfers to speedup serialization
- speeding object creation time
- speeding up array copying
- splitting transfers to eliminate blocking and maintain responsiveness
- striping disks to speedup i/o
- stubbing to reduce unnecessary communication overheads in distributed applications
- targeting easier fixes to speedup tuning
- threading class loading to improve startup time
- threading data structures to speedup intense calculations
- threading slow operations to improve runtime performance
- threading strategies to improve runtime performance
- tightly specifying SQL to speedup SQL queries
- timing out processes to reduce operating system overheads
- timing out transactions to reduce blocking and maintain responsiveness
- transferring blame to improve the user’s perception of the performance
- tuning disks to speedup i/o
- unrolling loops to make them faster
- upgrading disks to speedup i/o and system performance
- using a local DNS server to reduce unnecessary communication overheads in address lookups
- using array lookups to replace runtime processing (example sketch below)
- using asynchronous communications to eliminate blocking and maintain responsiveness
- using asynchronous i/o to reduce blocking and maintain responsiveness and to improve i/o throughput
- using atomic operations to reduce synchronization overheads
- using batch processing to improve performance by combining activities
- using better hardware to speedup i/o, CPU, system performance and network bandwidth
- using bigger buffers to improve buffer effects
- using change-objects to improve transaction times by moving changes into change-objects
- using char arrays instead of Strings to reduce object creation and garbage collection overheads, and to speedup character processing and avoid String performance limitations
- using code motion to speedup runtime processing
- using CollationKeys instead of Collators to speedup sorting
- using comparison by identity for faster comparisons of objects
- using comparisons to 0 to speedup comparisons of data
- using compression for large transfers to speedup data transfers and reduce communication overheads
- using compression to speedup i/o
- using data specific comparison algorithms to speedup comparisons of objects
- using dummy objects to speedup method invocations
- using exception terminated loops to speedup loops
- using extra method parameters to reduce method invocations
- using HashMap instead of Hashtable to improve performance
- using hybrid structures to improve the performance of a class without changing its external interface
- using immutable objects to reduce object creation and garbage collection overheads and to speedup object access
- using int data types to speedup runtime processing
- using interfaces to provide implementation flexibility
- using JDBC optimizations to improve runtime performance
- using lazy initialization to reduce object creation and garbage collection overheads, and to speedup runtime processing and improve memory requirements (example sketch below)
- using locks to reduce conflicts generated by simultaneous access to shared resources
- using memory mapped files to speedup i/o
- using native method calls to improve performance
- using parallelism to improve performance
- using plain arrays to improve performance
- using prepared statements to speedup SQL queries
- using prime numbers for hashing functions to improve performance
- using raw partitions to speedup i/o
- using shared memory to speedup i/o
- using singletons to speedup runtime processing and improve memory requirements, and to reduce object creation and garbage collection overheads
- using slack system time to improve performance
- using specialized keys to improve lookup times
- using specialized Maps to improve access and updates for particular data types
- using specialized sorts to speedup sorting
- using stateless objects to reduce blocking and maintain responsiveness
- using static fields instead of instance fields to reduce object creation and garbage collection overheads
- using statically defined queries to speedup database queries
- using strength reduction to speedup runtime processing
- using string canonicalization to speedup comparisons of objects and to reduce object creation and garbage collection overheads (example sketch below)
- using String methods for comparisons to speedup comparisons
- using StringBuffer instead of string concatenation to reduce object creation and garbage collection overheads (example sketch below)
- using thread pools to reduce runtime overheads
- using transactionless modes to reduce blocking and maintain responsiveness and to improve i/o throughput and performance
- using transient fields to avoid serialization and speedup serialization
- using two collections to improve speeds for different types of access and update
- using type specific sorting to speedup runtime processing
- using weakly referenced objects to reduce object creation and garbage collection overheads
- violating encapsulation to speedup field access
- wrapping objects to provide implementation flexibility
- wrapping objects in sorts to speedup comparisons of objects
- unwrapping synchronized wrapped classes to improve performance
Some tuning techniques covered in the Java Performance Tuning book
Last modified 21 August 2000
--Jack Shirazi
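The short sketches below illustrate a few of the techniques listed above. They are minimal Java illustrations added for this page, not excerpts from the book; all class, method, and variable names in them are invented, and collection code is written against the pre-generics API of the period. First, buffering i/o: wrapping a reader in a buffered wrapper so that most reads are served from memory rather than by a separate operating-system call each time. The file name is taken from the command line.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BufferExample {
    // Counts lines in the file named on the command line. The BufferedReader
    // pulls the file in large chunks, so most readLine() calls are satisfied
    // from its in-memory buffer instead of issuing a system call each time.
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        int lines = 0;
        while (in.readLine() != null) {
            lines++;
        }
        in.close();
        System.out.println(lines + " lines");
    }
}
```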
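Converting recursion to iteration: a sketch that sums an array both ways. The recursive version pays a method call and a stack frame per element and can overflow the stack on large inputs; the iterative version pays neither cost.

```java
public class IterationExample {
    // Recursive form: one call frame per array element.
    static long sumRecursive(int[] data, int index) {
        if (index == data.length) {
            return 0;
        }
        return data[index] + sumRecursive(data, index + 1);
    }

    // Iterative form: no call overhead and no StackOverflowError risk.
    static long sumIterative(int[] data) {
        long total = 0;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = { 3, 1, 4, 1, 5, 9 };
        System.out.println(sumRecursive(data, 0) == sumIterative(data)); // true
    }
}
```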
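Pre-sizing collections: when the final size is known, or can be estimated, passing it to the constructor avoids the repeated reallocation and copying of the backing array as the collection grows. The element count here is arbitrary.

```java
import java.util.ArrayList;
import java.util.List;

public class PresizeExample {
    public static void main(String[] args) {
        int n = 100000; // arbitrary element count

        // Default capacity: the backing array is reallocated and its contents
        // copied every time it fills up during the n additions.
        List grown = new ArrayList();
        for (int i = 0; i < n; i++) {
            grown.add(String.valueOf(i));
        }

        // Pre-sized: a single allocation of the backing array, no copying.
        List presized = new ArrayList(n);
        for (int i = 0; i < n; i++) {
            presized.add(String.valueOf(i));
        }

        System.out.println(grown.size() == presized.size()); // true
    }
}
```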
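Reusable object pools: instead of allocating a short-lived object on every use and leaving it for the garbage collector, callers borrow from a pool and return the object afterwards. The byte-buffer pool and its 4KB buffer size are invented for illustration; a real pool would usually cap its size and reset object state on release.

```java
import java.util.ArrayList;

public class PoolExample {
    private final ArrayList free = new ArrayList();

    // Hand out a pooled buffer if one is available, otherwise create a new one.
    public synchronized byte[] acquire() {
        int last = free.size() - 1;
        if (last >= 0) {
            return (byte[]) free.remove(last);
        }
        return new byte[4096];
    }

    // Return a buffer to the pool instead of letting it become garbage.
    public synchronized void release(byte[] buffer) {
        free.add(buffer);
    }

    public static void main(String[] args) {
        PoolExample pool = new PoolExample();
        byte[] first = pool.acquire();
        pool.release(first);
        byte[] second = pool.acquire();
        System.out.println(first == second); // true: the same buffer was reused
    }
}
```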
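Using array lookups to replace runtime processing: a precomputed table answers a question with a single array index instead of a chain of comparisons in the inner loop. The vowel-counting task and the 128-entry ASCII table are invented for illustration.

```java
public class LookupExample {
    // Precomputed table: IS_VOWEL[c] is true for the ASCII vowels.
    private static final boolean[] IS_VOWEL = new boolean[128];
    static {
        String vowels = "aeiouAEIOU";
        for (int i = 0; i < vowels.length(); i++) {
            IS_VOWEL[vowels.charAt(i)] = true;
        }
    }

    // One range check and one array index per character replaces up to
    // ten character comparisons (or a String search) in the inner loop.
    static int countVowels(String s) {
        int count = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 128 && IS_VOWEL[c]) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countVowels("performance tuning")); // prints 6
    }
}
```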
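Lazy initialization: defer building an expensive structure until something actually asks for it, so objects that never need it pay nothing for it. The table of squares stands in for any costly field; note that the null check is not thread-safe as written, so synchronize it if instances are shared across threads.

```java
public class LazyExample {
    private int[] table; // built on first use, not in the constructor

    private int[] getTable() {
        if (table == null) { // not thread-safe as written
            table = new int[10000];
            for (int i = 0; i < table.length; i++) {
                table[i] = i * i;
            }
        }
        return table;
    }

    public int square(int i) {
        return getTable()[i];
    }

    public static void main(String[] args) {
        LazyExample e = new LazyExample();
        System.out.println(e.square(12)); // 144; the table is built on this first call
    }
}
```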
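String canonicalization and comparison by identity: String.intern() returns one shared instance per distinct value, after which equal strings can be compared with ==, a single reference comparison, instead of a character-by-character equals(). Any canonicalization scheme, such as a map of previously seen values, gives the same effect.

```java
public class InternExample {
    public static void main(String[] args) {
        // Two distinct String objects with equal contents.
        String a = new String("status=OK");
        String b = new String("status=OK");
        System.out.println(a == b);      // false: different objects
        System.out.println(a.equals(b)); // true, but scans the characters

        // Canonicalize both: intern() returns the single shared instance
        // for this value, so identity comparison is now sufficient.
        String ca = a.intern();
        String cb = b.intern();
        System.out.println(ca == cb);    // true: one reference comparison
    }
}
```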
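StringBuffer instead of string concatenation: each += on a String allocates a new String and copies everything accumulated so far, so the cost grows quadratically with the result length; appending to one StringBuffer and converting once at the end avoids the temporaries. The join task is invented for illustration.

```java
public class ConcatExample {
    // Naive version: every += allocates a new String and copies all the
    // characters accumulated so far.
    static String joinSlow(String[] words) {
        String result = "";
        for (int i = 0; i < words.length; i++) {
            result += words[i];
            if (i < words.length - 1) {
                result += ",";
            }
        }
        return result;
    }

    // One StringBuffer reused across the loop, converted to a String once.
    static String joinFast(String[] words) {
        StringBuffer buf = new StringBuffer();
        for (int i = 0; i < words.length; i++) {
            buf.append(words[i]);
            if (i < words.length - 1) {
                buf.append(',');
            }
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        String[] words = { "alpha", "beta", "gamma" };
        System.out.println(joinFast(words).equals(joinSlow(words))); // true
    }
}
```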