SQL Design Considerations
February 28, 2010
In order to assist us in designing your database, please have your application developers and DBA review and answer the following questions.
SQL:
• Has the SQL been explained (run through EXPLAIN PLAN) and optimized?
• Have ‘bind variables’ been used? e.g. select * from my.table where name = :b1; Bind variables are not substituted until the statement has been successfully parsed. This allows the SQL statement to be shared even though the bind variable values differ.
• Where possible, has the HOLD_CURSOR precompiler option been used?
• Are explicit rather than implicit cursors being used? If not, is there a reason?
o Explicit cursors usually perform better than implicit cursors. An implicit cursor (one without a DECLARE) usually requires two fetches to ensure that there are no more rows to fetch.
• If part of your processing requires deleting all rows from a table, are you using the ‘truncate’ command? This command performs much faster than the ‘delete’ command when deleting all rows.
• Have hints been used where necessary to help the optimizer choose the appropriate path?
• Are sequences used? Should they be cached?
• If using wildcard searches (e.g. XXX%) and an index is available, code with a hint, e.g. select /*+ index(tab indx) */ …, to tell Oracle to use the index.
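The points above can be sketched as follows. This is illustrative only; the table and index names (emp, staging_emp, emp_ename_idx) are hypothetical.

```sql
-- Bind variable: the statement text stays constant, so the parsed
-- statement can be shared across different :b1 values.
SELECT * FROM emp WHERE ename = :b1;

-- Truncate instead of delete when removing ALL rows
-- (DDL: much faster, but cannot be rolled back).
TRUNCATE TABLE staging_emp;

-- Hint telling Oracle to use an assumed index for a wildcard search.
SELECT /*+ INDEX(emp emp_ename_idx) */ *
  FROM emp
 WHERE ename LIKE 'SMI%';
```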
Analyze:
• After tables are loaded, has analyze been run?
• If the tables will increase dramatically with time, should we cron an analyze job?
This puts statistics in the data dictionary and helps Oracle to choose the proper path when using the cost based optimizer.
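A minimal sketch of the analyze step, assuming a hypothetical table emp:

```sql
-- Gather statistics after a load so the cost-based optimizer
-- has current row counts and column data.
ANALYZE TABLE emp COMPUTE STATISTICS;

-- For very large tables, an estimate is usually sufficient:
ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 10 PERCENT;
```

A cron job would simply run statements like these on a schedule as table volumes grow.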
Indexes:
• Have indexes been defined on all foreign keys?
• Have indexes been created where required?
• When using indexed columns in where clauses, are both sides of the comparison the same type, i.e. char = char or numeric = numeric? If a character column is compared to a numeric value, the character column is automatically converted to numeric. This can cause an indexed column NOT to be used.
• Has the use of functions been avoided on index columns?
• Indexes are not used if the indexed column is part of a function or expression, e.g. ‘salary + 12’. The same applies to ‘NOT’ and ‘OR’ predicates.
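These index pitfalls can be sketched as follows, assuming a hypothetical EMP table with indexes on SALARY and EMPNO:

```sql
-- Wrapping the indexed column in an expression prevents index use:
SELECT * FROM emp WHERE salary + 12 > 50000;   -- index not used

-- Rewriting so the indexed column stands alone allows index use:
SELECT * FROM emp WHERE salary > 50000 - 12;

-- Implicit conversion: if EMPNO is a character column compared to
-- a number, Oracle converts the COLUMN, disabling its index:
SELECT * FROM emp WHERE empno = 7369;          -- index not used
SELECT * FROM emp WHERE empno = '7369';        -- index usable
```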
Distributed:
• When doing distributed calls, are you using ‘arrays’ to return multiple values over the network at one time? In addition, if doing a lot of distributed calls, have the SA look at the size of the SDU.
How to Set Array Sizes:
o Oracle Call Interface: uses the OFEN parameter to specify the number of rows
o SQL*Plus: use SET ARRAYSIZE
o SQL*Loader: use the ROWS parameter
o SQL*Forms: uses array processing by default
o Pro*COBOL: OCCURS xx TIMES
o Pro*C: name[xx]
• Is the multi-threaded server (MTS) feature of Oracle being used?
• This should only be used if more than 500 concurrent users are expected.
• Do you require or provide data for other database(s)?
• If so, which database(s) and in what form? Will links need to be set up?
• Are there dependencies that the DBA will need to be aware of?
• Do you require snapshots? If so, are they simple snapshots?
• The use of snapshot logs with simple snapshots usually performs faster (fast refresh).
• What version of SQL*Net are you using?
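A simple snapshot with a snapshot log can be sketched as below; the table, snapshot, and database link names (emp, emp_copy, prod_link) are assumptions for illustration:

```sql
-- Snapshot log on the master table enables fast (incremental) refresh.
CREATE SNAPSHOT LOG ON emp;

-- Simple snapshot refreshed daily over an assumed database link.
CREATE SNAPSHOT emp_copy
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1
  AS SELECT * FROM emp@prod_link;
```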
Design/Sizing:
• Have heavily inserted tables had their ‘freelists’ adjusted?
• This should be set to the number of concurrent inserts you would expect against the table.
• Have you calculated production sizes for all of the tables? In addition, have you anticipated future growth? If so, please supply documentation.
• Will a lot of sorts be needed, i.e. order bys or index creations? Can most be done in memory? If not, would your application benefit from multiple temp tablespaces?
• If there are long queries on a table with many updates, please provide as much info as possible so that the OPTIMAL parameter can be set correctly on the rollback segment to avoid the dreaded ‘snapshot too old’ message.
• What is a good timeout value for your application?
• A system parameter is required to detect when a connection is no longer valid.
• How many updates/inserts/deletes do you expect to do per hour/day?
• This information is required to size both the rollback and the redo logs correctly.
• How many users do you expect? How many concurrent? Do you want roles set up, i.e. groups of privileges assigned to a role which can then be granted to multiple users? Is each user going to have their own unique Oracle account?
• Do you have any special data retention requirements?
o Is there a preferable time to take backups?
o The default is to take an export and backup nightly.
o These are saved for 15 days.
• Do you have any special monitoring requirements?
This would include application processes and logs that you might want monitored.
• Do you have any web server requirements?
• Do you use hash clusters or partition views?
• Which tables have high update activity with rows that may increase?
• These tables should have a higher pctfree value than the default.
• Which tables have either low update activity, or row sizes that do not increase? These tables can be set with a pctfree of 5.
• All ‘not null’ columns should be placed at the beginning of a table to save row storage.
• All ‘long’ columns should be placed at the end of a table.
• Primary key columns should be placed at the beginning of a table.
• Is referential integrity implemented using the database rather than the application? If not, why?
• Are your business rules implemented in stored objects or within the application? If in the application, why? Stored objects allow a single version of the code and allow the server to execute all of it.
• Will you require any special cron jobs to be set up?
• Do you have any application documentation which you can provide me?
• ERD diagrams? Implementation plan? etc.
• Have you addressed the need to purge off data?
• If so, how?
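Several of the sizing points above can be sketched in DDL. All names (orders, order_entry, scott) are hypothetical, and the values shown are examples, not recommendations for your tables:

```sql
-- Heavily inserted/updated table whose rows grow: raise PCTFREE
-- above the default and set FREELISTS near the expected number
-- of concurrent inserters. NOT NULL columns first, LONG column last.
CREATE TABLE orders (
  order_id NUMBER       NOT NULL,   -- primary key column first
  status   VARCHAR2(10) NOT NULL,
  notes    LONG                     -- LONG column at the end
)
PCTFREE 20
STORAGE (FREELISTS 4);

-- A role groups privileges so they can be granted to many users:
CREATE ROLE order_entry;
GRANT SELECT, INSERT, UPDATE ON orders TO order_entry;
GRANT order_entry TO scott;
```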
Loads:
• Will you require large rollback segments?
• If so, and if possible, use the ‘set transaction use rollback segment’ statement to specify the large rollback segment.
• If you require a large load, are you creating the indexes after the data is loaded?
• If possible, sort data before loading in the sequence of the most commonly used index.
• If using imports:
o Is the buffer set to at least 10m?
o For a large table import, is commit=y specified?
o This commits after each array (buffer). This can decrease the rollback size required and can improve performance of very large imports.
o Is the table sized properly?
o Export by default consolidates extents. The new table must be large enough for the consolidated table.
• If using sqlload:
o Can you use direct path?
o If so, can you use unrecoverable?
o If using conventional, set the bindsize/rows up as high as possible.
• If doing custom loads:
o Set the commit count reasonably high if possible, i.e. between 100 and 1000.
o This prevents the redo log buffer from being flushed to disk too often if set too low. Setting it too high can affect the size of the rollback segment required.
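A custom load with a commit count can be sketched in PL/SQL as below. The tables and the rollback segment name (staging_orders, orders, big_rbs) are assumptions for illustration:

```sql
-- Direct a large batch at an assumed large rollback segment
-- (must be the first statement of the transaction):
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;

DECLARE
  CURSOR c IS SELECT order_id, status FROM staging_orders;
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN c LOOP
    INSERT INTO orders (order_id, status)
    VALUES (r.order_id, r.status);
    v_count := v_count + 1;
    IF MOD(v_count, 500) = 0 THEN
      COMMIT;                -- commit count between 100 and 1000
    END IF;
  END LOOP;
  COMMIT;                    -- final commit for the remainder
END;
/
```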
Application:
• What are the business hours for your application?
• What processes are required for your app on the server?
• Do you have startup and shutdown scripts?