with the best GUI interface. There are many additional tools available that help with maintaining the server. When working with millions of records and a huge amount of data, there are some important tips for improving SQL operations that developers often ignore, and users then face problems such as slow SQL operations. So here we have some important tips to improve the performance of SQL operations.
MySQL is a client/server implementation that consists of a server daemon and many different client programs and libraries. Here are some very useful tips for all MySQL DBAs, as suggested by MySQL community experts.
1. Kaj (Most Excellent Obvious Facilitator): index stuff.
2. Ronald: don't index everything.
3. Use benchmarking
4. Minimize traffic by fetching only what you need.
5. Use paging/chunked data retrieval to limit result set size.
6. Don't use SELECT *
7. Be wary of lots of small quick queries if one longer query can be more efficient.
8. Use EXPLAIN to profile the query execution plan, for example:
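A minimal sketch; the orders table and customer_id column here are hypothetical:

    EXPLAIN SELECT order_id, total
    FROM orders
    WHERE customer_id = 42;
    -- Check the type, key, and rows columns of the output:
    -- type=ALL means a full table scan; an index on customer_id
    -- should change it to ref and cut the rows examined.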
9. Use the Slow Query Log (always have it on!).
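One way to turn it on, assuming MySQL 5.1 or later where these variables can be set at runtime; the one-second threshold is just an example:

    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 1;  -- log statements slower than 1 second
    -- or persist the same settings in my.cnf under [mysqld]:
    --   slow_query_log  = 1
    --   long_query_time = 1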
10. LIMIT m,n may not be as fast as it sounds.
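The server still has to generate and discard the first m rows before returning the n you asked for. A keyset ("seek") alternative, sketched against a hypothetical articles table with an indexed id:

    -- Offset pagination: reads and throws away 100000 rows first
    SELECT id, title FROM articles ORDER BY id LIMIT 100000, 10;

    -- Seek method: remember the last id from the previous page
    SELECT id, title FROM articles WHERE id > 100000 ORDER BY id LIMIT 10;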
11. Don't use ORDER BY RAND() if you have more than ~2K records.
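ORDER BY RAND() assigns a random value to every row and sorts them all. One common workaround, assuming a table with a dense integer primary key (note it is biased where ids have gaps):

    -- Pick a random point in the id range, then take the next row
    SELECT u.id, u.name
    FROM users u
    JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM users)) AS rand_id) r
      ON u.id >= r.rand_id
    ORDER BY u.id
    LIMIT 1;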
12. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data.
13. Avoid wildcards at the start of LIKE queries.
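A leading wildcard defeats any index on the column, while an anchored prefix can use one; the users.email column is hypothetical:

    -- Cannot use an index on email: every row must be examined
    SELECT id FROM users WHERE email LIKE '%@example.com';

    -- Can use an index on email: the prefix is anchored
    SELECT id FROM users WHERE email LIKE 'alice@%';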
14. Avoid correlated subqueries in SELECT and WHERE clauses (and try to avoid IN).
15. Tune your config params.
16. No calculated comparisons -- isolate indexed columns.
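Wrapping an indexed column in an expression keeps MySQL from using the index; a sketch with a hypothetical created_at column:

    -- Index on created_at cannot be used: the column sits inside a function
    SELECT id FROM orders WHERE YEAR(created_at) = 2007;

    -- Index on created_at can be used: the column stands alone
    SELECT id FROM orders
    WHERE created_at >= '2007-01-01' AND created_at < '2008-01-01';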
17. innodb_flush_log_at_trx_commit=0 can help with slave lag.
18. ORDER BY and LIMIT work best with equalities and covered indexes.
19. Isolate workloads: don't let administrative work (e.g. backups) interfere with customer performance.
20. Use optimistic locking, not pessimistic locking. Try to use a shared lock, not an exclusive lock: LOCK IN SHARE MODE vs. SELECT ... FOR UPDATE.
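For reference, the two row-lock flavors in InnoDB, issued inside a transaction; the accounts table is hypothetical:

    -- Shared lock: other transactions can still read the row
    SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

    -- Exclusive lock: blocks other writers and locking readers
    SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;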
21. Use row-level instead of table-level locking for OLTP workloads.
22. Know your storage engines and what performs best for your needs; know that different ones exist.
23. Optimize for data types and use consistent data types. Use PROCEDURE ANALYSE() to help determine if you need smaller ones.
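PROCEDURE ANALYSE() (available in MySQL 5.x) examines a result set and suggests an optimal type for each column; the orders table is hypothetical:

    SELECT * FROM orders PROCEDURE ANALYSE();
    -- The Optimal_fieldtype column suggests, e.g., SMALLINT UNSIGNED
    -- where you declared INT, or ENUM for low-cardinality strings.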
24. Separate text/blobs from metadata; don't put text/blobs in results if you don't need them.
25. If you can, compress text/blobs.
26. Compress static data.
27. Don't back up static data as often.
28. Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the first part finds the IDs and then uses them to fetch the rest.)
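A sketch of that "find the IDs first, then fetch the wide rows" pattern, with a hypothetical posts table:

    -- Sort and page on the narrow index first, so the BLOB column
    -- is only read for the 10 rows that survive the LIMIT.
    SELECT p.id, p.title, p.body
    FROM posts p
    JOIN (SELECT id FROM posts ORDER BY created_at DESC LIMIT 10) latest
      ON p.id = latest.id;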
29. Enable and increase the query and buffer caches if appropriate.
30. ALTER TABLE ... ORDER BY can take chronological data and re-order it by a different field.
31. InnoDB ALWAYS keeps the primary key as part of each secondary index, so do not make the primary key very large and be careful of redundant columns in an index; keeping indexes lean makes queries faster.
32. Do not duplicate indexes
33. Utilize different storage engines on master/slave, e.g. if you need full-text indexing on a table.
34. The BLACKHOLE engine and replication is much faster than FEDERATED tables for things like logs.
35. Design sane query schemas. Don't be afraid of table joins; often they are faster than denormalization.
36. Don't use boolean flags
37. Use a clever key and ORDER BY instead of MAX.
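With an index on (player_id, score), both of the hypothetical queries below can be answered from one end of the index, and the ORDER BY form can also return the rest of the row:

    -- Instead of: SELECT MAX(score) FROM games WHERE player_id = 7;
    SELECT game_id, score
    FROM games
    WHERE player_id = 7
    ORDER BY score DESC
    LIMIT 1;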
38. Keep the database host as clean as possible. Do you really need a windowing system on that server?
39. Utilize the strengths of the OS.
40. Hire a MySQL (tm) Certified DBA.
41. Know that there are many consulting companies out there that can help, as well as MySQL's Professional Services.
42. Config variables & tips:
    1. Use one of the supplied config files.
    2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory variables.
    3. Be aware of global vs. per-connection variables.
    4. Check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up); see the example after this list.
    5. Be aware of swapping, esp. with Linux ("swappiness"); bypass the OS file cache for InnoDB data files with innodb_flush_method=O_DIRECT if possible (this is also OS-specific).
    6. Defragment tables, rebuild indexes, do table maintenance.
    7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware write cache controller.
    8. More RAM is good, and so is faster disk speed.
    9. Use 64-bit architectures.
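A quick sketch of checking scope with the 5.0+ GLOBAL/SESSION syntax (the variable and counter shown are just examples):

    SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
    SHOW SESSION STATUS LIKE 'Created_tmp_disk_tables';
    -- Compare GLOBAL and SESSION scope to see whether a setting or a
    -- counter applies server-wide or only to the current connection.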
43. Know when to split a complex query and join smaller ones.
44. Debugging sucks, testing rocks!
45. Delete small amounts at a time if you can.
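A chunked-delete sketch; the table and cutoff are hypothetical:

    -- Repeat until it affects 0 rows; each pass holds locks only briefly
    DELETE FROM sessions
    WHERE last_seen < '2007-01-01'
    LIMIT 1000;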
46. Archive old data -- don't be a pack-rat! Two common engines for this are ARCHIVE tables and MERGE tables.
47. Use INET_ATON and INET_NTOA for IP addresses, not CHAR or VARCHAR.
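These built-ins map dotted-quad IPv4 addresses to unsigned integers and back, so a 4-byte INT UNSIGNED replaces a VARCHAR(15); the visits table is hypothetical:

    CREATE TABLE visits (ip INT UNSIGNED NOT NULL);
    INSERT INTO visits VALUES (INET_ATON('192.168.0.10'));
    SELECT INET_NTOA(ip) FROM visits;   -- '192.168.0.10'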
48. Make it a habit to REVERSE() email addresses, so you can easily search domains.
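Storing the reversed address turns a domain search from a leading-wildcard LIKE (see tip 13) into an index-friendly prefix match; the email_rev column is hypothetical:

    INSERT INTO users (email_rev) VALUES (REVERSE('alice@example.com'));

    -- Find everyone at example.com with an anchored prefix:
    SELECT REVERSE(email_rev) AS email
    FROM users
    WHERE email_rev LIKE CONCAT(REVERSE('@example.com'), '%');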
49. Use LOAD DATA instead of INSERT for bulk loads.
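LOAD DATA INFILE parses a file on the server side and is typically much faster than many single-row INSERTs; the path and table here are placeholders:

    LOAD DATA INFILE '/tmp/users.csv'
    INTO TABLE users
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;   -- skip the header row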
50. Increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable).
51. Look up the memory tuning parameters for on-insert caching.
52. Increase the temp table size in a data warehousing environment (default is 32MB) so it doesn't write to disk (this is also constrained by max_heap_table_size, default 16MB).
53. Normalize first, and denormalize where appropriate.
54. Databases are not spreadsheets, even though Access really really looks like one. Then again, Access isn't a real database.
55. In 5.1 a BOOL/BIT NOT NULL type is 1 bit; in previous versions it's 1 byte.
56. A NULLable column can take more room to store than NOT NULL.
57. Choose appropriate character sets & collations -- UTF16 will store each character in 2 bytes whether it needs it or not, and latin1 is faster than UTF8.
58. Make similar queries consistent so the cache is used.
59. Have good SQL query standards.
60. Don't use deprecated features.
61. Use triggers wisely.
62. Run in SQL_MODE=STRICT to help identify warnings.
63. Turning an OR on multiple indexed fields (<5.0) into a UNION may speed things up (with LIMIT); after 5.0 the index_merge optimization should pick this up.
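A sketch of the rewrite, assuming hypothetical users columns that each carry their own index:

    -- Pre-5.0, this OR could not use both indexes at once:
    SELECT id FROM users WHERE email = 'a@x.com' OR phone = '555-0100';

    -- Rewritten so each branch uses its own index (UNION removes duplicates):
    SELECT id FROM users WHERE email = 'a@x.com'
    UNION
    SELECT id FROM users WHERE phone = '555-0100';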
64. Put the /tmp dir on a battery-backed write cache.
65. Consider battery-backed RAM for InnoDB logfiles.
66. ... reference points can be calculated.
67. As your data grows, indexing may change (cardinality and selectivity change), and structure may want to change. Make your schema as modular as your code. Make your code able to scale. Plan for and embrace change, and get developers to do the same.
68. Pare down cron scripts.
69. Create a test environment.
70. Try out a few schemas and storage engines in your test environment before picking one.
71. Use HASH indexing for indexing across columns with similar data prefixes.
72. Use PACK_KEYS=1 (a MyISAM table option) for int data.
73. Don't use COUNT(*) on InnoDB tables for every search; do it only occasionally and/or use summary tables, or, if you need the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS().
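The pattern in full, against a hypothetical articles table:

    SELECT SQL_CALC_FOUND_ROWS id, title
    FROM articles
    WHERE published = 1
    LIMIT 10;

    SELECT FOUND_ROWS();   -- total matching rows, ignoring the LIMIT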
74. Use --safe-updates for the client.
75. Redundant data is redundant.
76. Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first.
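An upsert sketch; page_hits is hypothetical, and page_id must be a PRIMARY or UNIQUE key for the ON DUPLICATE KEY clause to fire:

    -- One statement instead of SELECT, then INSERT or UPDATE:
    INSERT INTO page_hits (page_id, hits)
    VALUES (42, 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1;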
77. Use a groupwise maximum instead of subqueries.
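One way to fetch the row holding each group's maximum without a correlated subquery, using the self-join pattern from the MySQL manual's shop example:

    -- Highest price per article: keep rows for which no other row
    -- with the same article has a higher price.
    SELECT a.article, a.dealer, a.price
    FROM shop a
    LEFT JOIN shop b
      ON a.article = b.article AND a.price < b.price
    WHERE b.article IS NULL;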
78. Be able to change your schema without ruining the functionality of your code.
79. Source-control your schema and config files.
80. For LVM InnoDB backups, restore to a different instance of MySQL so InnoDB can roll forward.
81. Use multi_query if appropriate to reduce round-trips.
82. Partition appropriately.
83. Partition your database when you have real data.
84. Don't use DISTINCT when you have or could use GROUP BY.
85. Use proper data partitions.
86. For Cluster: start thinking about Cluster *before* you need it.
87. Insert performance:
88. Batch INSERT and REPLACE statements.