Frequently Asked Questions. Suggestions for additional entries are welcome.
Q. How do I close the command line/command prompt after completing my activities?
Linux: press ctrl+c to release resources and take your PID off the top/htop list of activities, then type 'exit' to close the shell.
Windows: type 'exit' at the command prompt and press Enter, or close the window you are working in (top right).
There are likely other ways; if you know an easy one, please email suggestions
to firstname.lastname@example.org with a Subject of 'faq suggestion'.
A hard shutdown is another way, but please avoid that solution.
Q. What are my options for computer-to-computer connectivity?
Used by Back To Basics, Inc. at some point in the past.
Platforms supported as of 3/2018: Windows, Mac.
When will Linux be supported by join.me?
Check their web site, bottom of page: select Knowledge Base and search for linux.
Platforms supported as of 3/2018: Windows (others to be researched).
Platforms supported as of 3/2018: Windows, Linux, Android, Mac, iOS, Web, Xbox One.
Platforms supported as of 3/2018: Windows (others to be researched).
Q. General log use
SET GLOBAL general_log=1; to turn ON
after 15 seconds (or however long you want the logging to run),
SET GLOBAL general_log=0; to turn OFF
On a busy system, even 1 minute of the General Log will produce a very large file.
FAILURE to turn OFF your general log will fill your storage media: the file grows until you run out of disk space.
For additional information refer to your version's reference manual.
SELECT @@general_log_file; to view the name of your general query log file.
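The sequence above can be sketched end to end; the 15-second window is illustrative, not a prescription:

```sql
-- Confirm where the general log will be written before enabling it.
SELECT @@log_output, @@general_log_file;

-- Turn the general log ON, capture briefly, then turn it OFF.
SET GLOBAL general_log = 1;
-- ... let the workload run for your chosen window (e.g. 15 seconds) ...
SET GLOBAL general_log = 0;

-- Verify it is really OFF before walking away.
SELECT @@general_log;
```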
Q. How can I find JOINS or QUERIES not using indexes?
Last Updated 2019-12-17
From an SSH login with a MySQL Client connection, at the MySQL prompt,
check the current value of each of these variables and modify as needed with SET GLOBAL variable_name=xxx;
log_output includes FILE
min_examined_row_limit is 1, to eliminate useless clutter in the Slow Query log
datadir, for the path to your log file
slow_query_log_file has a valid/accessible filename
slow_query_log is ON or 1 (for ON)
SELECT @@long_query_time; you will need to know your current value in an HOUR, so WRITE it on PAPER
To find and log ONLY 'not using indexes' for ONE HOUR,
SET GLOBAL long_query_time=50000; to avoid commingling slow queries with 'not using indexes' entries in the log
SET GLOBAL log_queries_not_using_indexes=1; for ON, to find processes not using indexes
SET AN ALARM CLOCK for ONE HOUR
At the end of the HOUR,
SET GLOBAL log_queries_not_using_indexes=0; for OFF
SET GLOBAL long_query_time=(value discovered above);
FLUSH LOGS; to ensure the Slow Log is written to media
SELECT @@log_queries_not_using_indexes; to verify it actually is OFF
SELECT @@long_query_time; to verify you are back to the value discovered before this HOUR started
Your 'not using indexes' list will be at the end of the slow query log file discovered above.
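The one-hour capture above looks like this sketch; the restored value of 10 is illustrative only, use the value you wrote down on paper:

```sql
-- Record the current threshold so it can be restored later.
SELECT @@long_query_time;          -- e.g. 10 (write this down)

-- Capture ONLY 'not using indexes' entries for one hour.
SET GLOBAL long_query_time = 50000;
SET GLOBAL log_queries_not_using_indexes = 1;

-- ... wait one hour ...

SET GLOBAL log_queries_not_using_indexes = 0;
SET GLOBAL long_query_time = 10;   -- the value discovered above
FLUSH LOGS;                        -- ensure the Slow Log is written to media

-- Verify both settings are back where they belong.
SELECT @@log_queries_not_using_indexes, @@long_query_time;
```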
If you would like assistance with corrective action, please select
Shop Analysis Services from our menu and consider ordering our
Slow Query Log Analysis service.
Q. How can I reduce processes 'not using indexes'?
Ensure that OPTIMIZE [NO_WRITE_TO_BINLOG] TABLE tbl_name; (high record count = long table LOCK)
or ANALYZE [NO_WRITE_TO_BINLOG] TABLE tbl_name;
has been completed recently, to ensure table statistics are current.
Use SHOW CREATE TABLE tbl_name; results to be aware of existing indexes.
Then create appropriate index(es) to enable higher performance.
Tips for indexes: for a WHERE lcol=rcol condition, have an index on each such column for high speed matching.
For JOIN tbl_name ON lJcol=rJcol, have an index on each join column for high speed join matching.
For complex conditions, consider a composite (covering) index for high speed.
JOIN and WHERE column Data Types should be identical for high speed.
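As a sketch of those tips, using hypothetical table and column names (orders, customer_id, and status are assumptions for illustration, not from this FAQ):

```sql
-- Single-column index supporting: WHERE customer_id = ?
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Composite index supporting: WHERE customer_id = ? AND status = ?
-- Column order matters: the leftmost column should appear in the WHERE clause.
CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);

-- Compare EXPLAIN output before and after adding the index.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';
```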
Observation: Resolving ONE frequently used process per WEEK will be worth your
one-time analysis and index storage required for quicker query completion in the future.
If you find this useful, please email sharable comments to email@example.com
and let us know how much we helped you improve your instance.
Additional assistance is available if you Order Slow Query Log Analysis from us.
Q. OPTIMIZE - Why should I use OPTIMIZE?
The same reason someone needs to take out the trash, occasionally.
When rows are added to tables, index entries need somewhere to live.
This process involves 'splitting pages' and keeping track of where they are.
If your data table is added to or changed, pages may have been split
without your being aware of what has been done for you behind the scenes.
OPTIMIZE TABLE tbl_name will take care of getting your data back together for improved performance. Table will be LOCKED during the process.
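A minimal sketch; note that for InnoDB tables MySQL maps OPTIMIZE TABLE to a table rebuild and says so in the output:

```sql
-- Rebuild the table and its indexes; the table is locked while this runs,
-- so schedule it for a maintenance window.
OPTIMIZE TABLE tbl_name;
-- For InnoDB you will typically see a note such as:
--   "Table does not support optimize, doing recreate + analyze instead"
```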
Q. Are there faster alternatives?
ALTER TABLE tbl_name ENGINE=INNODB
is a good candidate.
If you have timing comparisons, please share them.
Q. When should I OPTIMIZE tables?
During your maintenance window (when activity is expected to be minimal).
Our Utility Script findfragtables.sql will list your tables from most fragmented first to least fragmented last.
Using a root login, SELECT @@innodb_file_per_table; to determine the current setting. If your result is ON, continue. If OFF, please schedule a time to Skype CHAT with us.
If the total of DATA_Space and FREE_Space is under 10MB, you should consider the table to be within tolerable limits.
If your FREE_Space is GREATER THAN 10% of your DATA_Space, consider using OPTIMIZE. Running this script weekly should guide you toward automating OPTIMIZE, via weekend CRON jobs, for those tables that show up every week as candidates.
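A query along these lines (a sketch of what findfragtables.sql likely does; the 10MB and 10% thresholds come from the guidance above) can list the candidates:

```sql
-- List InnoDB tables by free (reclaimable) space, most fragmented first.
SELECT  table_schema,
        table_name,
        data_length  AS data_space,
        data_free    AS free_space
FROM    information_schema.tables
WHERE   engine = 'InnoDB'
  AND   data_length + data_free > 10 * 1024 * 1024   -- skip tables under 10MB
  AND   data_free > data_length * 0.10               -- free space > 10% of data
ORDER BY data_free DESC;
```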
Q. Where can I find hidden index information?
SHOW EXTENDED INDEX FROM schema.tbl_name;
The EXTENDED option was introduced in version 8.0.
Q. Why is MySQL slow?
Poorly tuned instance (or never tuned) - our quarterly LOYALTY services will help.
Fragmented tables that need to be OPTIMIZEd.
Our Utility Scripts+ includes findfragtables.sql to identify fragmented tables.
Redundant indexes on tables that need to be considered for index removal.
Our Utility Scripts+ includes find-redundant-indexes.sql to identify the index schema and name.
Table(s) with no PRIMARY KEY.
A) PRIMARY KEY column attributes MUST include the UNIQUE and NOT NULL attributes for best performing tables.
Details to check on a reasonably active table, (maybe a table showing up in Slow Query Log frequently)
SHOW INDEX FROM tbl_name;
In the first detail line, if the Non_unique column = 0 (UNIQUE key values are REQUIRED),
the Key_name column = PRIMARY,
AND the Column_name = the column name you would choose,
then this aspect of your table cannot be improved.
If you have more than 5 indexes on a table, you are likely 'over indexed', and
this will cause additional CPU use and extended time for
DELETE, INSERT, MERGE and UPDATE operations. Please review available tutorials/books on database index design and maintenance requirements for the many reasons to avoid a table with more than 5 indexes. We have seen 60 indexes on one table many times and, yes, the workload is more significant than expected. And then there are the space requirements as well.
As you review the index list, if you see a
Column_name value that you can verify is NEVER used in a WHERE, JOIN, or GROUP BY (there could be more clauses) AND on that line Seq_in_index is 1, you have an UNUSED index that only wastes CPU and space on your storage media. Carefully consider eliminating this one index, remembering that it could still be used for annual or quarterly reporting.
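On MySQL 5.7+ with the sys schema installed, these views surface the same candidates (a sketch; not necessarily how find-redundant-indexes.sql does it):

```sql
-- Indexes with no recorded use since the server last started.
SELECT * FROM sys.schema_unused_indexes;

-- Indexes whose columns are a redundant prefix of another index.
SELECT table_schema, table_name,
       redundant_index_name, dominant_index_name
FROM   sys.schema_redundant_indexes;
```

Remember the caveat above: "unused" since the last restart does not cover annual or quarterly reporting.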
SHOW CREATE TABLE tbl_name;
Q. Is the engine displayed InnoDB?
If it is not, go to some other technical resource,
unless you want to talk about migration to InnoDB.
Q. Does SCT have the PRIMARY key listed that YOU would choose?
If not, all reads are inhibited from performing well.
Behind the scenes, InnoDB must have a PRIMARY key to minimize system activity, so when you do not define one it creates a hidden clustered index as best the software can.
Q. Do you know the NATURAL key for the table?
If not, think about a filing cabinet: how would you find your data?
Is the candidatecolumn UNIQUE (if it is listed as PRIMARY, the column is assumed to be UNIQUE in InnoDB), and does it have the NOT NULL attribute? This COULD be your PRIMARY key.
To VERIFY the column is unique,
SELECT candidatecolumn, COUNT(*) FROM tbl_name GROUP BY candidatecolumn HAVING COUNT(*) > 1 LIMIT 20;
the LIMIT is only to conserve your time; there could be thousands.
If anything is listed, those are your non-unique values, with a count for each specific value.
If anything is listed, this can NOT be your PRIMARY KEY; stop reading now for this purpose.
To VERIFY every row is NOT NULL in the column,
SELECT COUNT(*) FROM tbl_name WHERE candidatecolumn IS NULL;
the result must be ZERO for you to ALTER TABLE the column to NOT NULL.
SELECT COUNT(*) FROM tbl_name WHERE candidatecolumn IS NOT NULL;
You are now aware of how many rows were counted for this table.
SHOW INDEX FROM tbl_name; to get a row count from MySQL's perspective
If it is not the same number as the NOT NULL count, Houston, we have a problem, AND you cannot proceed with managing this table to get a real PRIMARY key. For documentation of the current index structure and the number of rows available from the OPTIMIZER's perspective, print the report, date it, and file it in your date tickler of things to follow up on in 30 - 90 days.
Q. How do I make the table perform better?
After practicing in your own test/dev environment,
GET your own tbl_name BACKUP that you know HOW to RESTORE, and practice; if not, you could be sorry.
ALTER the candidatecolumn to be UNIQUE and NOT NULL.
ALTER the candidatecolumn to be the PRIMARY key.
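A sketch of those two ALTERs, assuming a hypothetical INT column (MODIFY must repeat the column's real data type; substitute yours):

```sql
-- Make the candidate column NOT NULL (repeat its real data type here).
ALTER TABLE tbl_name MODIFY candidatecolumn INT NOT NULL;

-- Promote it to the PRIMARY KEY (which is UNIQUE by definition).
ALTER TABLE tbl_name ADD PRIMARY KEY (candidatecolumn);
```

Practice both statements against your test/dev copy first, as advised above.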
Q. So, I have to do all these things?
No, you can just leave it alone and it will run as is, just slower than if these details were followed to enable higher performance; your read data counts will also be lowered in SHOW GLOBAL STATUS reporting.
Another side effect of completing this is a likely reduction in the size of each index you have defined for this table.
Wrap Up 'Why is MySQL Slow?' - Last detail:
In 30 - 90 days from today (put the item on your calendar, please), run SHOW INDEX FROM tbl_name; to get current statistics and cardinality from MySQL's perspective, for comparison with the SHOW INDEX results you filed earlier.
Q. Reviews: suggested formats
My 16GB 6 core HDD Linux WordPress server was always showing 400% CPU busy. After applying suggestions from the LOYALTY Global Status and Global Variable Analysis DELTA, we are usually under 20% CPU busy.
Our 32GB 6 core SSD Linux Drupal server was struggling with 2nd-trip display of a catalog page of 20 items for sale. The catalog has ~500,000 items online. The database contains ~60 million rows across 416 tables. Typical response time to next/prev page was 30-40 seconds. After applying 10 Suggestions, typical response time is now 1-2 seconds. Visit our site at performanceparts.com when you have a moment, please.
Our 320GB 40 core SSD Linux MySQL server was struggling with erratic context-switching delays. Repeated testing of the same query with the profiler indicated nnn context switches, usually before the 5th run of the query. After applying 10 Suggestions, typical context switches were down to nnn as the highest count by the 5th run of the query.
Q. Timestamp data type considerations
As of 2020 01 26.
Your columns with a data type of timestamp likely have a range of 1970-01-01 00:00:00 through 2038-01-19 03:14:07. Why do you care? Because if your software survives to (or near) the end of that range, it will be unable to store a later date, and your remaining customers will be quite unhappy with a non-functioning system.
Possible alternative: use a data type of datetime, whose range goes through the year 9999; by then you will not be able to care, due to natural causes.
Similar data types and functions that could impact your application: current_timestamp, localtimestamp, unix_timestamp, utc_timestamp, or anything with %timestamp% in the name, including default values with timestamp in the name. If you use them, test them to be aware of the limits.
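A quick way to see the limits for yourself; the table and column names here are illustrative, and the exact failure (error vs. zero-date) depends on your sql_mode and session time zone:

```sql
-- Hypothetical table to probe the two ranges.
CREATE TABLE ts_demo (ts TIMESTAMP NULL, dt DATETIME NULL);

-- Stores fine: the last second in the TIMESTAMP range (UTC).
INSERT INTO ts_demo (ts) VALUES ('2038-01-19 03:14:07');

-- One second later is out of range for TIMESTAMP,
-- but well within range for DATETIME:
INSERT INTO ts_demo (dt) VALUES ('2038-01-19 03:14:08');
```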