Three MySQL questions

##sample 1  How to use logrotate to rotate MySQL logs (the error log, general log, etc.)

MySQL Logs Do Not Get Rotated by logrotate Cron Job

On most Linux systems, logrotate is used to rotate log files. When used to rotate MySQL logs, logrotate needs to run "mysqladmin flush-logs" before rotating.

The file /etc/logrotate.d/mysql is the logrotate script installed by the MySQL server RPM, and by default it does not contain any login credentials. The credentials for root should be stored in the hidden configuration file /root/.my.cnf .

First, create /root/.my.cnf and make it accessible only by root:

# touch /root/.my.cnf

# chmod go-rwx /root/.my.cnf

Then edit the file and make it look like this: 

[mysqladmin]

user=flushuser

password=flushuser_password

This will allow mysqladmin to be run without entering a username and password. However, when logrotate is run from a cron job, mysqladmin will fail because crond does not set the $HOME environment variable, so mysqladmin cannot find the .my.cnf file.

The solution is to explicitly point mysqladmin to the hidden /root/.my.cnf by replacing all references to mysqladmin in /etc/logrotate.d/mysql with:

mysqladmin --defaults-extra-file=/root/.my.cnf 
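For illustration, the resulting logrotate script might look like the following sketch. The log paths and the surrounding directives are assumptions (they vary between MySQL packages and versions); only the --defaults-extra-file change is the point here:

```conf
# Hypothetical /etc/logrotate.d/mysql -- paths and directives are examples only
/var/log/mysql/*.log {
        daily
        rotate 7
        missingok
        compress
        postrotate
                # Point mysqladmin at root's credentials explicitly so this
                # also works from cron, where $HOME is not set
                if test -x /usr/bin/mysqladmin && \
                   /usr/bin/mysqladmin --defaults-extra-file=/root/.my.cnf ping > /dev/null 2>&1
                then
                        /usr/bin/mysqladmin --defaults-extra-file=/root/.my.cnf flush-logs
                fi
        endscript
}
```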

As of the MySQL 5.6 series, the mysql_config_editor command and the --login-path option for client programs were added. With this facility, client programs can log in without the password being stored in plain text, which is more secure and therefore recommended. Since client programs by default read the login path with the same name as the program, the mysqladmin command reads the login path named "mysqladmin" without --login-path having to be specified. The following commands allow mysqladmin, executed by the OS root user, to log in to the server without specifying a user name and password.

shell> su -
shell# mysql_config_editor --login-path=mysqladmin --user=flushuser --password

I recommend creating a separate user on the MySQL server for rotating logs, named flushuser in the example above. Using a user with the least privileges is more secure than using a highly privileged user such as root. The following commands are an example of creating such a user; only the RELOAD privilege is required to rotate logs.

mysql> CREATE USER flushuser@localhost IDENTIFIED BY 'flushuser_password';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT RELOAD ON *.* TO flushuser@localhost;
Query OK, 0 rows affected (0.00 sec)

 
This will allow the logrotate script to work via cron.

For example:

Deleting the general query log

In MySQL, the mysqladmin command can be used to start a new general query log. The new general query log directly replaces the old one, so there is no need to delete the old log manually.

The syntax of the mysqladmin command is as follows:

mysqladmin -uroot -p flush-logs

 

 

##sample 2  How to estimate MySQL backup duration and its influencing factors, using the mysqlbackup tool

How to Investigate Fluctuating Duration of MySQL Enterprise Backup backups?

 

It is natural that there is some variation in the execution time. For example, MySQL Server and MySQL Enterprise Backup will to some extent compete for the same resources (disk, CPU, and possibly network), so it is impossible to have identical execution times for each backup.

There are several potential reasons why there may be significant variation in the time it takes MySQL Enterprise Backup (MEB) to create a backup. Some of the common reasons are discussed below, one per section:

  • Lock Time: How long it takes to flush the tables and obtain a read lock.
  • Amount of Data to Copy: How much work MySQL Enterprise Backup needs to perform.
  • Operating System I/O Cache: The operating system will cache I/O. The effectiveness of the cache influences how fast MySQL Enterprise Backup can read the data.
  • Resources: Whether the system is close to exhausting its resources in one or more areas.

Lock Time

MySQL Enterprise Backup requires locking all tables (using FLUSH TABLES WITH READ LOCK) while copying non-InnoDB tables, getting the binary log file and position, etc. The duration of this depends on two factors:

  • The time it takes to flush the tables and get the lock.
  • The time it takes to perform all of the actions that must be done under lock. See Note 1380266.1 for details of the actions performed.

The time to complete the table flush and get the lock mostly depends on how long it takes for all running queries to complete. That is, if there is a query that needs another hour to complete, it will take an hour to flush the tables and get the lock. See also Note 1394432.1 and Note 2222462.1. So this phase can vary a lot between backups, depending on the queries being executed.

On the other hand, the time to perform the actions under lock is expected to be more stable as long as the amount of non-InnoDB data does not change and the number of InnoDB tablespaces remains roughly constant. However, there is one exception: MySQL Enterprise Backup also completes copying the InnoDB redo log while holding the lock. If MySQL Enterprise Backup has not been able to keep up with copying the redo log during the backup of the InnoDB tablespace files, there may be more redo log to copy under lock. It can also be an issue if there is a large history list length, as InnoDB will start to purge it at a high rate when the read lock is obtained; purging the history list also writes to the redo log, which can potentially cause MySQL Enterprise Backup to be unable to catch up - see Note 2179642.1 for an example. In general, MySQL Enterprise Backup 4.1.0 and later is less likely to encounter delays due to copying the InnoDB redo log than earlier versions, as version 4.1.0 changed the redo log handling to use three threads instead of the previous one.

The lock time can be found from the mysql.backup_history table when history logging is enabled (it is by default), for example:

mysql> SELECT backup_id, start_time, end_time, lock_time, exit_state FROM mysql.backup_history ORDER BY start_time;
+-------------------+---------------------+---------------------+-----------+------------+
| backup_id         | start_time          | end_time            | lock_time | exit_state |
+-------------------+---------------------+---------------------+-----------+------------+
| 15058689045194302 | 2017-09-20 10:55:04 | 2017-09-20 11:21:58 |     6.189 | SUCCESS    |
| 15058705335279522 | 2017-09-20 11:22:13 | 2017-09-20 11:23:04 |     6.555 | SUCCESS    |
| 15058705917411573 | 2017-09-20 11:23:11 | 2017-09-20 11:24:12 |    16.630 | SUCCESS    |
+-------------------+---------------------+---------------------+-----------+------------+
3 rows in set (0.00 sec)

However, the lock_time does not include the time it took to obtain the lock. The time to flush the tables and get the read lock can only be found in the backup log, for example:

...
 mysqlbackup: INFO: Unique generated backup id for this is 15058689045194302
...
170920 10:55:46 mysqlbackup: INFO: Starting to lock all the tables...
170920 11:21:50 mysqlbackup: INFO: All tables are locked and flushed to disk
...
170920 11:21:58 mysqlbackup: INFO: All tables unlocked
170920 11:21:58 mysqlbackup: INFO: All MySQL tables were locked for 6.189 seconds.
...

So in this case, it took just over 26 minutes from when FLUSH TABLES WITH READ LOCK was executed until it completed. The duration of holding the lock can also be seen in the log.

Tip: The backup log can be found in the meta directory of the backup directory.
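The flush wait can be computed directly from the two log timestamps. A small sketch (assuming the `yymmdd hh:mm:ss` timestamp format shown in the excerpt above):

```python
from datetime import datetime

def flush_wait_seconds(start_line: str, locked_line: str) -> float:
    """Seconds between 'Starting to lock all the tables' and
    'All tables are locked and flushed to disk', parsed from
    mysqlbackup log lines of the form 'yymmdd hh:mm:ss mysqlbackup: ...'."""
    def ts(line: str) -> datetime:
        date, time = line.split()[:2]
        return datetime.strptime(f"{date} {time}", "%y%m%d %H:%M:%S")
    return (ts(locked_line) - ts(start_line)).total_seconds()

start = "170920 10:55:46 mysqlbackup: INFO: Starting to lock all the tables..."
locked = "170920 11:21:50 mysqlbackup: INFO: All tables are locked and flushed to disk"
print(flush_wait_seconds(start, locked))  # 1564.0 -> just over 26 minutes
```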

Amount of Data to Copy

The amount of data copied may change from backup to backup. There can be several reasons for this, for example:

  • The data in the database has changed since the last backup.
  • For incremental and differential backups, the amount of data depends on the activity since the base backup.
  • The amount of binary logs or relay logs has changed, for example because the time since the last purge of binary logs differs, or because a smaller or larger number of changes has been made.
  • The amount of redo log to copy is different. This happens if the InnoDB activity differs during the backup.

The amount of data read can be seen from the backup log, for example:

...
170920 10:55:20 mysqlbackup: INFO: Copying /home/user/MySQL/Instances/5.7.19_defaults/data/<database>/<tbl4>.ibd (Barracuda file format).
 mysqlbackup: Progress in MB: 200
...

Progress information will be displayed for every 200 MB of data read.
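Assuming the "Progress in MB" counter is cumulative (as it appears in the excerpt above), the total amount of data read can be pulled out of the log with a quick sketch:

```python
import re

def data_read_mb(log_lines):
    """Return the last cumulative 'Progress in MB' value seen in a
    mysqlbackup log, or 0 if no progress lines are present."""
    progress = [int(m.group(1)) for line in log_lines
                if (m := re.search(r"Progress in MB:\s*(\d+)", line))]
    return progress[-1] if progress else 0

log = [
    " mysqlbackup: Progress in MB: 200",
    " mysqlbackup: Progress in MB: 400",
    " mysqlbackup: Progress in MB: 600",
]
print(data_read_mb(log))  # 600
```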

The log will also contain information about the binary log and relay log files backed up, for example:

...
170920 10:55:39 mysqlbackup: INFO: Starting to copy Binlog files...
170920 10:55:39 mysqlbackup: INFO: Copying /var/lib/mysql/ol7-bin.000001.
170920 10:55:39 mysqlbackup: INFO: Copying /var/lib/mysql/ol7-bin.000002.
170920 10:55:39 mysqlbackup: INFO: Copying /var/lib/mysql/ol7-bin.000003.
...

For differential backups, more and more data will be copied to the backup as time passes since the previous full backup. So it is expected that differential backups will take longer and longer the further into the period between full backups you get.

For both incremental and differential backups, the more data that has changed, the more data will be included in the backup, and thus the longer it will take. One thing to be aware of is that OPTIMIZE TABLE rebuilds the whole table and thus causes an incremental or differential backup to back up the whole table again - even if none of the data within the table has changed - see also Note 2120496.1.

The actual size of the backup can be obtained as follows, depending on whether it is a directory backup (the backup or backup-and-apply-log commands) or an image backup (backup-to-image):

  • Directory backups: Check the total size of the datadir directory inside the backup directory, for example:
    shell$ du -sch 2017-09-20_10-55-04/datadir/*
    232M    2017-09-20_10-55-04/datadir/<database>
    4.0K    2017-09-20_10-55-04/datadir/ibbackup_logfile
    4.0K    2017-09-20_10-55-04/datadir/ib_buffer_pool
    76M     2017-09-20_10-55-04/datadir/ibdata1
    12M     2017-09-20_10-55-04/datadir/mysql
    4.0K    2017-09-20_10-55-04/datadir/ol7-bin.000001
    4.0K    2017-09-20_10-55-04/datadir/ol7-bin.000002
    ...
    1.1M    2017-09-20_10-55-04/datadir/performance_schema
    676K    2017-09-20_10-55-04/datadir/sys
    96K     2017-09-20_10-55-04/datadir/ts1.ibd
    112K    2017-09-20_10-55-04/datadir/ts2.ibd
    1020K   2017-09-20_10-55-04/datadir/world
    438M    total
    Some comments about the output:
    • ibbackup_logfile is the file where the InnoDB redo log has been saved. The larger that file is, the more InnoDB changes were made during the backup.
    • ib_buffer_pool is a list of the pages stored in the InnoDB buffer pool. This is used to improve the performance of InnoDB tables just after restarting MySQL. The size will in general not change much between backups unless the InnoDB buffer pool size is changed.
    • There is a directory for each schema. Examples are employees, mysql, performance_schema, sys, and world. The size of each directory shows the amount of data backed up for that schema.
    • There is a file for each system tablespace file (ibdata1) and for each general tablespace (ts1.ibd and ts2.ibd).
  • Image backups: The total size of the backup can be found by checking the size of the image file, for example:
    shell$ ls -lh full_backup.mbi
    -rw-rw-r--. 1 meb meb 437M Sep 20 11:58 full_backup.mbi

    To get more details, use the list-image command:
    shell$ mysqlbackup --backup-image=full_backup.mbi list-image
    ...
    170920 12:00:29 mysqlbackup: INFO: List-image operation starts with following threads
                    1 read-threads    6 process-threads
    [File]: [Size:         287]: backup-my.cnf
    [Dir]: meta
    [File]: [Size:        7374]: meta/backup_create.xml
    [File]: [Size:    79691776]: datadir/ibdata1
    [File]: [Size:       65536]: datadir/ts$.ibd
    [File]: [Size:       98304]: datadir/ts1.ibd
    [File]: [Size:      114688]: datadir/ts2.ibd
    [Dir]: datadir/db1
    [File]: [Size:       98304]: datadir/db1/test1.ibd
    [File]: [Size:       98304]: datadir/db1/test2.ibd
    [Dir]: datadir/db3
    [File]: [Size:    41943040]: datadir/db3/t1.ibd
    [Dir]: datadir/employees
    [File]: [Size:      114688]: datadir/<database>/<tbl1>.ibd
    [File]: [Size:    30408704]: datadir/<database>/<tbl2>.ibd
    [File]: [Size:      131072]: datadir/<database>/<tbl3>.ibd
    [File]: [Size:    23068672]: datadir/<database>/<tbl4>.ibd
    [File]: [Size:   146800640]: datadir/<database>/<tbl5>.ibd
    [File]: [Size:    41943040]: datadir/<database>/<tbl6>.ibd
    [Dir]: datadir/mysql
    [File]: [Size:       98304]: datadir/mysql/engine_cost.ibd
    [File]: [Size:       98304]: datadir/mysql/gtid_executed.ibd
    ...
     
Note: the size of the backup also depends on the backup options. For example, enabling compression will make the resulting files in the backup smaller; how much smaller depends on the data.

Operating System I/O Cache

The operating system uses the memory left over, after processes have taken the memory they require, to cache I/O. An example from Linux can be seen using the free command:

shell$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7982        1235        2761          80        3985        6546
Swap:          2063           0        2063

In this case 3985 MiB is used for buffers and caches.
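To track how much memory is available for the I/O cache over time, the buff/cache value can be parsed out of the free -m output; a sketch assuming the procps column layout shown above:

```python
def buff_cache_mib(free_output: str) -> int:
    """Extract the buff/cache value (in MiB) from `free -m` output.
    Assumes the layout shown above:
    Mem: total used free shared buff/cache available."""
    for line in free_output.splitlines():
        if line.startswith("Mem:"):
            return int(line.split()[5])  # buff/cache is the 6th field
    raise ValueError("no 'Mem:' line found in input")

sample = """\
              total        used        free      shared  buff/cache   available
Mem:           7982        1235        2761          80        3985        6546
Swap:          2063           0        2063"""
print(buff_cache_mib(sample))  # 3985
```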

The effectiveness of the cache very much depends on the data available in it, so it can vary greatly from backup to backup. How to investigate the effectiveness of the I/O cache is beyond the scope of this document - please contact your system administrator.

The main setting in MySQL Server affecting the use of the I/O cache is innodb_flush_method. If it is set to bypass the I/O cache (for example O_DIRECT), InnoDB reads and writes will not affect the content of the cache, which is in general better from an InnoDB performance point of view. On the other hand, if InnoDB does use the operating system I/O cache, it can in some cases benefit the speed of the backup.

Resources

While creating the backup, MySQL Enterprise Backup needs some of the system's resources, for example:

  • Disk I/O: This is one of the main resources required. The backup causes an additional read load on the disks where the MySQL Server data directory is located and a write load on the disk where the backup directory is located. If these are the same disks, the impact is doubled. If the backup is streamed to a remote host or to tape, the write load will be greatly reduced, but not entirely eliminated, as the InnoDB redo log and metadata files are still written to the local backup directory during the backup.
  • CPU: CPU is required both for the I/O and to process the data. In particular, incremental backups scanning the tablespace files and compressed backups require CPU cycles. See Note 1371105.1 for suggestions on how to reduce the CPU usage of the mysqlbackup process.
  • Memory: MySQL Enterprise Backup does not in general require much memory to run, but it will need some. If the memory usage of the backup causes the host to swap, it can greatly increase the duration of the backup. The main tool to control how much memory the mysqlbackup process uses is the --limit-memory option.
  • Network: The network is mostly an issue if the backup is streamed to a remote host or to tape. Also remember to verify whether there are limitations on the other end (including disk usage). This includes writing backups to an NFS partition.

Resource starvation can be difficult to detect. It is necessary to monitor the resource usage during both "fast" and "slow" backups and compare. For example, if the I/O utilization during a "slow" backup is 100% but lower during a "fast" backup, that suggests disk I/O is the limiting factor. Whether the backup causes resource starvation depends on the workload inside MySQL Server (both foreground and background processes) and on other processes.
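As a toy illustration of that comparison, suppose %util samples (for example from iostat) were collected while a "fast" and a "slow" backup ran; the sample values below are made up:

```python
def avg_util(samples):
    """Average a series of disk %util samples taken during a backup window."""
    return sum(samples) / len(samples)

# Hypothetical %util samples gathered while each backup ran
fast_backup = [42.0, 55.5, 48.0, 51.0]
slow_backup = [97.5, 100.0, 99.0, 100.0]

print(f"fast: {avg_util(fast_backup):.1f}%  slow: {avg_util(slow_backup):.1f}%")
if avg_util(slow_backup) > 95.0 > avg_util(fast_backup):
    print("disk I/O is likely the limiting factor for the slow backup")
```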

MySQL Enterprise Monitor (MEM) is recommended for monitoring MySQL and the host: open the graphs and change their time frame to cover different backups, then compare.

 

 

##sample 3  How to enable the general log and have it written to a table

general log

Reference:

For those blessed with MySQL >= 5.1.12, you can control this option globally at runtime:

  1. Execute SET GLOBAL log_output = 'TABLE';
  2. Execute SET GLOBAL general_log = 'ON';
  3. Take a look at the table mysql.general_log

If you prefer to output to a file instead of a table:

  1. SET GLOBAL log_output = 'FILE'; (this is the default.)
  2. SET GLOBAL general_log_file = "/path/to/your/logfile.log";
  3. SET GLOBAL general_log = 'ON';

I prefer this method to editing .cnf files because:

  1. you're not editing the my.cnf file and potentially permanently turning on logging
  2. you're not fishing around the filesystem looking for the query log - or, even worse, distracted by the need for the perfect destination: /var/log, /var/data/log, /opt, /home/mysql_savior/var
  3. You don't have to restart the server and interrupt any current connections to it.
  4. restarting the server leaves you where you started (log is by default still off)

For more information, see MySQL 5.1 Reference Manual - Server System Variables - general_log

 
  • MySQL log tables actually use the CSV engine, so you can do all of the above about enabling logging into the general_log table, and then follow it live like this: tail -f /var/lib/mysql/mysql/general_log.CSV – user1244798 Mar 2 '12
  • Damn it, the MySQL doc for this does not even show the TABLE parameter. Thanks a lot. – Abhishek Dujari May 27 '12
  • Remember to clear out your general log table when you are finished: TRUNCATE TABLE mysql.general_log – pdwalker Oct 3 '14
  • I got the result this way, but parameters are shown as question marks, e.g. SELECT foo FROM bar WHERE x=?. How can I get the complete query? – petertc Nov 4 '15


You can enable a general query log for that sort of diagnostic. Generally you don't log all SELECT queries on a production server though; it's a performance killer.

Edit your MySQL config, e.g. /etc/mysql/my.cnf - look for, or add, a line like this

[mysqld]
log = /var/log/mysql/mysql.log

Restart mysql to pick up that change; now you can:

tail -f /var/log/mysql/mysql.log

Hey presto, you can watch the queries as they come in.

 

SELECT * FROM mysql.general_log WHERE command_type = 'Query' LIMIT 100;  -- adjust the LIMIT as needed
 
 


You can do the following to monitor MySQL query logs.

Open the MySQL configuration file my.cnf:

sudo nano /etc/mysql/my.cnf

Search for the following lines under the [mysqld] heading and uncomment them to enable the log:

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

Restart your MySQL server for the changes to take effect:

sudo service mysql restart

Monitor the MySQL server log with the following command in a terminal:

tail -f /var/log/mysql/mysql.log
 


1) If general MySQL logging is enabled, we can check the queries in the log file or the table, depending on what is specified in the config. Check what is enabled with the following commands:

mysql> show variables like 'general_log%';
mysql> show variables like 'log_output%';

If we need query history in table then

Execute SET GLOBAL log_output = 'TABLE';
Execute SET GLOBAL general_log = 'ON';

Take a look at the table mysql.general_log

If you prefer to output to a file:

SET GLOBAL log_output = "FILE"; which is set by default.
SET GLOBAL general_log_file = "/path/to/your/logfile.log";
SET GLOBAL general_log = 'ON';

2) We can also check past queries in the ~/.mysql_history file: cat ~/.mysql_history

 


Maybe you could find that out by looking at the query log.

 


If the MySQL binlog is enabled, you can check the commands run by users by executing the following command in a Linux console, after changing to the MySQL binlog directory:

mysqlbinlog binlog.000001 >  /tmp/statements.sql

Note that enabling

[mysqld]
log = /var/log/mysql/mysql.log

or the general log will have an effect on the performance of MySQL.

 
  • Possible, but painful. More useful if you are looking to see what happened in the past. – pdwalker Oct 3 '14


If you don't feel like changing your MySQL configuration you could use an SQL profiler like "Neor Profile SQL" http://www.profilesql.com .

 


After reading Paul's answer, I went on digging for more information at https://dev.mysql.com/doc/refman/5.7/en/query-log.html

I found some really useful code posted by someone else. Here is a summary of the context.

(Note: the following code is not mine.)

This script is an example of keeping the table clean and will help you reduce its size. After a day, there will be about 180k logged queries (in a file, that would be about 30 MB per day).

You need to add an additional column (event_unix); then you can use this script to keep the log clean. It converts the timestamp into a Unix timestamp, deletes log entries older than 1 day, and then restores event_time from event_unix. It sounds a bit confusing, but it works great.

Commands for the new column:

SET GLOBAL general_log = 'OFF';
RENAME TABLE general_log TO general_log_temp;
ALTER TABLE `general_log_temp`
ADD COLUMN `event_unix` int(10) NOT NULL AFTER `event_time`;
RENAME TABLE general_log_temp TO general_log;
SET GLOBAL general_log = 'ON';

Cleanup script:

SET GLOBAL general_log = 'OFF';
RENAME TABLE general_log TO general_log_temp;
UPDATE general_log_temp SET event_unix = UNIX_TIMESTAMP(event_time);
DELETE FROM `general_log_temp` WHERE `event_unix` < UNIX_TIMESTAMP(NOW()) - 86400;
UPDATE general_log_temp SET event_time = FROM_UNIXTIME(event_unix);
RENAME TABLE general_log_temp TO general_log;
SET GLOBAL general_log = 'ON';

Credit goes to Sebastian Kaiser (Original writer of the code).

Hope someone will find it useful as I did.

 
  • Interesting. You'd miss queries during the cleanup unless you locked all the DBs first. There are other issues that may arise here. If we're at this point of 'continuous' logging, I'd use file logging or the bin log and an off-the-shelf log rotator. – FlipMcF Apr 9 '18


On Linux you can also look at the root user's client history:

cd /root

ls -al

vi .mysql_history

It may help.

 
  • This looks like it would only work for the official command-line mysql client, and only for queries executed by the root user – golimar Apr 17 '13
  • The question says 'all the server', which makes this answer incorrect. This would only show queries executed by a single client. – FlipMcF Aug 21 '13
  • Some entries in this file are suppressed by default anyway, strings containing "password" for example. – mckenzm May 15 '15

https://stackoverflow.com/questions/650238/how-to-show-the-last-queries-executed-on-mysql

Original post: https://www.cnblogs.com/feiyun8616/p/14487393.html