php opcache

A few tips for getting the best performance out of PHP 7: http://www.laruence.com/2015/12/04/3086.html

Before/after comparison with opcache enabled: CPU time goes down and memory usage goes down.

opcache_reset();  // clear the entire opcache

opcache_compile_file('test2.php'); // compile and cache a specific file

opcache_invalidate('test2.php', true); // invalidate the cache entry for a specific file

opcache_is_script_cached('test2.php'); // check whether a file is cached; returns true/false

opcache_get_status(); // get cache status information

opcache_get_configuration(); // get cache configuration information
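
A minimal PHP sketch of how these calls might be combined to warm and inspect the cache (test2.php is just the placeholder file name used above):

// Warm the cache for one file and inspect the result (sketch; test2.php is a placeholder).
if (function_exists('opcache_compile_file')) {
    opcache_compile_file('test2.php');               // compile and cache the file

    if (opcache_is_script_cached('test2.php')) {     // true once it sits in the cache
        $status = opcache_get_status(false);         // false = omit per-script details
        echo 'cached scripts: ' . $status['opcache_statistics']['num_cached_scripts'] . "\n";
        echo 'memory used: '    . $status['memory_usage']['used_memory'] . " bytes\n";
    }

    opcache_invalidate('test2.php', true);           // force this one file out of the cache
    // opcache_reset();                              // or drop the whole cache
}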

[opcache]
zend_extension = "G:/PHP/php-5.5.6-Win32-VC11-x64/ext/php_opcache.dll"
 
; Master switch for Zend OPcache; when off, code is no longer optimized or cached.
opcache.enable=1
 
; Determines if Zend OPCache is enabled for the CLI version of PHP
opcache.enable_cli=1
 
 
; Size of the OPcache shared memory, i.e. how much precompiled PHP code can be stored (in MB)
; recommended: 128
opcache.memory_consumption=64
 
; Total memory used for interned strings (in MB)
; recommended: 8
opcache.interned_strings_buffer=4
 
 
; Maximum number of cached files, between 200 and 100000
; recommended: 4000
opcache.max_accelerated_files=2000
 
; When this percentage of the shared memory is "wasted", a restart is scheduled.
opcache.max_wasted_percentage=5
 
; When enabled, OPcache appends the current working directory to the script key,
; which avoids key collisions between files with the same name. Disabling it
; improves performance but can break existing applications.
opcache.use_cwd=0
 
 
; Enable file timestamp validation
opcache.validate_timestamps=1
 
 
; Check for file updates every 2 seconds. Note: 0 means check on every request, not "never check".
; recommended: 60
opcache.revalidate_freq=2
 
; Enable or disable the optimization of file lookups in include_path
;opcache.revalidate_path=0
 
 
; Whether to save file/function doc comments; apigen, Doctrine, ZF2 and PHPUnit need them
; recommended: 0
opcache.save_comments=1
 
; Whether to load file/function doc comments
;opcache.load_comments=1
 
 
; Enable fast shutdown; memory is reclaimed faster during PHP request shutdown when this is on
; recommended: 1
opcache.fast_shutdown=1
 
; Allow overriding the optimized handling of file-existence checks (file_exists etc.)
;opcache.enable_file_override=0
 
 
; Controls which optimization passes are run
;opcache.optimization_level=0xffffffff
 
 
; Enable this hack as a temporary workaround for "can't redeclare class" errors.
;opcache.inherited_hack=1
 
; Enable this hack as a temporary workaround for "can't redeclare class" errors.
;opcache.dups_fix=0
 
; Blacklist file for scripts that must not be cached,
; e.g. PHP files starting with cache_ under /png/www/example.com/public_html/cache/cache_
;opcache.blacklist_filename=
 
 
; Exclude files above this size from the cache. By default all files are cached.
;opcache.max_file_size=0
 
; Check cache checksums every N requests. The default 0 disables the checks.
; Computing checksums hurts performance, so enable this only during development and debugging.
;opcache.consistency_checks=0
 
; How long (in seconds) to wait after the cache stops being accessed before a scheduled restart takes place
;opcache.force_restart_timeout=180
 
; Error log file name. Leave empty to use standard error (stderr).
;opcache.error_log=
 
 
; Write error messages to the server log (Apache etc.)
;opcache.log_verbosity_level=1
 
; Preferred shared-memory backend. Leave empty to let the system decide.
;opcache.preferred_memory_model=
 
; Protect shared memory against unexpected writes while scripts are executing; only useful for internal debugging.
;opcache.protect_memory=0

 

[Linux] Installing the VirtualBox Guest Additions (shared folders)

Mount the VBoxLinuxAdditions image

Method 1: use the VM's built-in "Insert Guest Additions CD image" menu item

The image is mounted into the guest automatically.

(Screenshot: VirtualBox Guest Additions 01)

Method 2: locate the ISO image and mount it manually

Not tried here.
(Screenshot: VirtualBox Guest Additions 02)

(Screenshot: VirtualBox Guest Additions 03)

Host path: C:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso

[root@WOM ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_xcdw-lv_root
                      493G  212G  256G  46% /
tmpfs                 4.9G  1.2G  3.7G  25% /dev/shm
/dev/sda1             477M   41M  411M   9% /boot
/dev/mapper/vg_xcdw-lv_home
                      493G  225G  243G  48% /home
/dev/sr0               56M   56M     0 100% /media/VBox_GAs_5.2.12 # Guest Additions image

The VBoxLinuxAdditions image is automatically mounted at /media/VBox_GAs_5.2.12.

Copy the installer: cp /media/VBox_GAs_5.2.12/VBoxLinuxAdditions.run /tmp/VBoxLinuxAdditions.run

Install prerequisite packages

Install the kernel headers and build tools, e.g. yum install kernel sources kernel-devel gcc -y

[root@WOM ~]# yum install kernel sources kernel-devel gcc  -y
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * epel: ftp.cuhk.edu.hk
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.shu.edu.cn
No package sources available.
Package gcc-4.4.7-18.el6_9.2.x86_64 already installed and latest version
Resolving Dependencies

--> Running transaction check
---> Package kernel.x86_64 0:2.6.32-696.28.1.el6 will be installed
--> Processing Dependency: kernel-firmware >= 2.6.32-696.28.1.el6 for package: kernel-2.6.32-696.28.1.el6.x86_64
---> Package kernel-devel.x86_64 0:2.6.32-696.28.1.el6 will be installed
--> Running transaction check
---> Package kernel-firmware.noarch 0:2.6.32-696.el6 will be updated
---> Package kernel-firmware.noarch 0:2.6.32-696.28.1.el6 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
===========================================================================================================================================================================================
 Package                                          Arch                                   Version                                              Repository                               Size
===========================================================================================================================================================================================
Installing:
 kernel                                          x86_64                                 2.6.32-696.28.1.el6                                  updates                                  32 M
 kernel-devel                                    x86_64                                 2.6.32-696.28.1.el6                                  updates                                  11 M
Updating for dependencies:
 kernel-firmware                                 noarch                                 2.6.32-696.28.1.el6                                  updates                                  29 M

Transaction Summary
===========================================================================================================================================================================================
Install       2 Package(s)
Upgrade       1 Package(s)

Total download size: 72 M
Downloading Packages:
(1/3): kernel-2.6.32-696.28.1.el6.x86_64.rpm                                                                                                                        |  32 MB     00:14     
(2/3): kernel-devel-2.6.32-696.28.1.el6.x86_64.rpm                                                                                                                  |  11 MB     00:16     
http://mirrors.shu.edu.cn/centos/6.9/updates/x86_64/Packages/kernel-firmware-2.6.32-696.28.1.el6.noarch.rpm: [Errno 12] Timeout on http://mirrors.shu.edu.cn/centos/6.9/updates/x86_64/Packages/kernel-firmware-2.6.32-696.28.1.el6.noarch.rpm: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
Trying other mirror.
(3/3): kernel-firmware-2.6.32-696.28.1.el6.noarch.rpm                                                                                                               |  29 MB     00:43     
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                      234 kB/s |  72 MB     05:15     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : kernel-firmware-2.6.32-696.28.1.el6.noarch                                                                                                                                1/4 
  Installing : kernel-2.6.32-696.28.1.el6.x86_64                                                                                                                                         2/4 
  Installing : kernel-devel-2.6.32-696.28.1.el6.x86_64                                                                                                                                   3/4 
  Cleanup    : kernel-firmware-2.6.32-696.el6.noarch                                                                                                                                     4/4 
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.
  Verifying  : kernel-firmware-2.6.32-696.28.1.el6.noarch                                                                                                                                1/4 
  Verifying  : kernel-devel-2.6.32-696.28.1.el6.x86_64                                                                                                                                   2/4 
  Verifying  : kernel-2.6.32-696.28.1.el6.x86_64                                                                                                                                         3/4 
  Verifying  : kernel-firmware-2.6.32-696.el6.noarch                                                                                                                                     4/4 

Installed:
  kernel.x86_64 0:2.6.32-696.28.1.el6                                                        kernel-devel.x86_64 0:2.6.32-696.28.1.el6                                                       

Dependency Updated:
  kernel-firmware.noarch 0:2.6.32-696.28.1.el6                                                                                                                                               

Complete!

Run the installer script from the image

Directory: /tmp
Run: sh VBoxLinuxAdditions.run

[root@WOM tmp]# sh VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.2.12 Guest Additions for Linux........
VirtualBox Guest Additions installer
Removing installed version 5.2.12 of VirtualBox Guest Additions...
You may need to restart your guest system to finish removing the guest drivers.
Copying additional installer modules ...
Installing additional modules ...
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
    kernel-devel kernel-devel-2.6.32-696.el6.x86_64
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
    kernel-devel kernel-devel-2.6.32-696.el6.x86_64

As the output above shows, the system needs to be rebooted (the kernel-devel headers that were just installed match the new kernel, not the one currently running).
Before rebooting, configure the shared folder first.

Add a shared folder

(Screenshot: VirtualBox Guest Additions 04)

Mount the shared folder

Switch to the root user and run the mount command:

sudo mount -t vboxsf shared_file /home/xingoo/shared

The general form is:

sudo mount -t vboxsf <shared folder name, as set in the VM settings> <mount point>

Setup complete; verifying the result

Check whether the shared folder has been mounted:

[root@WOM ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_xcdw-lv_root
                      493G  212G  256G  46% /
tmpfs                 4.9G     0  4.9G   0% /dev/shm
/dev/sda1             477M   79M  374M  18% /boot
/dev/mapper/vg_xcdw-lv_home
                      493G  225G  243G  48% /home
share_file            1.9T  1.7T  128G  94% /media/sf_share_file

A new filesystem named share_file now shows up.
If text files can be copied between the guest and the host in both directions, the setup was successful.

Original article: https://blog.csdn.net/qq_21165007/article/details/80344810

php redis command reference

Source: https://www.cnblogs.com/jackluo/p/5708024.html

String operations

string is the most basic Redis type, and it is binary-safe, which means a Redis string can hold any data, such as a JPEG image or a serialized object.
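
All of the snippets below assume a connected phpredis client called $redis; a minimal sketch of creating one (the host and port are assumptions for a local server):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);   // assumed local Redis instance
// $redis->auth('password');          // only needed if the server requires AUTH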

$redis->set('key','TK');
$redis->set('number','1');
$redis->setex('key',5,'TK'); // set a key with a 5-second TTL
$redis->psetex('key',5000,'TK'); // set a key with a 5000-millisecond (i.e. 5-second) TTL
$redis->setnx('key','XK'); // returns false if the key already exists, true if it does not
$redis->delete('key'); // delete a key; an array such as array('key1','key2') deletes several keys
$redis->getSet('key','XK'); // set key to XK and return its previous value, TK
 $ret = $redis->multi()  // MULTI block: commands are queued and executed together by exec()
        ->set('key1', 'val1')
        ->get('key1')
        ->setnx('key', 'val2')
        ->get('key2')
        ->exec();
$redis->watch('key');   // watch key for modifications by other clients;
                        // if key is changed between watch() and exec(), the exec() fails
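
A small sketch of the watch()/multi()/exec() pattern just described, reusing the $redis client from above; the key name 'balance' and the retry loop are illustrative assumptions:

// Optimistic locking: retry the MULTI block if 'balance' is changed
// by another client between watch() and exec().
do {
    $redis->watch('balance');
    $current = (int) $redis->get('balance');   // read outside the transaction
    $ret = $redis->multi()                     // queue the update
                 ->set('balance', $current + 100)
                 ->exec();                     // an aborted exec() returns false / an empty array
} while (empty($ret));                         // retry until the transaction goes through
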
function f($redis, $chan, $msg) {  // channel subscription callback
    switch($chan) {
        case 'chan-1':
            echo $msg;
            break;

        case 'chan-2':
            echo $msg;
            break;

        case 'chan-3':
            echo $msg;
            break;
    }
}

$redis->subscribe(array('chan-1', 'chan-2', 'chan-3'), 'f'); // subscribe to 3 channels

$redis->publish('chan-1', 'hello, world!'); // send a message
$redis->exists('key'); // check whether the key exists; returns true if it does
$redis->incr('number'); // increment the value by 1
$redis->incrby('number',-10); // increment the value by an integer (here -10, i.e. a decrement)
$redis->incrByFloat('number', 1.5); // increment the value by a float (may be negative)
$redis->decr('number'); // decrement the value by 1
$redis->decrBy('number',10); // decrement the value by 10
$mget = $redis->mget(array('number','key')); // get several keys at once; returns an array
$redis->mset(array('key0' => 'value0', 'key1' => 'value1')); // set several keys at once
$redis->msetnx(array('key0' => 'value0', 'key1' => 'value1'));
                                        // set several keys at once, setnx() applied in bulk
$redis->append('key', '-Smudge'); // append to the value; if it was TK it becomes TK-Smudge
$redis->getRange('key', 0, 5); // substring of the value from position 0 through 5
$redis->getRange('key', -6, -1); // substring from -6 (6th from the end) through -1 (last character)
$redis->setRange('key', 0, 'Smudge');
                                    // overwrite part of the value starting at position 0;
                                    // as many characters are replaced as are supplied (a Chinese character takes 2 positions)
$redis->strlen('key'); // length of the value
$redis->getBit('key', 0); // get the bit at offset 0
$redis->setBit('key', 0, 1); // set the bit at offset 0 to 1
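
A practical sketch combining incr() with a TTL into a simple request counter, reusing $redis (the key name, window and limit are assumptions; expire() is a standard phpredis call not listed above):

// Allow at most 10 requests per user within a 60-second window.
$key = 'req_count:user42';
$count = $redis->incr($key);      // atomically add 1; the key is created at 1 if missing
if ($count === 1) {
    $redis->expire($key, 60);     // start the 60-second window on the first hit
}
if ($count > 10) {
    echo "rate limit exceeded\n";
}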

List operations

$redis->delete('list-key'); // delete the list
$redis->lPush('list-key', 'A'); // push onto the head/left of the list; returns the new length
$redis->rPush('list-key', 'B'); // push onto the tail/right of the list; returns the new length
$redis->lPushx('list-key', 'C');
                 // push onto the head/left; returns 0 if the list does not exist,
                 // otherwise the push succeeds and the new length is returned
$redis->rPushx('list-key', 'C');
                 // push onto the tail/right; returns 0 if the list does not exist,
                 // otherwise the push succeeds and the new length is returned
$redis->lPop('list-key'); // pop and return the head (left) element - last in, first out (stack)
$redis->rPop('list-key'); // pop and return the tail (right) element - first in, first out (queue)
$redis->blPop('list-key', 10); // blocking lPop, waits up to 10 seconds for an element
$redis->brPop('list-key', 10); // blocking rPop, waits up to 10 seconds for an element
$redis->lSize('list-key');
                    // returns the list length, 0 for an empty list;
                    // returns false if the key is not a list - test with " === false "
$redis->lGet('list-key',-1); // get an element by index: 0 is the leftmost, -1 the last one
$redis->lSet('list-key', 0, 'X'); // replace the element at position 0 with X
$redis->lRange('list-key', 0, 3);
                    // slice of the list from index 0 through 3; an end index of -1 returns everything from the start index on
$redis->lTrim('list-key', 0, 1); // trim the list (irreversibly) to indexes 0 through 1
$redis->lRem('list-key', 'C', 2); // remove up to 2 occurrences of C, scanning from the left
$redis->lInsert('list-key', Redis::BEFORE, 'C', 'X');
                    // insert X before the element C (Redis::AFTER inserts after it);
                    // returns 0 if the list does not exist, -1 if C is not found
$redis->rpoplpush('list-key', 'list-key2');
                    // pop the last element of the source list and push it
                    // onto the head (left) of the destination list

$redis->brpoplpush('list-key', 'list-key2', 10);
                    // blocking version of rpoplpush; the third argument is a timeout:
                    // if the source list is empty it blocks up to that many seconds waiting for an element
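
The list commands above are often combined into a simple producer/consumer queue; a minimal sketch reusing $redis (the queue name and payload are assumptions):

// 'jobs' used as a FIFO queue: producers rPush, a worker blocks on blPop.
$redis->rPush('jobs', json_encode(array('task' => 'send_mail', 'to' => 'a@example.com')));

// worker side: wait up to 5 seconds for a job, then handle it
$job = $redis->blPop('jobs', 5);          // array('jobs', payload), or an empty array on timeout
if (!empty($job)) {
    $payload = json_decode($job[1], true);
    // ... process $payload ...
}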

Set operations

A set is an unordered collection that does not allow duplicate elements; the server can run operations across several sets.
$redis->sMembers('key'); // all members of the set key
$redis->sAdd('key' , 'TK');
                 // add TK to the set; returns false if TK is already present,
                 // true if it was added successfully
$redis->sRem('key' , 'TK'); // remove TK from the set
$redis->sMove('key','key1','TK'); // move TK from set key to set key1; returns TRUE on success
$redis->sIsMember('key','TK'); // check whether the value is a member of the set
$redis->sCard('key'); // number of members in the set
$redis->sPop('key'); // return a random member and remove it from the set
$redis->sRandMember('key'); // return a random member without removing it
$redis->sInter('key','key1');
     // intersection of the two sets; an empty array if there is none;
     // with a single set as argument, the whole set is returned as an array
$redis->sInterStore('store','key','key1'); // store the intersection of key and key1 in set store; returns 1 on success
$redis->sUnion('key','key1'); // union of key and key1; elements present in several sets appear only once

$redis->sUnionStore('store','key','key1');
            // store the union of key and key1 in set store; duplicate elements are kept only once
$redis->sDiff('key','key1','key2'); // array of elements that are in key but in neither key1 nor key2
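
A short sketch applying the set commands to tagging, e.g. finding articles that carry two tags at once, reusing $redis (key and member names are assumptions):

// Articles tagged 'php' and 'redis'; sInter() returns the overlap.
$redis->sAdd('tag:php', 'article:1', 'article:2', 'article:3');
$redis->sAdd('tag:redis', 'article:2', 'article:3', 'article:4');

$both = $redis->sInter('tag:php', 'tag:redis');   // array('article:2', 'article:3')
print_r($both);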

Zset (sorted set) operations

A zset (sorted set) is, like a set, a collection of strings, but each element is associated with a score of type double. (Redis's list type, by contrast, is a doubly linked list whose elements are all strings.)

$redis->zAdd('tkey', 1, 'A');
                           // add A to the set tkey with an associated score; returns 1 on success;
                           // members must be unique, so if A already exists 0 is returned
$redis->zRange('tkey',0,-1); // members from position 0 through -1 (i.e. all of them), lowest score first
$redis->zRange('tkey',0,-1, true);
                    // same range, but returns an associative array including the scores,
                    // e.g. array([A] => 0.01, [B] => 0.02, [D] => 0.03); the scores come from zAdd()'s second argument
$redis->zDelete('tkey', 'B'); // remove member B from tkey; returns 1 on success, 0 on failure
$redis->zRevRange('tkey', 0, -1); // members from position 0 through -1, sorted by descending score

$redis->zRevRange('tkey', 0, -1,true);
                // same, sorted by descending score, returned as a member => score associative array
$redis->zRangeByScore('tkey', 0, 0.2,array('withscores' => true));
            // members of tkey whose score lies in [0, 0.2], ordered by ascending score;
            // members with equal scores are ordered lexicographically; 'withscores' returns an associative array
$redis->zRangeByScore('tkey', 0.1, 0.36, array('withscores' => TRUE, 'limit' => array(0, 1)));
             // 'limit' => array(0, 1) means: starting at offset 0 of the matching members, return 1 of them, as an associative array
$redis->zCount('tkey', 2, 10); // number of members of tkey with a score in [2, 10]
$redis->zRemRangeByScore('tkey', 1, 3); // remove members of tkey with a score in [1, 3] (bounds included)
$redis->zRemRangeByRank('tkey', 0, 1);
                         // remove members by rank (scores ascending by default), here ranks 0 through 1
$redis->zSize('tkey');  // number of members in the sorted set stored at the key
$redis->zScore('tkey', 'A'); // score of member A in tkey
$redis->zRank('tkey', 'A');
                      // rank (index) of member A in tkey;
                      // members are ordered by ascending score, so the lowest score has index 0
$redis->zIncrBy('tkey', 2.5, 'A'); // add 2.5 to the score of member A in tkey
$redis->zUnion('union', array('tkey', 'tkey1'));
        // merge the members of tkey and tkey1 into the set union (no duplicates);
        // returns the number of members in the new set; if A exists in both tkey and tkey1, its scores are added together
$redis->zUnion('ko2', array('k1', 'k2'), array(5, 2));
        // union of k1 and k2 stored in ko2; the weights array(5, 2) match the source sets:
        // every score in k1 is multiplied by 5 and every score in k2 by 2,
        // then members are ordered by ascending score; scores of shared members are added (SUM) by default
$redis->zUnion('ko2', array('k1', 'k2'), array(10, 2),'MAX');
        // after applying the weights, shared members keep the largest score (MAX);
        // MIN keeps the smallest instead
$redis->zInter('ko1', array('k1', 'k2'));
        // intersection of k1 and k2 stored in ko1, ordered by ascending score;
        // scores of shared members are added together
$redis->zInter('ko1', array('k1', 'k2'), array(5, 1));
        // intersection of k1 and k2 stored in ko1; the weights array(5, 1) match the source sets:
        // every score in k1 is multiplied by 5 and every score in k2 by 1,
        // then members are ordered by ascending score; scores are added (SUM) by default
$redis->zInter('ko1', array('k1', 'k2'), array(5, 1),'MAX');
        // after applying the weights, shared members keep the largest score (MAX);
        // MIN keeps the smallest instead
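
The score handling above maps naturally onto a leaderboard; a minimal sketch reusing $redis (key and member names are assumptions):

// Leaderboard: zIncrBy() to add points, zRevRange() for the top entries.
$redis->zAdd('leaderboard', 0, 'alice');
$redis->zAdd('leaderboard', 0, 'bob');
$redis->zIncrBy('leaderboard', 50, 'alice');           // alice: 50
$redis->zIncrBy('leaderboard', 30, 'bob');             // bob: 30

$top = $redis->zRevRange('leaderboard', 0, 2, true);   // top 3 with scores, highest first
print_r($top);                                         // array('alice' => 50, 'bob' => 30)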

Hash operations

A Redis hash maps string fields to string values. Adding and deleting fields are O(1) on average, which makes hashes particularly well suited to storing objects.

$redis->hSet('h', 'name', 'TK'); // set field name to TK in hash h
$redis->hSetNx('h', 'name', 'TK');
         // set field name to TK in hash h; returns false if the field already has a value, true otherwise
$redis->hGet('h', 'name'); // get the value of field name in hash h
$redis->hLen('h'); // length of hash h, i.e. its number of fields
$redis->hDel('h','email'); // delete field email from hash h
$redis->hKeys('h'); // all field names of hash h
$redis->hVals('h'); // all field values of hash h
$redis->hGetAll('h'); // all fields and values of hash h, as an associative array keyed by field
$redis->hExists('h', 'email'); // whether field email exists in hash h; false if it does not
$redis->hSet('h', 'age', 28);
$redis->hIncrBy('h', 'age', -2);
 // add -2 to field age of hash h; returns false if the value is not numeric, otherwise the new value
$redis->hIncrByFloat('h', 'age', -0.33);
        // add -0.33 to field age of hash h; returns false if the value is not numeric,
        // otherwise the new value (with up to 15 decimal places)
$redis->hMset('h', array('score' => '80', 'salary' => 2000)); // set several fields of hash h at once
$redis->hMGet('h', array('score','salary')); // get several fields of hash h at once
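
A brief sketch of the "hash per object" idea mentioned above, reusing $redis (the key and field names are assumptions):

// Store a user record as one hash and read it back as an array.
$redis->hMset('user:42', array(
    'name'  => 'TK',
    'email' => 'tk@example.com',
    'age'   => 28,
));

$redis->hIncrBy('user:42', 'age', 1);   // birthday
print_r($redis->hGetAll('user:42'));    // array('name' => 'TK', 'email' => ..., 'age' => '29')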

mongodb installation and usage

1. Download MongoDB

Download the package that matches your operating system.

Download: https://www.mongodb.com/download-center?jmp=nav#community

2. Install MongoDB

# extract the binary package

tar -zxvf mongodb-linux-x86_64-rhel70-4.0.1.tgz

# move the extracted directory to the target location

mv mongodb-linux-x86_64-rhel70-4.0.1 /usr/local/mongodb

3. Configure and start MongoDB

Create a mongodb.conf configuration file:

# log file location (these paths can all be customized)
logpath=/data/db/journal/mongodb.log
# listen address
bind_ip = 0.0.0.0
# append to the log file instead of overwriting it
logappend=true

# run as a daemon (fork a background process)
fork = true

# default port is 27017
#port = 27017

# database files location
dbpath=/data/db

# periodically log CPU utilization and I/O wait
#cpu = true

# whether to require authentication; the default is the non-authenticated (insecure) mode
#noauth = true
#auth = true

# verbose logging
#verbose = true

# Inspect all client data for validity on receipt (useful for
# developing drivers)
#objcheck = true

# Enable db quota management
#quota = true
# Set oplogging level where n is
#   0=off (default)
#   1=W
#   2=R
#   3=both
#   7=W+some reads
#diaglog=0

# Diagnostic/debugging option
#nocursors = true

# Ignore query hints
#nohints = true
# Disable the HTTP interface (defaults to localhost:28017)
#nohttpinterface = true

# Turns off server-side scripting.  This will result in greatly limited
# functionality
#noscripting = true
# Turns off table scans.  Any query that would do a table scan fails.
#notablescan = true
# Disable data file preallocation.
#noprealloc = true
# Specify .ns file size for new databases (in MB).
# nssize =

# Replication Options
# in replicated mongo databases, specify the replica set name here
#replSet=setname
# maximum size in megabytes for replication operation log
#oplogSize=1024
# path to a key file storing authentication info for connections
# between replica set members
#keyFile=/path/to/keyfile

Detailed option descriptions: https://www.cnblogs.com/zhoujinyi/p/3130231.html

If errors are reported at startup, check whether the configuration file is correct.

4. Install the PHP extension

Download: https://pecl.php.net/package/mongodb

Windows GUI client:

https://robomongo.org/download
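
Once the pecl mongodb extension is loaded, inserting and querying a document from PHP might look like this minimal sketch (the server address, database and collection names are assumptions):

// Sketch using the low-level MongoDB\Driver classes; assumes mongod on localhost:27017.
$manager = new MongoDB\Driver\Manager('mongodb://127.0.0.1:27017');

// insert one document into test.users
$bulk = new MongoDB\Driver\BulkWrite();
$bulk->insert(array('name' => 'TK', 'age' => 28));
$manager->executeBulkWrite('test.users', $bulk);

// query it back
$query  = new MongoDB\Driver\Query(array('name' => 'TK'));
$cursor = $manager->executeQuery('test.users', $query);
foreach ($cursor as $doc) {
    var_dump($doc);
}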

spring boot idea hot reload (devtools) not taking effect

Fixing devtools not taking effect

  • Check the dependency
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-devtools</artifactId>
	<optional>true</optional>
</dependency>
  • Turn on IDEA's automatic build feature

  • Allow IDEA to keep compiling automatically while the application is running: press Ctrl + Shift + Alt + /, choose Registry, and tick compiler.automake.allow.when.app.running

spring boot task

Tasks

Scheduled tasks

  • Enable the @EnableScheduling annotation
@SpringBootApplication
@EnableScheduling
public class ExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(ExampleApplication.class, args);
    }
}
  • Define the task class with @Component and @Scheduled (cron syntax follows linux crontab)
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CronTask {
    
    // runs once every 5 seconds
    @Scheduled(cron = "0/5 * * * * *")
    public void hello(){
        System.out.println("hello!");
    }
}

Async tasks

  • Add the @EnableAsync annotation to enable async support
@SpringBootApplication
@EnableScheduling
@EnableAsync
public class ExampleApplication {

	public static void main(String[] args) {
		SpringApplication.run(ExampleApplication.class, args);
	}
}
  • Write the async tasks with @Component and @Async
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

@Component
public class AsyncTask {
    @Async
    public void async1(){
        try{
            Thread.sleep(1000);
        }catch (Exception e){

        }
    }
    @Async
    public void async2(){
        try{
            Thread.sleep(700);
        }catch (Exception e){

        }
    }

    @Async
    public void async3(){
        try{
            Thread.sleep(500);
        }catch (Exception e){

        }
    }

}

spring boot websocket

websocket server

  • Add the websocket dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
</dependency>
  • Configure the ServerEndpointExporter bean
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.server.standard.ServerEndpointExporter;

@Configuration
public class WebsocketConf {

    @Bean
    public ServerEndpointExporter serverEndpointExporter(){
        return new ServerEndpointExporter();
    }

}
  • Write the server-side endpoint code

import org.springframework.stereotype.Component;

import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;
import java.util.concurrent.CopyOnWriteArrayList;

/**
 * Endpoint address: ws://127.0.0.1:8080/ws
 */

@ServerEndpoint("/ws")
@Component
public class WebSocketServer {
    /**
     * Client session
     */
    private Session session;

    // shared across all endpoint instances (the container creates one WebSocketServer per connection), hence static
    private static CopyOnWriteArrayList<WebSocketServer> webSocketServers = new CopyOnWriteArrayList<>();

    /**
     * Client connected event
     * @param session
     */
    @OnOpen
    public void onOpen(Session session){
        this.session = session;
        webSocketServers.add(this);
        System.out.println("[ 新客户端连接上了] 当前在线客户端数:" + webSocketServers.size());
    }

    /**
     * Client disconnected event
     */
    @OnClose
    public void onClose(){
        webSocketServers.remove(this);
        System.out.println("[ 客户端断开连接 ] 当前在线客户端数:" + webSocketServers.size());
    }

    /**
     * Message received event
     * @param message the received message
     */
    @OnMessage
    public void onMessage(String message){
        System.out.println("收到消息[" + toString() + "] :" + message );
    }

    /**
     * Broadcast a message to every connected client
     * @param message the payload to broadcast
     */
    public void sendMessage(String message){
        if ( webSocketServers != null && webSocketServers.size() > 0 ){
            for (WebSocketServer websocketServer : webSocketServers){
                try {
                    websocketServer.session.getBasicRemote().sendText(message);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

}

websocket test client: http://www.blue-zero.com/WebSocket/

Understanding I/O: Random vs. Sequential

Source: https://blog.csdn.net/BaiWfg2/article/details/52885287

Storage for DBAs: Ever been to one of those sushi restaurants where the food comes round in dishes on a conveyor belt? As each dish travels around the loop you eye it up and, as long as you can make your mind up in time, grab it. However, if you are as indecisive as me, there’s a chance it will be out of range before you come to your senses – in which case you have to wait for it to complete a further full revolution before getting another chance. And that’s assuming someone else doesn’t get to it first.

Let’s assume that it takes a dish exactly 4 minutes to complete a whole lap of the conveyor belt. And just for simplicity’s sake let’s also assume that no two dishes on the belt are identical. As a hungry diner you look in the little menu and see a particular dish which you decide you want. It’s somewhere on the belt, so how long will it take to arrive?

Probability dictates that it could be anywhere on the belt. It could be passing by right now, requiring no wait time – or it could have just passed out of reach, thus requiring 4 minutes of wait time to go all the way round again. As you follow this random method (choose from the menu then look at the belt) it makes sense that the average wait time will tend towards halfway between the min and max wait times, i.e. 2 minutes in this case. So every time you pick a dish you wait an average of 2 minutes: if you have eight dishes the odds say that you will spend (8 x 2) = 16 minutes waiting for your food. Welcome to the disk data diet, I hope you weren’t too hungry?

Now let’s consider an alternative option, where you order eight dishes from the chef and he or she places all of them sequentially (i.e. next to each other) somewhere on the conveyor belt. That location is random, so again you might have to wait anywhere between 0 and 4 minutes (an average of 2 minutes) for the first dish to pass… but the next seven will follow one after the other with no wait time. So now, in this scenario, you only had to wait 2 minutes for all eight dishes. Much better.

I’m sure you will have seen through my analogy right from the start. The conveyor belt is a hard disk and the sushi dishes are blocks which are being eaten / read. I haven’t yet worked out how to factor a bottle Asahi Super Dry into this story, but I’ll have one all the same thanks.

Random versus Sequential I/O

I have another article planned for later in this series which describes the inescapable mechanics of disk. For now though, I’ll outline the basics: every time you need to access a block on a disk drive, the disk actuator arm has to move the head to the correct track (the seek time), then the disk platter has to rotate to locate the correct sector (the rotational latency). This mechanical action takes time, just like the sushi travelling around the conveyor belt.

我改日会有另外一篇文章来谈磁盘原理。但现在,我大概说一下基本内容:每次访问磁盘的一个块时,磁臂就需移动到正确的磁道上(这段时间为寻址时间),然后盘片就需旋转到正确的扇区上(这叫旋转时延)。这套动作需要时间,正如寿司在传送带上传送需要时间一样。

Obviously the amount of time depends on where the head was previously located and how fortunate you are with the location of the sector on the platter: if it’s directly under the head you do not need to wait, but if it just passed the head you have to wait for a complete revolution. Even on the fastest 15k RPM disk that takes 4 milliseconds (15,000 rotations per minute = 250 rotations per second, which means one rotation is 1/250th of a second or 4ms). Admittedly that’s faster than the sushi in my earlier analogy, but the chances are you will need to read or write a far larger number of blocks than I can eat sushi dishes (and trust me, on a good day I can pack a fair few away).

很明显总共的时间依赖于磁头的初使位置,还有要访问的扇区的位置。如果它刚好就在磁头下方,那不需要等待;如果刚刚经过磁头,那就不得不等上一个周期时间。哪怕对于最快的15k RPM磁盘,每分钟15000转,每秒250转,那么一转需要4ms。很明显比刚才寿司的情况要快得多,但是很多时候需要读上大量的数据块,远远超过我要吃的寿司量。相信我,这种时候的时间我都可以打包好几份了。

What about the next block? Well, if that next block is somewhere else on the disk, you will need to incur the same penalties of seek time and rotational latency. We call this type of operation a random I/O. But if the next block happened to be located directly after the previous one on the same track, the disk head would encounter it immediately afterwards, incurring no wait time (i.e. no latency). This, of course, is a sequential I/O.

那下一个磁盘块又是如何呢?如果它在磁盘的某个地方,访问它会有同样的寻道和旋转时延,我们就把这种方式的IO叫做随机IO;但是如果它刚好就在你刚才访问的那一个磁盘块的后面,磁头就能立刻遇到,不需等待,这种IO就叫顺序IO

Size Matters

In my last post I described the Fundamental Characteristics of Storage: Latency, IOPS and Bandwidth (or Throughput). As a reminder, IOPS stands for I/Os Per Second and indicates the number of distinct Input/Output operations (i.e. reads or writes) that can take place within one second. You might use an IOPS figure to describe the amount of I/O created by a database, or you might use it when defining the maximum performance of a storage system. One is a real-world value and the other a theoretical maximum, but they both use the term IOPS.

在我上一篇博文中讲到了磁盘的基本特征:延时、IOPS和带宽(或叫吞吐量)。这里再说一次,IOPS是每秒I/O数的简称,表示一秒中输入输出操作(比如读和写)的次数。可以用IOPS数值来描述一个数据库的IO操作量,或者在定义一个存储系统的最大性能时采用这个词。前者是一种真实世界的值,后者是一个理论最大值,它们都IOPS这个术语。

When describing volumes of data, things are slightly different. Bandwidth is usually used to describe the maximum theoretical limit of data transfer, while throughput is used to describe a real-world measurement. You might say that the bandwidth is the maximum possible throughput. Bandwidth and throughput figures are usually given in units of size over units of time, e.g. Mb/sec or GB/sec. It pays to look carefully at whether the unit is using bits (b) or bytes (B), otherwise you are likely to end up looking a bit silly (sadly, I speak from experience). In the previous post we stated that IOPS and throughput were related by the following relationship:

当描述大量数据时,情况就有所不同了。带宽用来描述数据传输的理论最大值,而吞吐量是实际值。你可以说带宽是吞吐量的上限。带宽和吞吐量数值经常带有单位时间上的单位大小的单位,如Mb/sec,Gb/sec.注意这里b和B是不同的,前者是位,后者是字节。在上一篇博文中,我们讲到了IOPS和吞吐量之间有这样的关系:

Throughput   =   IOPS   x   I/O size

It’s time to start thinking about that I/O size now. If we read or write a single random block in one second then the number of IOPS is 1 and the I/O size is also 1 (I’m using a unit of “blocks” to keep things simple). The Throughput can therefore be calculated as (1 x 1) = 1 block / second.

Alternatively, if we wanted to read or write eight contiguous blocks from disk as a sequential operation then this again would only result in the number of IOPS being 1, but this time the I/O size is 8. The throughput is therefore calculated as (1 x 8) = 8 blocks / second.
Hopefully you can see from this example the great benefit of sequential I/O on disk systems: it allows increased throughput. Every time you increase the I/O size you get a corresponding increase in throughput, while the IOPS figure remains resolutely fixed. But what happens if you increase the number of IOPS?

Latency Kills Disk Performance

In the example above I described a single-threaded process reading or writing a single random block on a disk. That I/O results in a certain amount of latency, as described earlier on (the seek time and rotational latency). We know that the average rotational latency of a 15k RPM disk is 4ms, so let’s add another millisecond for the disk head seek time and call the average I/O latency 5ms. How many (single-threaded) random IOPS can we perform if each operation incurs an average of 5ms wait? The answer is 1 second / 5 ms = 200 IOPS. Our process is hitting a physical limit of 200 IOPS on this disk.
What do you do if you need more IOPS? With a disk system you only really have one choice: add more disks. If each spindle can drive 200 IOPS and you require 80,000 IOPS then you need (80,000 / 200) = 400 spindles. Better clear some space in that data centre, eh?
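
Writing the arithmetic from the last two paragraphs out explicitly (the 1 ms seek time is the author's rough figure):

\[
\text{IOPS}_{\max} = \frac{1\ \text{s}}{t_{\text{seek}} + t_{\text{rot}}} = \frac{1000\ \text{ms}}{1\ \text{ms} + 4\ \text{ms}} = 200
\qquad
\text{spindles needed} = \frac{80{,}000\ \text{IOPS}}{200\ \text{IOPS per disk}} = 400
\]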

On the other hand, if you can perform the I/O sequentially you may be able to reduce the IOPS requirement and increase the throughput, allowing the disk system to deliver more data. I know of Oracle customers who spend large amounts of time and resources carving up and re-ordering their data in order to allow queries to perform sequential I/O. They figure that the penalty incurred from all of this preparation is worth it in the long run, as subsequent queries perform better. That’s no surprise when the alternative was to add an extra wing to the data centre to house another bunch of disk arrays, plus more power and cooling to run them. This sort of “no pain, no gain” mentality used to be commonplace because there really weren’t any other options. Until now.

Flash Offers Another Way

The idea of sequential I/O doesn’t exist with flash memory, because there is no physical concept of blocks being adjacent or contiguous. Logically, two blocks may have consecutive block addresses, but this has no bearing on where the actual information is electronically stored. You might therefore say that all flash I/O is random, but in truth the principles of random I/O versus sequential I/O are disk concepts so don’t really apply. And since the latency of flash is sub-millisecond, it should be possible to see that, even for a single-threaded process, a much larger number of IOPS is possible. When we start considering concurrent operations things get even more interesting… but that topic is for another day.

Back to the sushi analogy, there is no longer a conveyor belt – the chefs are standing right in front of you. When you order a dish, it is placed in front of you immediately. Order a number of dishes and you might want to enlist the help of a few friends to eat in parallel, because the food will start arriving faster than you can eat it on your own. This is the world of flash memory, where hunger for data can be satisfied and appetites can be fulfilled. Time to break that disk diet, eh?

Looking back at the disk model, all that sitting around waiting for the sushi conveyor belt just takes too long. Sure you can add more conveyor belts or try to get all of your sushi dishes arranged in a line, but at the end of the day the underlying problem remains: it’s disk. And now that there’s an alternative, disk just seems a bit too fishy to me…

High Performance MySQL, 3rd Edition

Chapter 1: MySQL architecture and history

Logical architecture:

Layer 1: connection/thread handling - connection management, authentication, security, and so on;

Layer 2: query parsing, analysis, optimization, caching, and all built-in functions (e.g. date and time functions);

Layer 3: the storage engines, responsible for storing and retrieving the data in MySQL.

Locks: shared and exclusive locks (read locks and write locks)

Lock granularity: table locks and row locks

Transactions (ACID)

Atomicity: a transaction is an indivisible unit of work; either all of it succeeds or all of it fails.

Consistency

Isolation: changes made by a transaction are invisible to other transactions until the final commit.

Durability: once committed, the data is stored permanently in the database.

Isolation levels (reference link: )

Setting the isolation level: SET TRANSACTION ISOLATION LEVEL <level name>

READ UNCOMMITTED: dirty reads - changes made inside a transaction are visible to other transactions even before they are committed.

READ COMMITTED: non-repeatable reads; only committed changes are visible, which satisfies the isolation requirement.

REPEATABLE READ: MySQL's default isolation level; it solves the dirty-read problem but not phantom reads (while a transaction reads a range of rows, another transaction inserts new rows into that range, so re-reading the range produces phantom rows).

SERIALIZABLE

Deadlock: two or more transactions hold locks on the same resources and each requests locks held by the other, so none of them can ever commit. For example:

-- transaction 1
start transaction;
update user set coin = coin - 100 where id = 1;
update user set coin = coin + 100 where id = 2;
commit;

-- transaction 2
start transaction;
update user set coin = coin - 100 where id = 2;
update user set coin = coin + 100 where id = 1;
commit;

Choosing a storage engine

Transactions: whether transaction support is needed;

Backups: whether online hot backups are needed;

Crash recovery

Engine-specific features