Container Virtualization with LXC (LinuX Container)


HAL-level virtualization
VMware
Xen
Hyper-V
Qemu
KVM


OS-level virtualization
containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with openvz, vserver and more recently lxc, Solaris with zones and FreeBSD with Jails.
FreeBSD
http://www.freebsd.org/doc/handbook/jails.html
Solaris
http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc
Linux
openvz http://openvz.org/Main_Page
vserver http://linux-vserver.org/Welcome_to_Linux-VServer.org
lxc https://linuxcontainers.org/




This article mainly covers how to use LXC containers.

Environment:
CentOS6.5 x64
lxc-0.7.5

Official sites:
https://linuxcontainers.org
http://libvirt.org/drvlxc.html
References:
http://wiki.1tux.org/wiki/Centos6/Installation/Minimal_installation_using_yum
http://wiki.1tux.org/wiki/Lxc/Installation/Guest/Centos/6
http://17173ops.com/2013/11/14/linux-lxc-install-guide.shtml
http://www.ibm.com/developerworks/cn/linux/l-lxc-containers/
http://blog.csdn.net/quqi99/article/details/9532105
http://blog.sina.com.cn/s/blog_999d1f4c0101dxad.html

Introduction:
LXC is short for Linux Containers, a container-based, operating-system-level virtualization technology.
LXC provides processes with a virtual execution environment at the operating-system level; each such virtual execution environment is a container. A container can be bound to specific CPU and memory nodes, be given a specific share of CPU time and I/O time, be limited in how much memory it may use (both RAM and swap), have its device access controlled, and have its own independent namespaces (network, pid, ipc, mnt, uts).
LXC involves three main aspects: cgroup, network and rootfs.
    Cgroups (control groups) is a mechanism provided by the Linux kernel to limit, account for and isolate the physical resources (CPU, memory, I/O and so on) used by groups of processes. LXC relies on cgroups for resource management and control; a minimal sketch follows after this list.
    Network is the network environment configured for the container.
    rootfs specifies the container's virtual root directory; once it is set, every process inside the container treats that directory as its root and cannot access paths outside it, much like chroot.
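To make the cgroup part concrete, the same kind of resource limiting can be exercised by hand through the cgroup v1 filesystem. A minimal sketch, assuming the hierarchy is mounted at /cgroup as is done later in this article; the group name demo and the 64 MB figure are arbitrary illustration values:

mkdir /cgroup/demo                                   # create a child cgroup
echo 67108864 > /cgroup/demo/memory.limit_in_bytes   # cap memory at 64 MB
echo $$ > /cgroup/demo/tasks                         # move the current shell (and its children) into the group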
Life cycle and states
   CONTAINER LIFE CYCLE
       When the container is created, it contains the configuration information. When a process is launched, the container will be starting and running. When the last process running inside the  container  exits,
       the container is stopped.

       In case of failure when the container is initialized, it will pass through the aborting state.
In addition, the lxc-wait help shows that there are seven states: STOPPED, STARTING, RUNNING, STOPPING, ABORTING, FREEZING, FROZEN.
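These states can be inspected and waited on from the host. A small sketch, assuming a container named host1 such as the one created later in this article:

lxc-info -n host1                        # print the current state and pid
lxc-wait -n host1 -s RUNNING             # block until the container reaches RUNNING
lxc-wait -n host1 -s 'STOPPED|ABORTING'  # wait for any of several states, ORed with |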

There are two known solutions that implement LXC management: liblxc and libvirt.


************
libvirt
************

libvirt is the virtualization library on Linux, a long-term stable C API that supports mainstream virtualization back ends such as QEMU/KVM, Xen and LXC.
Managing an LXC container with it works the same way as managing a KVM virtual machine.
I. Application containers
1. Define the LXC domain XML
root@jun-live:~#cat lxc-test.xml
(the lxc-test.xml contents are shown as a screenshot in the original post)
Only the minimal configuration for a usable container is defined here; when it is imported, the system automatically generates a detailed configuration file, stored under /etc/libvirt/lxc by default.
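For reference, a minimal application-container definition for the libvirt LXC driver might look like the sketch below; the memory size and the /bin/sh init are assumptions, not the exact contents of the screenshot:

<domain type='lxc'>
  <name>lxc-test</name>
  <memory>102400</memory>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>   <!-- application container: run a single process -->
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>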
2. Import (define) the LXC container
root@jun-live:~#virsh -c lxc:/// define lxc-test.xml
Domain lxc-test defined from lxc-test.xml
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
     lxc-test                       shut off
root@jun-live:~#ls /etc/libvirt/lxc/
lxc-test.xml
To modify the configuration file, edit it through virsh, which invokes VIM:
root@jun-live:~#virsh -c lxc:/// edit lxc-test
3. Start the application instance
root@jun-live:~#virsh -c lxc:/// start lxc-test
Domain lxc-test started
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
 22157 lxc-test                       running
4. Connect to the application instance
You can attach to it from a text console:
root@jun-live:~#virsh -c lxc:/// console lxc-test
Connected to domain lxc-test
Escape character is ^]
sh-4.1# ls
bin    dev   lib      media  net   root  selinux  sys  var
boot    etc   lib64      misc     opt   run   smb      tmp
cgroup    home  lost+found  mnt     proc  sbin  srv      usr

sh-4.1# pwd
/
As you can see, LXC's isolation is not that good. Isolation of CPU, memory and network is reasonable, but storage isolation is weak; with an application container in particular, you can even reach the host's root directory from inside the container. You can of course build your own root filesystem and use an OS container for isolation, but LXC offers little help there and everything has to be done by hand, which is what gave rise to the docker and dockerlite projects that appear to fill these gaps.
       dockerlite uses LXC for runtime resource isolation and uses Btrfs snapshots for state preservation and cloning of virtual environments. "Lightweight virtualization" here also means OS-level virtualization: with support from the kernel and user-space process groups, you get an isolated runtime environment with its own network IP, process tree and so on, much like a virtual machine, but running the same kernel as the host. The differences between dockerlite and docker (which is written in Go) include:
    dockerlite is implemented in shell script, while docker is written in Go.
    dockerlite uses the Btrfs filesystem, while docker uses AUFS.
    docker runs as a background daemon and is driven through a command-line client; dockerlite cannot run as a daemon.

II. OS containers
Build a custom minimal chroot environment
A. Install selected packages into a dedicated chroot directory
root@jun-live:~#mkdir /lxc-root
root@jun-live:~#setarch x86_64 bash
root@jun-live:~#yum -y install --installroot=/lxc-root/ dhclient openssh-server passwd rsyslog vim-minimal vixie-cron wget which yum
or:
yum -y --installroot=/lxc-root groupinstall "base"
yum -y --installroot=/lxc-root install dhclient
Enter the chroot environment:
root@jun-live:~#chroot /lxc-root/
bash-4.1# pwd
/
bash-4.1# ls
bin   dev  home  lib64    mnt  proc  sbin     srv  tmp  var
boot  etc  lib     media    opt  root  selinux  sys  usr

1. Set the root password
bash-4.1# echo redhat|passwd --stdin root
2. Create the required device nodes with the appropriate permissions
bash-4.1# rm -rf /dev/null
bash-4.1# mknod -m 666 /dev/null c 1 3
bash-4.1# mknod -m 666 /dev/zero c 1 5
bash-4.1# mknod -m 666 /dev/urandom c 1 9
bash-4.1# mknod -m 600 /dev/console c 5 1
bash-4.1# mknod -m 600 /dev/tty1 c 4 1
bash-4.1# mknod -m 600 /dev/tty2 c 4 2
bash-4.1# mknod -m 600 /dev/tty3 c 4 3
bash-4.1# mknod -m 600 /dev/tty4 c 4 4
bash-4.1# mknod -m 600 /dev/tty5 c 4 5
bash-4.1# mknod -m 600 /dev/tty6 c 4 6
bash-4.1# mknod -m 600 /dev/tty7 c 4 7

bash-4.1# ln -s /dev/urandom /dev/random
bash-4.1# chown root:tty /dev/tty*
bash-4.1# mkdir -p /dev/shm
bash-4.1# chmod 1777 /dev/shm
bash-4.1# mkdir -p /dev/pts
bash-4.1# chmod 755 /dev/pts
3. Initialize root's login shell environment
bash-4.1# cp -a /etc/skel/. /root/
4. Set the hostname and network configuration
bash-4.1# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
bash-4.1# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=lxc-centos6
bash-4.1# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
5. Create a minimal /etc/fstab
bash-4.1# cat /etc/fstab
/dev/root               /                       rootfs   defaults        0 0
none                    /dev/shm                tmpfs    nosuid,nodev    0 0

6. Create an LXC-compatible upstart init job
bash-4.1# cat /etc/init/lxc-sysinit.conf
start on startup
env container

pre-start script
        if [ "x$container" != "xlxc" -a "x$container" != "xlibvirt" ]; then
                stop;
        fi
        initctl start tty TTY=console
        rm -f /var/lock/subsys/*
        rm -f /var/run/*.pid
        telinit 3
        exit 0;
end script

7. Exit the chroot
bash-4.1# exit
exit
Tip: for mass deployment you can package the chroot environment:
root@jun-live:~#tar -jcvf lxc-rootfs-centos6_x64.tar.bz2 /lxc-root/.
8. Define and run the LXC instance
root@jun-live:~#cp lxc-test.xml lxc-root.xml
(the lxc-root.xml contents are shown as a screenshot in the original post)
Only the name, init and source lines need to be changed so that they point to the right locations.
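A hedged sketch of what lxc-root.xml could look like after those changes; the memory size is an illustrative assumption:

<domain type='lxc'>
  <name>lxc-root</name>
  <memory>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>          <!-- OS container: boot the distribution's init -->
  </os>
  <devices>
    <filesystem type='mount'>
      <source dir='/lxc-root'/>      <!-- the chroot built above -->
      <target dir='/'/>
    </filesystem>
    <console type='pty'/>
  </devices>
</domain>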
root@jun-live:~#virsh -c lxc:/// define lxc-root.xml
Domain lxc-root defined from lxc-root.xml
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
     lxc-root                       shut off
root@jun-live:~#virsh -c lxc:/// start lxc-root
Domain lxc-root started
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
 2330  lxc-root                       running

B. Directly reuse a minimally installed KVM virtual machine image
root@jun-live:~#losetup /dev/loop20 /var/lib/libvirt/images/lxc-rhel6.raw
root@jun-live:~#kpartx -a /dev/loop20
root@jun-live:~#mount /dev/mapper/loop20p1 /lxc-kvm/
root@jun-live:~#cp lxc-root.xml lxc-kvm.xml
root@jun-live:~#vim lxc-kvm.xml
Again, only the name, init and source lines need to be changed so that they point to the right locations.
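Roughly, the elements that differ would be the following (assuming the image's first partition is mounted at /lxc-kvm as shown above):

<name>lxc-kvm</name>
<init>/sbin/init</init>
<source dir='/lxc-kvm'/>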

root@jun-live:~#virsh -c lxc:/// define lxc-kvm.xml
Domain lxc-kvm defined from lxc-kvm.xml
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
 2330  lxc-root                       running
     lxc-kvm                        shut off
root@jun-live:~#virsh -c lxc:/// start lxc-kvm
Domain lxc-kvm started
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
 2330  lxc-root                       running
 4337  lxc-kvm                        running

5. Delete the container
root@jun-live:~#virsh -c lxc:/// destroy lxc-kvm
Domain lxc-kvm destroyed
root@jun-live:~#virsh -c lxc:/// undefine lxc-kvm
Domain lxc-kvm has been undefined
root@jun-live:~#virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------




************
liblxc
************

I. Install the build dependencies
[root@ipa-server ~]# yum -y install gcc libcap-devel libcgroup

II. Download and install
Notes:
After installing version 1.0.7, a symlink has to be created, otherwise the tools fail with a "liblxc.so.1 not found" error:
[root@ipa-server ~]# lxc-info
lxc-info: error while loading shared libraries: liblxc.so.1: cannot open shared object file: No such file or directory
[root@ipa-server ~]# ln -s /usr/local/lib/liblxc.so.1 /usr/lib64/
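Alternatively (a common equivalent fix, not from the original article), register /usr/local/lib with the dynamic linker instead of symlinking:

echo /usr/local/lib > /etc/ld.so.conf.d/local-lxc.conf   # file name is arbitrary
ldconfig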
Versions >= 0.8.0 (for example 1.0.7, the latest stable release as of 2015-01-19) fail when running lxc-create or other operations with:
lxc_container: lxc_create.c: main: 271 Error creating container test
Both rhel6.5_x64 and rhel7.0_x64 were tested and report the same error; no fix has been found so far, but version 0.7.5 works fine.

So lxc-0.7.5 is used as the example here.
root@jun-live:~#wget https://linuxcontainers.org/downloads/lxc/lxc-0.7.5.tar.gz --no-check-certificate
[root@ipa-server ~]# tar -xvf lxc-0.7.5.tar.gz -C /usr/local/src/
[root@ipa-server lxc-0.7.5]# ./configure && make && make install
Additionally, you can build RPM packages:
root@jun-live:lxc-0.7.5#yum -y install rpm-build docbook-utils
root@jun-live:lxc-0.7.5#./configure && make rpm
... ...
Wrote: /root/rpmbuild/SRPMS/lxc-0.7.5-1.src.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/lxc-0.7.5-1.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/lxc-devel-0.7.5-1.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.9ZbafW
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd lxc-0.7.5
+ rm -rf /root/rpmbuild/BUILDROOT/lxc-0.7.5-1.x86_64
+ exit 0
Executing(--clean): /bin/sh -e /var/tmp/rpm-tmp.ByFPNZ
+ umask 022
+ cd /root/rpmbuild/BUILD
+ rm -rf lxc-0.7.5
+ exit 0

After installation, check the version and verify that the Linux kernel supports LXC:
root@jun-live:~#lxc-version
lxc version: 0.7.5
root@jun-live:~#lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-431.el6.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig


III. Create a container
Note: a complete LXC container needs the following two files and one directory:
        config
        fstab
        rootfs
    config format: key = value
    lxc.network.type = veth
    config keys by category:
        ARCHITECTURE
            lxc.arch # architecture type, usually x86_64
        HOSTNAME
            lxc.utsname # name (hostname) of this LXC instance
        NETWORK
            lxc.network.type # network type
            lxc.network.flags # has no real effect: only up is supported and it is also the default
            lxc.network.link # host network device to bridge to (when lxc.network.type is veth)
            lxc.network.name # interface name inside the instance (at most 15 characters)
            lxc.network.hwaddr # MAC address of the instance's interface
            lxc.network.veth.pair # name of the device visible on the host, shown by ifconfig
            lxc.network.ipv4 # IPv4 address of the instance
            lxc.network.ipv6 # IPv6 address of the instance
            lxc.network.script.up
        NEW PSEUDO TTY INSTANCE (DEVPTS)
            lxc.pts # number of pseudo-terminals
        CONTAINER SYSTEM CONSOLE
            lxc.console # file that console output is logged to; can also be given with the -c option of lxc-start
        CONSOLE THROUGH THE TTYS
            lxc.tty # number of ttys
        MOUNT POINTS
            lxc.mount # path to an fstab file
            lxc.mount.entry # fstab entries written inline, as an alternative to pointing lxc.mount at a file
        ROOT FILE SYSTEM
            lxc.rootfs # path to the rootfs
            lxc.rootfs.mount
            lxc.pivotdir
        CONTROL GROUP
            lxc.cgroup.subsystem # cgroup settings (see the sketch after this list)
        CAPABILITIES
            lxc.cap.drop # Linux capabilities to drop from the container
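The lxc.cgroup.* keys map directly onto the files of the corresponding cgroup subsystem. A couple of illustrative entries (the values are assumptions, not taken from the configuration used below):

lxc.cgroup.cpuset.cpus = 0-1                   # pin the container to CPUs 0 and 1
lxc.cgroup.memory.limit_in_bytes = 536870912   # cap its memory at 512 MB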

1. Turn off the host's control group services and mount the cgroup filesystem
root@jun-live:~#chkconfig cgconfig off
root@jun-live:~#chkconfig cgred off
root@jun-live:~#chkconfig --list cgconfig
cgconfig           0:off    1:off    2:off    3:off    4:off    5:off    6:off
root@jun-live:~#chkconfig --list cgred
cgred              0:off    1:off    2:off    3:off    4:off    5:off    6:off
root@jun-live:~#/etc/init.d/cgconfig status
Stopped
root@jun-live:~#/etc/init.d/cgred status
cgred is stopped
root@jun-live:~#echo "none /cgroup cgroup defaults 0 0" >>/etc/fstab
root@jun-live:~#mount -a
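A quick sanity check that the cgroup hierarchy is now available (not part of the original steps):

mount | grep cgroup   # should show a cgroup filesystem mounted on /cgroup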
2. Prepare the rootfs
root@jun-live:~#cd /var/lib/libvirt/lxc/
root@jun-live:lxc#mkdir -p host1/rootfs
root@jun-live:~#tar -xvf lxc-rootfs-centos6_x64.tar.bz2 -C /var/lib/libvirt/lxc/host1/rootfs/
root@jun-live:~#ls /var/lib/libvirt/lxc/host1/rootfs/
bin/   cgroup/  etc/   lib/    media/  opt/   root/  selinux/  sys/  usr/
boot/  dev/     home/  lib64/  mnt/    proc/  sbin/  srv/      tmp/  var/

root@jun-live:~#cat /var/lib/libvirt/lxc/host1/fstab
proc            /var/lib/libvirt/lxc/host1/rootfs/proc         proc  nodev,noexec,nosuid 0 0
sysfs           /var/lib/libvirt/lxc/host1/rootfs/sys          sysfs defaults  0 0
tmpfs           /var/lib/libvirt/lxc/host1/rootfs/dev/shm      tmpfs defaults  0 0
3. Write the LXC configuration file
The LXC source tree already ships a number of example configurations of different types:
root@jun-live:~#ls /usr/local/src/lxc-0.7.5/doc/examples/
lxc-complex.conf         lxc-macvlan.conf.in   lxc-veth.conf     Makefile.am
lxc-complex.conf.in      lxc-no-netns.conf     lxc-veth.conf.in  Makefile.in
lxc-empty-netns.conf     lxc-no-netns.conf.in  lxc-vlan.conf
lxc-empty-netns.conf.in  lxc-phys.conf         lxc-vlan.conf.in
lxc-macvlan.conf         lxc-phys.conf.in      Makefile
root@jun-live:~#cat /usr/local/etc/lxc/lxc-liblxc.conf
lxc.network.type=veth
lxc.network.link=br0
lxc.network.flags=up
lxc.network.hwaddr= 00:16:3e:77:52:20
lxc.network.veth.pair= veth
lxc.utsname = foo


lxc.tty = 1
lxc.pts = 1024
lxc.rootfs = /var/lib/libvirt/lxc/host1/rootfs
lxc.mount  = /var/lib/libvirt/lxc/host1/fstab

#lxc.arch = i686
lxc.arch = x86_64
lxc.cap.drop = sys_module mac_admin

lxc.cgroup.devices.deny = a
# Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
#fuse
lxc.cgroup.devices.allow = c 10:229 rwm
#tun
lxc.cgroup.devices.allow = c 10:200 rwm
#full
lxc.cgroup.devices.allow = c 1:7 rwm
#hpet
lxc.cgroup.devices.allow = c 10:228 rwm
#kvm
lxc.cgroup.devices.allow = c 10:232 rwm

4. Create the container
root@jun-live:~#lxc-create -n host1 -f /usr/local/etc/lxc/lxc-liblxc.conf
'host1' created
root@jun-live:~#lxc-ls
host1
root@jun-live:~#lxc-start -n host1
CentOS release 6.5 (Final)
Kernel 2.6.32-431.el6.x86_64 on an x86_64

host1 login: init: rcS main process (7) killed by TERM signal
Bringing up loopback interface:                          [  OK  ]
Bringing up interface eth0:                              [  OK  ]
Starting system logger:                                  [  OK  ]
Mounting filesystems:                                    [  OK  ]
Retrigger failed udev events                             [  OK  ]
Starting sshd:                                           [  OK  ]
Starting sendmail:                                       [  OK  ]
Starting sm-client: No such file or directory
                                                         [  OK  ]
Starting crond:                                          [  OK  ]

CentOS release 6.5 (Final)
Kernel 2.6.32-431.el6.x86_64 on an x86_64

host1 login:
Tip: add the -d option to start the container in the background.
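For example, with the container created above (this command is not in the original transcript):

lxc-start -n host1 -d   # daemonized start; attach later with lxc-console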

Common management commands
root@jun-live:~#lxc-info -n host1
state:   RUNNING
pid:     24509
root@jun-live:~#lxc-monitor -n host1
'host1' changed state to [STARTING]
'host1' changed state to [RUNNING]
root@jun-live:~#lxc-ps
CONTAINER    PID TTY          TIME CMD
           24297 pts/13   00:00:00 bash
           24974 pts/13   00:00:00 lxc-ps
           24975 pts/13   00:00:00 ps
root@jun-live:~#lxc-netstat -n host1
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State     
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags       Type       State         I-Node Path
unix      [ ]         DGRAM                    115061 /dev/log
unix      [ ]         DGRAM                    115278
unix      [ ]         DGRAM                    115252
unix      [ ]         DGRAM                    115207
root@jun-live:~#lxc-cgroup -n host1 cpuset.cpus
0-3
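lxc-cgroup can also write values rather than just read them; for instance (the values are illustrative):

lxc-cgroup -n host1 cpuset.cpus 0-1                   # restrict the container to CPUs 0 and 1
lxc-cgroup -n host1 memory.limit_in_bytes 536870912   # cap its memory at 512 MB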
root@jun-live:~#lxc-console -n host1
Type Ctrl a + q to exit the console


CentOS release 6.5 (Final)
Kernel 2.6.32-431.el6.x86_64 on an x86_64

login: root
Password:
Would you like to enter a security context? [N]  Y
role: root
[root@host1 ~]# ls
[root@host1 ~]# pwd
/root
[root@host1 ~]# cd /
[root@host1 /]# ls
bin   cgroup  etc   lib    media  opt   root  selinux  sys  usr
boot  dev     home  lib64  mnt    proc  sbin  srv      tmp  var
Note: to leave the tty console, press Ctrl+a then q, similar to detaching from screen.

Original article: https://www.cnblogs.com/lixuebin/p/10814443.html