Browsing the web is a skill everyone has: open a browser, type a URL, press Enter, and a polished page appears. But how does a request travel from the browser all the way to that page? In this post I build a small cluster modeled on the layered architecture commonly used in companies. The goal: type a URL into the browser and get the page back, with high availability at every layer; as long as at least one host in a layer is alive, the whole service stays up.
I. Environment
III. Docker
1. Installing Docker
I originally installed Docker on macOS (download link), but macOS cannot reach a Docker container's IP directly from the host (the official docs mention this; if you know a workaround, please let me know), so I ended up installing Docker on CentOS 7. I installed the CE edition (download and installation instructions).
2. Installing docker-compose
- Download it with curl
- Make the downloaded file executable
- Move docker-compose into /usr/bin so it can be run directly from the terminal
See the official installation docs for details.
3. Writing the Dockerfile
Once Docker is installed, write a Dockerfile that pulls the centos:7 image. Note that because we will use systemctl inside the containers later, the image needs special handling, as shown below:

```dockerfile
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
```
See the official documentation for the details.
IV. DNS
I plan to run two DNS servers, a master and a slave. The master holds the forward and reverse mappings between IPs and domain names; the slave synchronizes from it.
1. Writing docker-compose.yml
Assume the image built above is named centos with tag latest:

```yaml
version: "3"
services:
  dns_master:
    image: centos:latest
    container_name: dns_master
    hostname: dns_master
    privileged: true
    dns: 192.168.254.10
    networks:
      br0:
        ipv4_address: 192.168.254.10
  dns_slave:
    image: centos:latest
    container_name: dns_slave
    hostname: dns_slave
    privileged: true
    dns:
      - 192.168.254.10
      - 192.168.254.11
    networks:
      br0:
        ipv4_address: 192.168.254.11
networks:
  br0:
    driver: bridge
    ipam:
      driver: default
      config:
        -
          subnet: 192.168.254.0/24
```
As the docker-compose.yml shows, I chose the bridge network mode and assigned fixed IPs to the DNS master and slave.
Run `docker-compose up` in the directory containing docker-compose.yml to create the two containers, dns_master and dns_slave.
2. Configuring the DNS master
(1) Enter the dns_master container:

```shell
docker exec -it dns_master /bin/bash
```
(2) Install the BIND9 DNS packages:

```shell
yum install bind bind-utils -y
```
(3) Edit the configuration file named.conf:

```shell
vim /etc/named.conf
```
The parts surrounded by double asterisks (**) are only highlighted for emphasis; remove the asterisks in the actual configuration.

```
options {
    listen-on port 53 { 127.0.0.1; **192.168.254.10;** }; // Master DNS IP
    listen-on-v6 port 53 { ::1; };
    directory       "/var/named";
    dump-file       "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file  "/var/named/data/named.recursing";
    secroots-file   "/var/named/data/named.secroots";
    allow-query     { localhost; **192.168.254.0/24;** }; // IP ranges
    allow-transfer  { localhost; **192.168.254.11;** };   // Slave IP
    ......
    ....
zone "." IN {
    type hint;
    file "named.ca";
};
**
zone "yanggy.com" IN {
    type master;
    file "forward.yanggy"; // forward zone file
    allow-update { none; };
};
zone "254.168.192.in-addr.arpa" IN {
    type master;
    file "reverse.yanggy"; // reverse zone file
    allow-update { none; };
};
**
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
```
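With this configuration the slave pulls zone transfers on its refresh timer. To propagate changes faster, the master can also push NOTIFY messages; a minimal sketch of the extra options (assuming the slave sits at 192.168.254.11, as above):

```
options {
    ...
    notify yes;                       // send NOTIFY when a zone changes
    also-notify { 192.168.254.11; };  // explicitly notify the slave
};
```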
(4) Configure the forward zone file forward.yanggy:

```shell
vim /var/named/forward.yanggy
```

```
$TTL 86400
```
(5) Configure the reverse zone file:

```shell
vim /var/named/reverse.yanggy
```

```
$TTL 86400
```
(6) Check that the configuration is valid:

```shell
named-checkconf /etc/named.conf
named-checkzone yanggy.com /var/named/forward.yanggy
named-checkzone 254.168.192.in-addr.arpa /var/named/reverse.yanggy
```

The first command prints nothing on success; the other two print output containing OK on success.
(7) Enable and start the named service:

```shell
systemctl enable named
systemctl start named
```
(8) Fix the ownership and SELinux context of the relevant files:

```shell
chgrp named -R /var/named
chown -v root:named /etc/named.conf
restorecon -rv /var/named
restorecon /etc/named.conf
```
(9) Installation and configuration are done; test it:

```shell
dig masterdns.yanggy.com
```

```
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> masterdns.yanggy.com
```
(10) Exit the container and save it as the image dns_master; from now on dns_master runs from this image:

```shell
docker commit dns_master dns_master
```
3. Configuring the DNS slave
(1) Enter the container and install BIND:

```shell
yum install bind bind-utils -y
```
(2) Configure named.conf:

```
options {
    listen-on port 53 { 127.0.0.1; 192.168.254.11; };
    listen-on-v6 port 53 { ::1; };
    directory       "/var/named";
    dump-file       "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file  "/var/named/data/named.recursing";
    secroots-file   "/var/named/data/named.secroots";
    allow-query     { localhost; 192.168.254.0/24; };
    ....
    ....
zone "yanggy.com" IN {
    type slave;
    file "slaves/yanggy.fwd";
    masters { 192.168.254.10; };
};
zone "254.168.192.in-addr.arpa" IN {
    type slave;
    file "slaves/yanggy.rev";
    masters { 192.168.254.10; };
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
```
(3) Enable and start the DNS service:

```shell
systemctl enable named
systemctl start named
```
(4) Once the service starts, yanggy.fwd and yanggy.rev appear under /var/named/slaves/ automatically; no manual configuration is needed.
(5) Fix the ownership of the relevant files:

```shell
chgrp named -R /var/named
chown -v root:named /etc/named.conf
restorecon -rv /var/named
restorecon /etc/named.conf
```
(6) After configuring, test as above to confirm everything works.
(7) Exit the container and save it as the image dns_slave; from now on dns_slave runs from this image.
V. LVS + Keepalived
1. Add the following to the DNS docker-compose.yml (under the services: key) to create the LVS and OpenResty containers:

```yaml
  lvs01:
    image: centos:latest
    container_name: lvs01
    hostname: lvs01
    privileged: true
    dns:
      - 192.168.254.10
      - 192.168.254.11
    volumes:
      - /home/yanggy/docker/lvs01/:/home/yanggy/
      - /home/yanggy/docker/lvs01/etc/:/etc/keepalived/
    networks:
      br0:
        ipv4_address: 192.168.254.13
  lvs02:
    image: centos:latest
    container_name: lvs02
    hostname: lvs02
    privileged: true
    dns:
      - 192.168.254.10
      - 192.168.254.11
    volumes:
      - /home/yanggy/docker/lvs02/:/home/yanggy/
      - /home/yanggy/docker/lvs02/etc/:/etc/keepalived/
    networks:
      br0:
        ipv4_address: 192.168.254.14
  resty01:
    image: centos:latest
    container_name: resty01
    hostname: resty01
    privileged: true
    expose:
      - "80"
    dns:
      - 192.168.254.10
      - 192.168.254.11
    volumes:
      - /home/yanggy/docker/web/web01/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.15
  resty02:
    image: centos:latest
    container_name: web02
    hostname: web02
    privileged: true
    expose:
      - "80"
    dns:
      - 192.168.254.10
      - 192.168.254.11
    volumes:
      - /home/yanggy/docker/web/web02/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.16
```
2. Create the lvs01 and lvs02 containers:

```shell
docker-compose up
```
3. Enter the lvs01 container and install ipvsadm and keepalived:

```shell
yum install ipvsadm -y
yum install keepalived -y
```
4. Configure keepalived:

```shell
vim /etc/keepalived/keepalived.conf
```

```
! Configuration File for keepalived
```
As the configuration shows, the real servers (RS) are 192.168.254.15 and 192.168.254.16.
Configure the other LVS container the same way, except set router_id to LVS_02, state to BACKUP, and priority somewhat lower than the MASTER's, e.g. 100.
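The keepalived.conf listing above is truncated, so here is a hedged sketch of what a master-side configuration for this setup typically looks like (DR mode, round-robin, VIP 192.168.254.100; the interface name and health-check timeouts are assumptions, not values from the original article):

```
! Configuration File for keepalived
vrrp_instance VI_1 {
    state MASTER            # BACKUP on lvs02
    interface eth0          # assumed interface name inside the container
    virtual_router_id 51
    priority 150            # lvs02: lower, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.254.100
    }
}
virtual_server 192.168.254.100 80 {
    delay_loop 6
    lb_algo rr              # round-robin scheduling
    lb_kind DR              # direct routing
    protocol TCP
    real_server 192.168.254.15 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 192.168.254.16 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```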
Run the following on both LVS containers to start keepalived:

```shell
systemctl enable keepalived
systemctl start keepalived
```
5. Log in to the two RS containers above and write the following script, here named rs.sh.
It binds the VIP to lo:0 and suppresses ARP responses for it:

```shell
#!/bin/bash
ifconfig lo:0 192.168.254.100 broadcast 192.168.254.100 netmask 255.255.255.255 up
route add -host 192.168.254.100 dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &>/dev/null
```
Make the script executable and run it, then check the result with ifconfig.
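If these kernel settings should survive restarts, the same four values can also be placed in /etc/sysctl.conf (the DR-mode prerequisite: the RS must not answer ARP for the VIP bound on lo):

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```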
6. Log in to an LVS container.
Check the mapping state with ipvsadm (to get output like the figure below, OpenResty must already be installed and running in the 192.168.254.15 and 192.168.254.16 containers; see the OpenResty section):

```shell
ipvsadm -Ln
```
7. Test keepalived's health checking.
Stop the service at 192.168.254.16:80:

```shell
docker stop web02
```

Check the state with ipvsadm again: 192.168.254.16:80 has been removed, and subsequent requests are all forwarded to 192.168.254.15:80.
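The scheduling behavior just described can be sketched as a toy simulation (pure shell, no real LVS involved; `RS_POOL` is a stand-in for the ipvsadm real-server table):

```shell
#!/bin/sh
# Toy model of LVS round-robin (rr) scheduling: connection n goes to
# RS_POOL[n mod pool_size]. When keepalived's health check removes a dead
# real server, the pool shrinks and all traffic goes to the survivors.
RS_POOL="192.168.254.15 192.168.254.16"

pick_rs() {
    # $1: zero-based connection number; prints the chosen real server
    n=$1
    set -- $RS_POOL
    i=$(( n % $# + 1 ))
    eval echo \${$i}
}

pick_rs 0   # 192.168.254.15
pick_rs 1   # 192.168.254.16

RS_POOL="192.168.254.15"   # health check removed .16
pick_rs 1   # 192.168.254.15
```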
8. Exit the container and run docker commit lvs01 lvs to save it as the lvs image, then change the image value of lvs01 and lvs02 in docker-compose.yml to lvs:latest.
From now on, after the resty01 and resty02 containers start, the rs.sh script must be run manually.
VI. OpenResty
1. Create and start the resty01 container, then enter it.
I will not cover the installation here; see the instructions on the official site.
After installing, run the following in the user's home directory:

```shell
mkdir ~/work
cd ~/work
mkdir logs/ conf/
```
Then create an nginx.conf file under the conf directory with the following content:

```shell
[root@centos7 docker]# vim web/resty01/work/conf/nginx.conf
```

```
worker_processes 1;
```
192.168.254.17 and 192.168.254.18 are the IPs of the upstream web servers; the load-balancing method could also be weight-based or hash-based.
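The nginx.conf above is truncated; a plausible sketch for resty01, mirroring the resty02 configuration shown further down but pointing at the first upstream group (the group name web-group1 is an assumption):

```
worker_processes 1;
error_log logs/error.log;
events {
    worker_connections 1024;
}
http {
    upstream web-group1 {
        server 192.168.254.17:80 weight=1;
        server 192.168.254.18:80 weight=1;
    }
    server {
        listen 80;
        server_name 192.168.254.15;
        location / {
            proxy_pass http://web-group1;
        }
    }
}
```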
Exit the container, then run docker commit resty01 openresty to save it as the openresty image.
2. Edit docker-compose.yml and change the image of both resty01 and resty02 to openresty, then run docker-compose up. Once that succeeds, enter the resty02 container; OpenResty is already installed there, and the work, conf, and logs directories need to be created under the home directory in the same way.
Both resty01 and resty02 map directories from the host file system (see the volumes entries in docker-compose.yml), so resty01's configuration can simply be copied to resty02 before configuring it.
Edit nginx.conf:

```
worker_processes 1;
error_log logs/error.log;
events {
    worker_connections 1024;
}
http {
    upstream web-group2 {
        server 192.168.254.19:80 weight=1;
        server 192.168.254.20:80 weight=1;
        server 192.168.254.21:80 weight=1;
    }
    server {
        listen 80;
        server_name 192.168.254.16;
        location / {
            proxy_pass http://web-group2;
        }
    }
}
```
After configuring OpenResty, start nginx:

```shell
nginx -c /home/yanggy/work/conf/nginx.conf
```
Use netstat -nltp to check that something is listening on port 80.
nginx can also be registered as a system service so that it starts automatically with the container.
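A minimal sketch of such a systemd unit (the paths follow the layout used above, but the OpenResty nginx binary location is an assumption that depends on how OpenResty was installed):

```
# /etc/systemd/system/nginx.service
[Unit]
Description=OpenResty nginx
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/openresty/nginx/sbin/nginx -c /home/yanggy/work/conf/nginx.conf
ExecReload=/usr/local/openresty/nginx/sbin/nginx -s reload
ExecStop=/usr/local/openresty/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
```

After creating the unit, enable it with systemctl enable nginx.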
VII. Web Application
1. Add the following to docker-compose.yml:

```yaml
  web01:
    image: centos:latest
    container_name: web01
    hostname: web01
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web01/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web01/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.17
```
2. Create and start the web01 container.
Enter the container and install httpd:

```shell
yum install -y httpd
```
3. Edit the main configuration file:

```shell
vim /etc/httpd/conf/httpd.conf
```

Remove the # in front of ServerName and change the server name to Web01.
4. Create index.html:

```shell
cd /var/www/html/
echo "<h1>Web01</h1>" > index.html
```
5. Enable and start the service:

```shell
systemctl enable httpd
systemctl start httpd
```

6. Exit the container and save it as the image web.
7. Add the following to docker-compose.yml:

```yaml
  web02:
    image: web:latest
    container_name: web02
    hostname: web02
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web02/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web02/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.18
  web03:
    image: web:latest
    container_name: web03
    hostname: web03
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web03/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web03/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.19
  web04:
    image: web:latest
    container_name: web04
    hostname: web04
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web04/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web04/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.20
  web05:
    image: web:latest
    container_name: web05
    hostname: web05
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web05/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web05/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.21
```
8. Make four copies of the host directory mapped by web01, named web02, web03, web04, and web05 respectively, and change the httpd.conf and index.html in each to the corresponding server name.
9. Run docker-compose up to create containers web02 through web05; the web service inside them starts automatically.
VIII. Wiring DNS to LVS
The LVS section above configured a virtual IP, 192.168.254.100. Now I add it to the DNS servers so that typing the domain name resolves to this VIP.
1. Enter the dns_master container and edit the forward zone file:

```shell
[root@dns_master /]# vim /var/named/forward.yanggy
```
Add the forward mapping www.yanggy.com -> 192.168.254.100, using the alias webserver:

```
$TTL 86400
@   IN  SOA  masterdns.yanggy.com. root.yanggy.com. (
        2019011201  ;Serial
        3600        ;Refresh
        1800        ;Retry
        64800       ;Expire
        86400       ;Minimum TTL
)
@          IN  NS      masterdns.yanggy.com.
@          IN  NS      slavedns.yanggy.com.
@          IN  A       192.168.254.10
@          IN  A       192.168.254.11
@          IN  A       192.168.254.100
masterdns  IN  A       192.168.254.10
slavedns   IN  A       192.168.254.11
webserver  IN  A       192.168.254.100
www        IN  CNAME   webserver
```
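Remember to bump the Serial field whenever the zone changes, or the slave will not pick up the update. A common convention, which the serial above follows, is YYYYMMDDnn (date plus a two-digit same-day revision); a small helper that builds such serials:

```shell
#!/bin/sh
# Build a BIND zone serial in YYYYMMDDnn form from a date and a same-day
# revision counter (01, 02, ...). A numerically larger serial is what
# triggers the slave to transfer the updated zone.
gen_serial() {
    # $1: date as YYYYMMDD, $2: revision number within that day
    printf '%s%02d\n' "$1" "$2"
}

gen_serial 20190112 1   # 2019011201
```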
2. Edit the reverse zone file:

```shell
vim /var/named/reverse.yanggy
```

Add the reverse mapping 192.168.254.100 -> webserver.yanggy.com:

```
$TTL 86400
```
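In the reverse zone, PTR records map addresses back to names, and the zone itself is named after the IP with its octets reversed (hence 254.168.192.in-addr.arpa above). A small helper that derives the full in-addr.arpa name for an address:

```shell
#!/bin/sh
# Derive the in-addr.arpa name for an IPv4 address by reversing its octets,
# e.g. 192.168.254.100 -> 100.254.168.192.in-addr.arpa.
reverse_arpa() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo "$d.$c.$b.$a.in-addr.arpa"
}

reverse_arpa 192.168.254.100   # 100.254.168.192.in-addr.arpa
```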
IX. Summary
Add the DNS server IPs to the host's /etc/resolv.conf:

```
# Generated by NetworkManager
nameserver 192.168.254.10
nameserver 192.168.254.11
```
When you type www.yanggy.com into a browser, the DNS server first resolves it to 192.168.254.100; keepalived then forwards the request to one of the OpenResty nginx servers, which in turn forwards it to one of its upstream web application servers.
Both DNS and LVS are highly available: if one instance goes down, requests are still served. The OpenResty layer is also highly available: as long as DNS and LVS are up, requests are distributed to whichever OpenResty instances are alive; when one becomes unavailable, LVS removes it, and restores it when it recovers. Likewise, the web servers are highly available: OpenResty monitors the health of its upstream web application servers and removes or restores them dynamically.
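The whole chain can be sketched as a toy trace (pure shell, no network I/O; each function stands in for one layer, using the IPs from this article):

```shell
#!/bin/sh
# Toy trace of one request through the stack:
#   DNS -> LVS virtual IP -> OpenResty instance -> upstream web server.
resolve()    { echo 192.168.254.100; }  # DNS: www.yanggy.com -> VIP
lvs_pick()   { echo 192.168.254.15; }   # LVS/keepalived picks a live resty
resty_pick() { echo 192.168.254.17; }   # nginx upstream picks a web server

trace() {
    vip=$(resolve "$1")
    resty=$(lvs_pick "$vip")
    web=$(resty_pick "$resty")
    echo "$1 -> $vip -> $resty -> $web"
}

trace www.yanggy.com
# www.yanggy.com -> 192.168.254.100 -> 192.168.254.15 -> 192.168.254.17
```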
The complete docker-compose.yml:

```yaml
version: "3.7"
services:
  dns_master:
    image: dns_master:latest
    container_name: dns_master
    hostname: dns_master
    privileged: true
    dns:
      - 192.168.254.10
    volumes:
      - /home/yanggy/docker/dns/master/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.10
  dns_slave:
    image: dns_slave:latest
    container_name: dns_slave
    hostname: dns_slave
    privileged: true
    dns:
      - 192.168.254.10
      - 192.168.254.11
    volumes:
      - /home/yanggy/docker/dns/slave/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.11
  client:
    image: centos:latest
    container_name: client
    hostname: client
    privileged: true
    dns:
      - 192.168.254.10
      - 192.168.254.11
    volumes:
      - /home/yanggy/docker/client/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.12
  lvs01:
    image: lvs:latest
    container_name: lvs01
    hostname: lvs01
    privileged: true
    volumes:
      - /home/yanggy/docker/lvs01/:/home/yanggy/
      - /home/yanggy/docker/lvs01/etc/:/etc/keepalived/
    networks:
      br0:
        ipv4_address: 192.168.254.13
  lvs02:
    image: lvs:latest
    container_name: lvs02
    hostname: lvs02
    privileged: true
    volumes:
      - /home/yanggy/docker/lvs02/:/home/yanggy/
      - /home/yanggy/docker/lvs02/etc/:/etc/keepalived/
    networks:
      br0:
        ipv4_address: 192.168.254.14
  resty01:
    image: openresty:latest
    container_name: resty01
    hostname: resty01
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/resty01/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.15
  resty02:
    image: openresty:latest
    container_name: resty02
    hostname: resty02
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/resty02/:/home/yanggy/
    networks:
      br0:
        ipv4_address: 192.168.254.16
  web01:
    image: web:latest
    container_name: web01
    hostname: web01
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web01/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web01/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.17
  web02:
    image: web:latest
    container_name: web02
    hostname: web02
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web02/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web02/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.18
  web03:
    image: web:latest
    container_name: web03
    hostname: web03
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web03/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web03/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.19
  web04:
    image: web:latest
    container_name: web04
    hostname: web04
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web04/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web04/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.20
  web05:
    image: web:latest
    container_name: web05
    hostname: web05
    privileged: true
    expose:
      - "80"
    volumes:
      - /home/yanggy/docker/web/web05/conf/:/etc/httpd/conf/
      - /home/yanggy/docker/web/web05/www/:/var/www/
    networks:
      br0:
        ipv4_address: 192.168.254.21
networks:
  br0:
    driver: bridge
    ipam:
      driver: default
      config:
        -
          subnet: 192.168.254.0/24
```
X. Demo
1. Access with curl from the command line
2. Access from a browser
Try other scenarios yourself, such as stopping one OpenResty server or one web application server; the point is to verify that they are removed and restored automatically.
XI. Problems
1. macOS cannot reach containers directly
The official docs note that on macOS you cannot reach a container by IP directly from the Mac.
If you know how to make this work, please let me know.
2. ipvsadm -Ln reports that the ip_vs module does not exist
The cause may be that the host's Linux kernel does not support it, or that the host has not loaded the ip_vs module.
Check the host kernel version; generally anything above 3.10 is fine:

```shell
uname -a
```

Check whether the host has loaded the ip_vs module:

```shell
lsmod | grep ip_vs
```

If this prints nothing, load the module with modprobe, then check again with lsmod:

```shell
modprobe ip_vs
```
3. A problem with the architecture above
Because the bottom web application layer is partitioned into groups, if every web server in one group goes down, the OpenResty instance in front of that group must also be shut down; otherwise LVS will keep forwarding traffic to a group with no available web application servers.
References:
https://docs.docker.com/compose/compose-file/
https://www.unixmen.com/setting-dns-server-centos-7/
https://blog.csdn.net/u012852986/article/details/52412174
http://blog.51cto.com/12227558/2096280
https://hub.docker.com/_/centos/