Notes on a 1-Million-Concurrent-Connection Server: the Test Client Is Ready

Rewriting the test client

The test client needs the ability to bind specific local IP addresses (and local ports), so that it can push out as many outgoing TCP requests as possible. That means refactoring client1.c and adding command-line argument handling. The code for client2.c follows.

#include <sys/types.h>
#include <sys/time.h>
#include <sys/queue.h>
#include <stdlib.h>
#include <err.h>
#include <event.h>
#include <evhttp.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <time.h>
#include <pthread.h>

#define BUFSIZE 4096
#define SLEEP_MS 10

char buf[BUFSIZE];

int bytes_recvd = 0;
int chunks_recvd = 0;
int closed = 0;
int connected = 0;

/* comma-separated list of local IPs to use as source addresses */
static char ip_array[300] = "192.168.190.134,192.168.190.143,192.168.190.144,192.168.190.145,192.168.190.146,192.168.190.147,192.168.190.148,192.168.190.149,192.168.190.151,192.168.190.152";
static char server_ip[16] = "192.168.190.133";
static int server_port = 8000;
static int max_conns = 62000;

// called per chunk received
void chunkcb(struct evhttp_request *req, void *arg) {
    int s = evbuffer_remove(req->input_buffer, buf, BUFSIZE);
    bytes_recvd += s;
    chunks_recvd++;
    if (connected >= max_conns && chunks_recvd % 10000 == 0)
        printf(">Chunks: %d Bytes: %d Closed: %d\n",
               chunks_recvd, bytes_recvd, closed);
}

// gets called when a request completes (i.e. the connection closed)
void reqcb(struct evhttp_request *req, void *arg) {
    closed++;
}

int main(int argc, char **argv) {
    int ch;
    while ((ch = getopt(argc, argv, "o:h:p:m:")) != -1) {
        switch (ch) {
        case 'h':
            printf("host is %s\n", optarg);
            strncpy(server_ip, optarg, sizeof(server_ip) - 1);
            break;
        case 'p':
            printf("port is %s\n", optarg);
            server_port = atoi(optarg);
            break;
        case 'm':
            printf("max_conns is %s\n", optarg);
            max_conns = atoi(optarg);
            break;
        case 'o':
            printf("ori_ips is %s\n", optarg);
            strncpy(ip_array, optarg, sizeof(ip_array) - 1);
            break;
        }
    }

    event_init();
    struct evhttp_connection *evhttp_connection;
    struct evhttp_request *evhttp_request;
    char path[32];
    int i;

    char delims[] = ",";
    char *ori_ip = strtok(ip_array, delims);
    while (ori_ip != NULL) {
        // open max_conns connections from each local IP in turn
        for (i = 1; i <= max_conns; i++) {
            evhttp_connection = evhttp_connection_new(server_ip, server_port);
            evhttp_connection_set_local_address(evhttp_connection, ori_ip);
            evhttp_connection_set_timeout(evhttp_connection, 864000); // 10 day timeout
            evhttp_request = evhttp_request_new(reqcb, NULL);
            evhttp_request->chunk_cb = chunkcb;
            sprintf(path, "/test/%d", ++connected);

            if (i % 1000 == 0)
                printf("Req: %s -> %s\n", ori_ip, path);

            evhttp_make_request(evhttp_connection, evhttp_request,
                                EVHTTP_REQ_GET, path);
            event_loop(EVLOOP_NONBLOCK);

            if (connected % 1000 == 0)
                printf("Chunks: %d Bytes: %d Closed: %d\n",
                       chunks_recvd, bytes_recvd, closed);

            usleep(SLEEP_MS * 10); // brief pause (100 microseconds) between requests
        }

        ori_ip = strtok(NULL, delims);
    }

    event_dispatch();

    return 0;
}

If no local port is specified, the system picks an unused port at random, which saves some effort.
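Conceptually, evhttp_connection_set_local_address boils down to bind()ing the socket to the chosen source IP before connect(), with the port left at 0 so the kernel assigns a free ephemeral port. Here is a minimal sketch of that pattern at the plain socket level (the helper name and the error handling are illustrative, not part of client2.c):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to server_ip:server_port using local_ip as the source address.
 * Returns the connected fd, or -1 on error. Sketch only. */
int connect_from(const char *local_ip, const char *server_ip, int server_port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = inet_addr(local_ip);
    local.sin_port = htons(0);          /* 0 = kernel picks a free ephemeral port */
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in remote;
    memset(&remote, 0, sizeof(remote));
    remote.sin_family = AF_INET;
    remote.sin_addr.s_addr = inet_addr(server_ip);
    remote.sin_port = htons(server_port);
    if (connect(fd, (struct sockaddr *)&remote, sizeof(remote)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

Each additional local IP brings a fresh ephemeral port space against the same server ip:port, which is why binding multiple addresses lets one machine source many more connections.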

Compile:

gcc -o client2 client2.c -levent

Parameters

  • -h  IP address of the server to connect to
  • -p  port of the server to connect to
  • -m  number of connections (i.e. ephemeral local ports) to open from each local IP address
  • -o  comma-separated list of all available local IP addresses; note that every address in the list must actually be usable

Run:

./client2 -h 192.168.190.230 -p 8000 -m 64000 -o 192.168.190.134,192.168.190.143,192.168.190.144,192.168.190.145,192.168.190.146,192.168.190.147,192.168.190.148,192.168.190.149,192.168.190.150,192.168.190.151

The command line is too long to paste in every time, so put it into a client2.sh file; running the test then becomes much simpler.

#!/bin/sh
./client2 -h 192.168.190.230 -p 8000 -m 64000 -o 192.168.190.134,192.168.190.143,192.168.190.144,192.168.190.145,192.168.190.146,192.168.190.147,192.168.190.148,192.168.190.149,192.168.190.150,192.168.190.151

Save it and make it executable:

chmod +x client2.sh

Start the test:

sh client2.sh

The third problem encountered: fs.file-max

Once the test client client2.c has issued more than a certain number of requests (roughly 400,000), dmesg reports a flood of warnings:

[warn] socket: Too many open files in system

At that point it is time to check the /proc/sys/fs/file-max parameter.

The system documentation (man proc) describes fs.file-max as follows:

/proc/sys/fs/file-max
    This file defines a system-wide limit on the number of open files for all processes. (See also setrlimit(2), which can be used by a process to set the per-process limit, RLIMIT_NOFILE, on the number of files it may open.) If you get lots of error messages about running out of file handles, try increasing this value:

        echo 100000 > /proc/sys/fs/file-max

    The kernel constant NR_OPEN imposes an upper limit on the value that may be placed in file-max.
    If you increase /proc/sys/fs/file-max, be sure to increase /proc/sys/fs/inode-max to 3-4 times the new value of /proc/sys/fs/file-max, or you will run out of inodes.

file-max is the system-wide hard limit on the number of file handles that all processes together may hold open at the same time. Linux computes a suitable value at boot from the machine's hardware resources, and unless you have a special need there is no reason to change it, until the number of open file handles actually runs up against it.

With 4 GB of RAM allocated to the test machine, the corresponding fs.file-max value is 386562, so open file handles are clearly capped, at about 380,000. Both the test clients and the server should raise this value, and it must be at least as large as the soft nofile and hard nofile values set in /etc/security/limits.conf.
Note that ulimit -n only applies to the current shell and the processes it starts.

Note: these limits nest: the per-process limits above must fit within the system-wide limit.
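Since the per-process nofile limit must fit under fs.file-max, it helps to watch both layers while the test ramps up. The sketch below (an illustrative helper, not part of the article's code) reads the per-process limit via getrlimit(2) and the system-wide counters from /proc/sys/fs/file-nr, whose three fields are allocated handles, free handles, and the file-max ceiling:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* per-process limit: the soft/hard nofile values */
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("process : soft=%lu hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    /* system-wide: allocated handles, free handles, file-max ceiling */
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");
    if (f) {
        unsigned long alloc, freed, max;
        if (fscanf(f, "%lu %lu %lu", &alloc, &freed, &max) == 3)
            printf("system  : allocated=%lu free=%lu file-max=%lu\n",
                   alloc, freed, max);
        fclose(f);
    }
    return 0;
}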

To change it for the current session:

echo 1048576 > /proc/sys/fs/file-max

but the setting disappears after a reboot.

To make the change permanent, add it to /etc/sysctl.conf:

fs.file-max = 1048576

Save and apply:

sysctl -p

Re-run the test and the warning no longer appears.

A test machine with 6 GB of RAM and 7 network interfaces can issue 64000 * 7 = 448000 outgoing persistent requests without dipping into swap. Reaching the goal of 1 million persistent connections still calls for more machines.
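The 64000-per-interface figure falls out of TCP itself: for a fixed (local IP, server IP, server port) triple, every connection needs a distinct source port, so one local IP can source at most as many connections as the ephemeral port range is wide. A quick illustrative check, assuming net.ipv4.ip_local_port_range has already been widened (e.g. to 1024 65535) as part of the earlier tuning:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    if (!f) { perror("fopen"); return 1; }

    int lo, hi;
    if (fscanf(f, "%d %d", &lo, &hi) == 2)
        /* ports available per (local IP, server ip:port) pair */
        printf("ephemeral range %d-%d -> at most %d connections per local IP\n",
               lo, hi, hi - lo + 1);
    fclose(f);
    return 0;
}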

The final test-client setup is:

  • Two physical machines with one NIC each, each issuing 64000 requests
  • Two CentOS test machines with about 6 GB of RAM each (7 bridged or NAT interfaces bound), each issuing 64000*7 = 448000 requests
  • 16 NICs in total (physical + virtual)
  • 1M ≈ 1024K ≈ 1024000 = (64000) + (64000) + (64000*7) + (64000*7)
  • Altogether: about 16 GB of RAM, 16 NICs (physical + virtual), four test machines

Note:
The next step is to hit the 1M persistent-connection target, but one last problem still awaits on the server side.

Original article: https://www.cnblogs.com/hzcya1995/p/13318390.html