Python Log Generator

Before we start
When building a real-time data-collection pipeline, a common headache is that you may suddenly find yourself without a live source of log data. A simple Python script that produces log data in real time solves this neatly.

Before writing any code, we need to know what our web-server logs actually look like. Below is a short excerpt of real logs from an nginx server, as a sample:

223.104.25.1 - - [21/Nov/2017:20:34:16 +0800] "GET / HTTP/1.1" 200 94 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0 Mobile/14G60 Safari/602.1" "-"
223.104.25.1 - - [21/Nov/2017:20:34:16 +0800] "GET / HTTP/1.1" 200 94 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0 Mobile/14G60 Safari/602.1" "-"
156.151.199.137 - - [21/Nov/2017:20:34:19 +0800] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36" "-"

Looking at the server logs above, the main fields are:
1. The client IP address, e.g. 156.151.199.137
2. The access time and time zone, e.g. [21/Nov/2017:20:34:19 +0800]
3. The HTTP status code
4. The user-agent string, and so on
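
To make these field positions concrete, here is a minimal parsing sketch (separate from the generator we are about to build) that pulls the main fields out of one such line with a regular expression; the pattern and group names are illustrative only:

# coding=utf-8
# Minimal sketch: extract the main fields from one nginx access-log line.
# The regex and group names here are illustrative, not a complete parser.
import re

LOG_PATTERN = re.compile(
	r'(?P<ip>\S+) - - \[(?P<time>[^\]]+)\] '
	r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+|-)')

line = ('156.151.199.137 - - [21/Nov/2017:20:34:19 +0800] '
	'"GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 ..." "-"')

m = LOG_PATTERN.match(line)
if m:
	print(m.group("ip"))       # 156.151.199.137
	print(m.group("time"))     # 21/Nov/2017:20:34:19 +0800
	print(m.group("request"))  # GET / HTTP/1.1
	print(m.group("status"))   # 304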

Next, let's develop the simulated log generator.

Approach
The Python log generator will produce records containing the request URL, IP address, referer, status code, and similar fields.
For the implementation, here is the Python code:

# coding=utf-8
# Simulated web-server log generator: writes fake access-log lines
# in the tab-separated format shown above.

import random
import time

# Candidate URL paths for the fake requests.
url_paths = [
	"class/154.html",
	"class/128.html",
	"class/147.html",
	"class/116.html",
	"class/138.html",
	"class/140.html",
	"learn/828",
	"learn/521",
	"course/list"
]

# Octets used to assemble random IP addresses.
ip_slices = [127, 156, 222, 105, 24, 192, 153, 127, 31, 168, 32, 10, 82, 77, 118, 228]

# Search-engine referer templates; {query} is filled with a keyword.
http_referers = [
	"http://www.baidu.com/s?wd={query}",
	"https://www.sogou.com/web?query={query}",
	"http://cn.bing.com/search?q={query}",
	"https://search.yahoo.com/search?p={query}",
]

# Search keywords (left in Chinese to match the sample output below).
search_keyword = [
	"Spark 项目实战",
	"Hadoop 项目实战",
	"Storm 项目实战",
	"Spark Streaming实战",
	"古诗词鉴赏"
]

status_codes = ["200", "404", "500", "503", "403"]

def sample_url():
	return random.choice(url_paths)

def sample_ip():
	# Pick four distinct octets and join them into a dotted address.
	octets = random.sample(ip_slices, 4)
	return ".".join(str(item) for item in octets)

def sample_referer():
	# Roughly 80% of requests carry no referer.
	if random.uniform(0, 1) > 0.2:
		return "-"

	refer_str = random.choice(http_referers)
	query_str = random.choice(search_keyword)
	return refer_str.format(query=query_str)

def sample_status_code():
	return random.choice(status_codes)

def generate_log(count=10):
	# All lines in a batch share one timestamp.
	time_str = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())

	# "w+" truncates the file on every run, so each batch replaces the
	# previous one (tail -f will report "file truncated" between batches).
	with open("/home/hadoop/data/project/logs/access.log", "w+") as f:
		while count >= 1:
			query_log = "{ip}\t{local_time}\t\"GET /{url} HTTP/1.1\"\t{status_code}\t{referer}".format(
				url=sample_url(), ip=sample_ip(), referer=sample_referer(),
				status_code=sample_status_code(), local_time=time_str)
			f.write(query_log + "\n")
			count = count - 1

if __name__ == '__main__':
	generate_log(10)




With that, the script can generate log data. Save it as generate_log.py, run it once with python generate_log.py, and inspect the result:

[hadoop@hadoop000 logs]$ more access.log 
105.228.77.82	2017-11-21 06:38:01	"GET /learn/828 HTTP/1.1"	200	-
31.10.153.77	2017-11-21 06:38:01	"GET /class/138.html HTTP/1.1"	200	-
77.156.153.105	2017-11-21 06:38:01	"GET /class/140.html HTTP/1.1"	503	http://www.baidu.com/s?wd=Storm 项目实战
222.32.228.77	2017-11-21 06:38:01	"GET /learn/521 HTTP/1.1"	404	https://www.sogou.com/web?query=Spark 项目实战
# partial output

Now that we can produce data, the next step is to produce it continuously in real time. For that we turn to crontab, the Linux job scheduler, which anyone who has worked with Linux will know; all we need is to write a schedule.
A handy site for testing crontab expressions:
https://tool.lu/crontab
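
For reference, the five time fields of a crontab entry are, from left to right, minute, hour, day of month, month, and day of week, so * * * * * means "run every minute" (the path below is just a placeholder):

# minute  hour  day-of-month  month  day-of-week  command
  *       *     *             *      *            /path/to/script.sh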

1) First, write a shell script for the scheduled job. Create a new .sh file:

[hadoop@hadoop000 project]$ vim log_generator.sh 
python /home/hadoop/data/project/generate_log.py

2) Once the script is written, make it executable (chmod +x log_generator.sh), since the crontab entries below invoke it directly, and then set up the schedule:

[hadoop@hadoop000 project]$ crontab -e
* * * * * /home/hadoop/data/project/log_generator.sh
* * * * * sleep 10; /home/hadoop/data/project/log_generator.sh
* * * * * sleep 20; /home/hadoop/data/project/log_generator.sh
* * * * * sleep 30; /home/hadoop/data/project/log_generator.sh
* * * * * sleep 40; /home/hadoop/data/project/log_generator.sh
* * * * * sleep 50; /home/hadoop/data/project/log_generator.sh

With that, our schedule is in place. Since cron fires at most once per minute, the staggered sleep offsets above give one run every 10 seconds, i.e. ten log lines are produced every 10 seconds.
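
As an aside, if you would rather not depend on cron, the same 10-second cadence can come from a small Python loop. Here is a minimal sketch, assuming the generator above was saved as generate_log.py in the same directory:

# coding=utf-8
# Alternative to crontab: emit one batch of 10 log lines every 10 seconds.
# Assumes the generator script above is saved as generate_log.py alongside
# this file, so generate_log() can be imported.
import time

from generate_log import generate_log

while True:
	generate_log(10)  # write a batch of 10 log lines
	time.sleep(10)    # pause 10 seconds before the next batch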

Verification:

[hadoop@hadoop000 logs]$ tail -f access.log 
222.153.118.82	2017-11-21 06:45:01	"GET /class/147.html HTTP/1.1"	403	-
127.192.168.31	2017-11-21 06:45:01	"GET /class/138.html HTTP/1.1"	200	-
77.31.153.127	2017-11-21 06:45:01	"GET /class/116.html HTTP/1.1"	403	https://search.yahoo.com/search?p=Spark Streaming实战
153.10.82.192	2017-11-21 06:45:01	"GET /class/147.html HTTP/1.1"	404	-
168.32.153.222	2017-11-21 06:45:01	"GET /learn/828 HTTP/1.1"	503	-
118.153.222.192	2017-11-21 06:45:01	"GET /class/128.html HTTP/1.1"	503	-
192.32.156.31	2017-11-21 06:45:01	"GET /class/147.html HTTP/1.1"	500	https://search.yahoo.com/search?p=Spark 项目实战
127.192.82.228	2017-11-21 06:45:01	"GET /class/154.html HTTP/1.1"	403	-
118.31.222.105	2017-11-21 06:45:01	"GET /learn/521 HTTP/1.1"	503	-
127.127.168.228	2017-11-21 06:45:01	"GET /class/140.html HTTP/1.1"	200	-
tail: access.log: file truncated
228.10.153.192	2017-11-21 06:56:01	"GET /class/147.html HTTP/1.1"	500	-
10.168.156.31	2017-11-21 06:56:01	"GET /course/list HTTP/1.1"	403	-
192.153.222.77	2017-11-21 06:56:01	"GET /class/154.html HTTP/1.1"	200	-
153.32.105.82	2017-11-21 06:56:01	"GET /course/list HTTP/1.1"	500	http://www.baidu.com/s?wd=Spark 项目实战

The above is a partial excerpt; you can see that a new batch of log data appears every 10 seconds.

From here on, we can use this log generator to produce the log data we need, in real time.

Original post: https://www.cnblogs.com/liuge36/p/9883016.html