How to analyze Apache access.log with AWStats?

I use VirtualServer, which provides an AWStats report to help the administrator analyze the web server's visit statistics. In my case, I want a single static HTML file that is easy to share 🙂

First of all, you must edit your awstats.model.conf correctly!

In my case, I changed the following:

# i changed LogFormat to 4 (the default is 1)
LogFormat=4

# change LogFile to the full path of your access.log file
LogFile="[full path to your access.log]"

Then you can run the commands below to generate the HTML report file:

$ cd awstats-7.7\wwwroot\cgi-bin
$ perl awstats.pl -config=model -output -staticlinks > yourserver.html

When your access.log file covers a long time span, you can use the sed command to extract only the lines you want to analyze:

$ sed -n '[start line number],[end line number]p' access.log > access_sed.log
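If you would rather slice by date than by line number, a small Python sketch can do the same job. It assumes the standard common/combined log timestamp, e.g. `[10/Oct/2000:13:55:36 -0700]`; the filenames are placeholders.

```python
# keep only the access.log lines whose request date falls inside a range;
# assumes the standard common/combined log timestamp, e.g. [10/Oct/2000:13:55:36 -0700]
import re
from datetime import datetime

TS_RE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):')

def in_range(line, start, end):
    """Return True if the line's request date falls inside [start, end]."""
    m = TS_RE.search(line)
    if not m:
        return False
    day = datetime.strptime(m.group(1), '%d/%b/%Y')
    return start <= day <= end

def filter_log(src, dst, start, end):
    with open(src) as fin, open(dst, 'w') as fout:
        for line in fin:
            if in_range(line, start, end):
                fout.write(line)
```

For example, `filter_log('access.log', 'access_sed.log', datetime(2019, 5, 1), datetime(2019, 5, 7))` keeps one week of traffic.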


Some images in the static HTML can't be shown; I will figure that out later.

For more details, please see the AWStats documentation.

How to limit CPU usage with Node.js?

I want to introduce node-cpulimit in this post. When I use puppeteer, some pages take a long time to load, and the CPU usage spikes instantly. So I went looking for a tool to help limit the CPU usage.

A very simple example using cpulimit and child_process:

const limiter = require('cpulimit');
const { spawn } = require('child_process');

// Run the command we want to limit
const node = spawn('node', ['/root/puppeteer.js']);

// Run cpulimit on that process and its children
const options = {
  limit: 50, // CPU percentage, or any other value
  includeChildren: true,
  pid: node.pid, // the root process of the family to limit
};

limiter.createProcessFamily(options, (err, processFamily) => {
  if (err) {
    console.error('Error:', err.message);
    return;
  }

  limiter.limit(processFamily, options, (err) => {
    if (err) {
      console.error('Error:', err.message);
    } else {
      console.log('cpulimit applied.');
    }
  });
});

For more details, please see the node-cpulimit repository.

Get player data and matches via the PUBG API

A programmer who does not know data analysis is not a good PUBG player. 🙂

I will show some utility functions below.

import requests

PUBG_API_TOKEN = "[your API token]"

# this function gets the player data by PUBG player name and returns the JSON data
def get_player_data(name):
    headers = {
        "Authorization": "Bearer {}".format(PUBG_API_TOKEN),
        "Accept": "application/vnd.api+json",
    }
    payload = {"filter[playerNames]": name}
    # the "steam" shard is assumed here; change it to your platform
    result = requests.get("https://api.pubg.com/shards/steam/players",
                          params=payload, headers=headers)
    return result.json()

# this function gets the match data by match id and returns the match data JSON
def get_matches_by_id(id):
    headers = {
        "Accept": "application/vnd.api+json",
    }
    result = requests.get("https://api.pubg.com/shards/steam/matches/{}".format(id),
                          headers=headers)
    return result.json()

This post just shows the utility functions. The next post will show how to use them and extract the data!

Download GoProxy via the Vultr VPS startup script and open a socks proxy

As we know, goproxy only keeps the latest release on GitHub and deletes the old releases. What's more, goproxy checks for updates when the proxy program starts.

Imagine your VPS reboots: you have to download the latest goproxy again and restart the socks proxy. So you need a shell script that runs at VPS boot!

Here is the shell script. It queries the goproxy GitHub repository for the latest linux release download link, then downloads goproxy, extracts it, and runs it.


url=$(curl -s -H "Accept: application/json" https://api.github.com/repos/snail007/goproxy/releases/latest | grep browser_download_url | grep linux-amd64 | awk -F '"browser_download_url":' '{print $2}' | sed 's/\"//g' );

wget $url

tar -xvf proxy-linux-amd64.tar.gz

# create an empty blocked list file
echo '' > blocked

./proxy socks -t tcp -p "[ip:port]" -a "[username]:[password]" &
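The grep/awk pipeline above is brittle if GitHub ever ships several linux assets in one release. As an alternative, here is a Python sketch of the same extraction, assuming the GitHub releases API JSON shape (`assets[*].browser_download_url`):

```python
# pick the linux-amd64 download link out of a GitHub "latest release" JSON document
def find_linux_amd64_url(release):
    for asset in release.get("assets", []):
        url = asset.get("browser_download_url", "")
        if "linux-amd64" in url:
            return url
    return None
```

Feed it the parsed releases API response, e.g. `find_linux_amd64_url(json.loads(body))`.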

Well, you just need to replace [username] and [password] (and fill in the listening port for -p) with your own values. Then paste the script into the Vultr startup script. That's all!

For more details, see the goproxy GitHub repository.

Extract response headers via curl in PHP

$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);
// ...

$response = curl_exec($ch);

// Then, after your curl_exec call:
$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$header = substr($response, 0, $header_size);
$body = substr($response, $header_size);

Change the pip source in Windows


Sometimes the connection to the default pip package index is extremely slow, but in your country there may be mirror sites that can speed up the pip install process.

How to:

  1. go to C:\Users\[your username]\AppData\Roaming
  2. if the “pip” folder does not exist, create it
  3. create a pip.ini file in the “pip” folder
  4. copy the following lines into pip.ini
[global]
timeout = 6000
index-url = [mirror index URL]
trusted-host = [mirror hostname]

Use PyPDF2 to extract PDF annotations


I use Foxit PDF Reader to read PDF books, as well as to make highlights and comments in the PDF file. But the PDF reader does not provide an export function; maybe I just haven't found it. 🙂

So I tried to use PyPDF2 to export the comments I had written in the PDF.

The Code:

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
from PyPDF2 import PdfFileReader

def text_extractor(path, output_path):
    text_file = open(output_path, 'w')

    with open(path, 'rb') as f:
        pdf = PdfFileReader(f)
        # get page index 4 (the fifth page; pages are counted from 0)
        page = pdf.getPage(4)
        # write the raw text into the file
        contents = page.extractText()
        text_file.write(contents)
        # print your comment/annotation text to the console
        annotations = page.get("/Annots")
        if annotations is not None:
            for annot in annotations.getObject():
                if annot is not None:
                    comment = annot.getObject().get("/Contents")
                    if comment is not None:
                        print(comment)

    text_file.close()

if __name__ == '__main__':
    path = './youPDF.pdf'
    out_path = './test.txt'
    text_extractor(path, out_path)


Once you fill in your PDF filename and path and run the script, the raw text is saved to the test.txt file and your comments are printed to the console. But the code shown above only processes page 4; if you want to export all your comments, you must add a for loop that iterates over all pages and extracts the text!
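That all-pages loop can be sketched like this. It duck-types against the reader object used above (`getNumPages`, `getPage`, the `/Annots` lookup), so you can pass it the `pdf` from inside `text_extractor`:

```python
# walk every page and collect (page number, comment text) pairs,
# using the same /Annots lookup as the single-page code above
def extract_all_comments(pdf):
    comments = []
    for page_number in range(pdf.getNumPages()):
        page = pdf.getPage(page_number)
        annotations = page.get("/Annots")
        if annotations is None:
            continue
        for annot in annotations.getObject():
            if annot is None:
                continue
            contents = annot.getObject().get("/Contents")
            if contents is not None:
                comments.append((page_number, contents))
    return comments
```

Each pair tells you which page a comment came from, which is handy when writing them all to one file.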

Use Fabric & GoAccess to generate a website access report


I want a daily access report for my website (apache2/nginx, etc.), with some charts covering page views (PV) and visitor IPs.

Part 1: install fabric and goaccess

  1. $ pip install fabric
  2. $ apt-get install goaccess (for installation on other platforms, see the GoAccess documentation)

Part 2: write down the code

# -*- coding: utf-8 -*-
from fabric import Connection
import time
import os

# the host connection information, fill your information in []
c = Connection(
    host="[your host]",
    connect_kwargs={
        # "key_filename": "[user private key file]",
        "password": "[user_password]",
    },
)
# get the start time
current_time = time.time()
date_str = time.strftime('%Y_%m_%d',time.localtime(current_time))

# download the nginx log (access.1.log means yesterday's access log)
c.get('/var/log/nginx/access.1.log', './access.{}.log'.format(date_str))

# run the goaccess and generate the report in html format
os.system("goaccess ./access.{}.log -o report.{}.html".format(date_str,date_str))
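One detail worth noting: access.1.log holds yesterday's traffic, but `date_str` above is built from today's date. If you want the local filename to match the day the log actually covers, shift the timestamp back one day, in the same `time`-based style:

```python
import time

# build the date string for yesterday instead of today
def yesterday_str(now=None):
    if now is None:
        now = time.time()
    return time.strftime('%Y_%m_%d', time.localtime(now - 24 * 3600))
```

Then use `date_str = yesterday_str()` in the script above.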

Finally, you will see your daily report like the image below.


Add Google website tracking tags (Google Analytics) in WordPress 5.2 manually

First: find your tracking code in Google Analytics

  1. go to the Admin panel
  2. click “Tracking Info” under the “Property” block
  3. click “Tracking Info” and then “Tracking Code”
  4. you will see your “Global Site Tag” on the right side
  5. copy the HTML script!

Second: log in to your web server and edit the PHP file

  1. cd into the wordpress theme folder (wordpress/wp-content/themes/twentysixteen)
  2. vim header.php
  3. paste the code from the first step into <head>[put here]</head>
  4. save the file

Finally: visit your wordpress site again and Google Analytics will work!

But you must edit each theme if you ever change your theme. Edit
wordpress/wp-content/themes/[twentysixteen,twentynineteen,twentyseventeen]/header.php

I will try to find an easier way to add Google Analytics!