FaceNet Configuration and Deployment

FaceNet Background

  1. Reference: “FaceNet: A Unified Embedding for Face Recognition and Clustering”
  2. Inspiration: The code is heavily inspired by the OpenFace implementation.
  3. Dataset:
    • CASIA-WebFace dataset: this training set consists of a total of 453,453 images over 10,575 identities after face detection.
    • VGGFace2 dataset: the best-performing model has been trained on this set, which consists of ~3.3M faces and ~9000 classes.
    • LFW: a public benchmark for face verification, also known as pair matching. Regardless of how well an algorithm performs on LFW, that performance should not be used to conclude that the algorithm is suitable for any commercial purpose. It consists of 13,233 images over 5,749 people, of which 1,680 people have two or more images.
      • Each image is named “lfw/name/name_xxxx.jpg” (this is also the naming format we need after pre-processing).
      • “xxxx” is the zero-padded four-digit image number: for example, the 10th image of former U.S. president George W. Bush is “lfw/George_W_Bush/George_W_Bush_0010.jpg”.
  4. Pre-processing
    • The Dlib face detector misses some of the hard examples (partial occlusion, silhouettes, etc.), which makes the training set too “easy” and causes the model to perform worse on other benchmarks.
    • The multi-task CNN (MTCNN) has proven to work very well as a face and landmark detector.
    • So we use MTCNN to align the dataset before it is used as the training set.
  5. Training result
    • The best results are achieved by training the model with softmax loss.
    • A model trained with softmax loss on the CASIA-WebFace dataset can be found here.
  6. Run the test
    • Validate on LFW
    • Use the option --use_fixed_image_standardization when running validate_on_lfw.py (the input images to the model need to be standardized with fixed image standardization; see the sketch below).
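
A minimal sketch of what fixed image standardization does, assuming the (x - 127.5) / 128.0 mapping used in the repository's input pipeline (an assumption worth verifying against facenet.py in your checkout):

import numpy as np

def fixed_image_standardization(image):
    # Map uint8 pixels in [0, 255] to roughly [-1, 1], independent of the
    # per-image mean/std (unlike prewhitening).
    return (np.float32(image) - 127.5) / 128.0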

Configure FaceNet Conda Environment

Download data

  1. Download facenet
    git clone https://github.com/davidsandberg/facenet
  2. Download the two pre-trained face recognition models (Architecture: Inception ResNet v1)
    • 20180408-102900 LFW accuracy: 0.9905 Training dataset: CASIA-WebFace
    • 20180402-114759 LFW accuracy: 0.9965 Training dataset: VGGFace2
      Create a models folder in the project root and put the models under it: models\20180402-114759\20180402-114759
      # Choose the training dataset sensibly; if you use VGGFace2, reserve more than 100 GB of disk space
  3. Download the datasets
    • Download LFW
      Create a datasets folder in the project root and extract lfw into it: datasets\lfw
      If you use PyCharm, right-click the lfw folder and mark it as Excluded; otherwise PyCharm will read all the images into its cache
    • Download CASIA-WebFace: extraction code i565

Install Dependencies

Configure a GPU environment to speed up data pre-processing and model training (CUDA setup; this requires an NVIDIA graphics card):

Visual Studio 2017 + python 3.6.12 + tensorflow-gpu 1.7.0 + CUDA 9.0 + cuDNN 7.0.5 + facenet site-packages
  1. Visual Studio 2017
  2. NVIDIA Driver
  3. CUDA Toolkit 9.0
    • Base installer | Patch1 | Patch2 | Patch3 | Patch4
  4. cuDNN 7.0.5
    Copy the bin, include, and lib folders from the cuDNN package into the CUDA installation path: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
    # Different TensorFlow versions require different CUDA and cuDNN versions (they must match, otherwise you will get errors)
    $nvcc -V   # check that CUDA was installed successfully
  5. Environment variables
    • Add to the system Path:
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\libnvvp
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\extras\CUPTI\libx64
      C:\Program Files\NVIDIA Corporation\NVSMI   (the path of nvidia-smi.exe)
      C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common
    • Add environment variables:
      CUDA_PATH       = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
      CUDA_PATH_V9_0  = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
  6. Anaconda 4.9.2
    Add to the system Path:
    C:\ProgramData\Anaconda3
    C:\ProgramData\Anaconda3\Library\mingw-w64\bin
    C:\ProgramData\Anaconda3\Library\usr\bin
    C:\ProgramData\Anaconda3\Library\bin
    C:\ProgramData\Anaconda3\Scripts
    $conda -V
  7. Install tensorflow-gpu 1.7.0
    $conda create -n facenet python=3.6 && conda activate facenet
    $cd facenet
    $pip install -r requirements.txt
    $pip uninstall -y tensorflow
    $pip install tensorflow-gpu==1.7.0
  8. Configure the facenet and align modules under site-packages (a sanity-check sketch follows this list)
    • Find the site-packages path
      $conda activate facenet
      $where python # C:\Users\PC\.conda\envs\facenet\python.exe
      # site-packages is at C:\Users\PC\.conda\envs\facenet\Lib\site-packages
    • facenet module
      1. Create a facenet folder under envs\facenet\Lib\site-packages
      2. Copy all files under facenet-master\src into that facenet folder
      3. Start python in the conda environment and run import facenet; if there is no error, it works
    • align module
      1. Copy the facenet-master\src\align folder into envs\facenet\Lib\site-packages
      2. Start python in the conda environment and run import align; if there is no error, it works
  9. Jump to 2.5 and add any packages the program reports as missing based on its run results, then continue with the following steps
  10. Modify the code
    • src\align\align_dataset_mtcnn.py:
      import facenet.facenet as facenet # originally: import facenet
    • contributed\predict.py
      import facenet.facenet as facenet # originally: import facenet
  11. Adjust package versions
    $pip install numpy==1.16.2
    $pip install scipy==1.2.1
    # Depending on your situation, you may also need these packages: align nltk gensim
    # If numpy is not the specified version, you need to modify numpy\lib\npyio.py: allow_pickle=False -> allow_pickle=True
  12. Keep versions consistent
    • The package versions used at prediction time must match the versions used when training the model (so do not casually upgrade packages)
      1. AttributeError: 'SVC' object has no attribute '_probA' (or something like that)
      It turned out that I had to stay with the same version of scikit-learn that was used to train the models I currently have. Later versions of scikit-learn don't work with the trained face models. If you want to upgrade scikit-learn, you have to retrain your models with the new version.
    • Use the same base model for training and prediction
      - If the base model used for training is models/20180402-114759/, use the same model for predict
      - If the base model used for training is models/20180408-102900/, use the same model for predict
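
A quick sanity check of the environment, run from inside the activated facenet env; this is only a convenience sketch that mirrors the steps above:

# check_env.py - verify tensorflow-gpu and the copied site-packages modules
import tensorflow as tf
from tensorflow.python.client import device_lib

import facenet   # the folder copied into site-packages (step 8)
import align     # the align folder copied into site-packages (step 8)

print('tensorflow:', tf.__version__)                        # expect 1.7.0
print('GPUs:', [d.name for d in device_lib.list_local_devices()
                if d.device_type == 'GPU'])                  # expect /device:GPU:0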

Check GPU Info

  1. Check GPU utilization
    $nvidia-smi.exe -h
    $nvidia-smi.exe -l 1   # refresh the information every second
    # Task Manager (dedicated GPU)
  2. Check whether your machine has an NVIDIA GPU (a small TensorFlow check follows this list)
    1. Right-click the desktop -> if "NVIDIA Control Panel" or "NVIDIA Display" appears in the pop-up menu, you have an NVIDIA GPU
    2. Click "NVIDIA Control Panel" or "NVIDIA Display" in the pop-up menu -> check "Graphics card information"
    3. You will see the name of the NVIDIA GPU (on the Dell work laptop: driver version 442.70, GeForce GTX 1050, whose Compute Capability is 6.1) -> this means the computer has a modern GPU that can run CUDA-accelerated applications
  3. Check which CUDA version the graphics driver supports
    1. Right-click the desktop -> click "NVIDIA Control Panel" or "NVIDIA Display" in the pop-up menu
    2. "Set PhysX configuration" --> Help --> System Information --> Components --> look at the NVCUDA.DLL file version: on the Dell work laptop it is 26.21.14.., product name NVIDIA CUDA 10.2.150 driver
  4. Check how many graphics cards the computer has
    Computer -> Manage -> Device Manager -> Display adapters
  5. TensorFlow website
    • The Python, CUDA, and cuDNN versions required by each TensorFlow release
    • The table shows: for TensorFlow 1.7.0, CUDA should be 9 and cuDNN should be 7
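
A small hedged check (parameter names as in the TF 1.x API; worth verifying for your exact version) that TensorFlow can actually use a GPU of at least the compute capability mentioned above:

import tensorflow as tf

# True if a CUDA GPU with compute capability >= 6.1 (e.g. the GTX 1050) is usable
print(tf.test.is_gpu_available(cuda_only=True, min_cuda_compute_capability=(6, 1)))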

Configure PyCharm Editor

  1. In PyCharm, right-click the lfw folder and mark it as Excluded; otherwise PyCharm will read all the images into its cache
  2. Add a Python Interpreter
    • Find the python path of the project environment
      $conda activate facenet
      $where python
    • Use that python path as the Python Interpreter
      1. File -> Settings -> Project -> Project Interpreter -> Add (new environment)
      2. Conda Environment -> Existing environment: xxx\python.exe
      3. In the Terminal at the bottom left, you will see that the shell has switched to the virtual environment

Add Missing Module

Add Missing Modules and Adjust Module Versions Based on the Program's Runtime Errors

  • Align (a verification sketch follows this list)
    $python src/align/align_dataset_mtcnn.py datasets/lfw datasets/lfw_160 --image_size 160 --margin 32 # if your GPU is powerful enough
    $python src/align/align_dataset_mtcnn.py datasets/lfw datasets/lfw_160 --image_size 160 --margin 32 --gpu_memory_fraction 0.5 # if your GPU is not powerful enough
  • TRAIN
    $python src/classifier.py TRAIN datasets/lfw_160 models/20180402-114759/20180402-114759.pb models/lfw.pkl
    # dedicated GPU memory should be heavily used at this point
  • CLASSIFY
    $python src/classifier.py CLASSIFY datasets/lfw_160 models/20180402-114759/20180402-114759.pb models/lfw.pkl
    # dedicated GPU memory should be heavily used at this point
  • Predict
    $python contributed/predict.py datasets/lfw/Aaron_Eckhart/Aaron_Eckhart_0001.jpg models/20180402-114759 models/lfw.pkl # if your GPU is powerful enough
    $python contributed/predict.py datasets/lfw/Aaron_Eckhart/Aaron_Eckhart_0001.jpg models/20180402-114759 models/lfw.pkl --gpu_memory_fraction 0.5 # if your GPU is not powerful enough
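
After alignment, a quick way to confirm that datasets/lfw_160 looks sensible is to count identities and images; this is only an illustrative helper, not part of the repository:

import os

dataset_dir = 'datasets/lfw_160'
counts = {name: len(os.listdir(os.path.join(dataset_dir, name)))
          for name in sorted(os.listdir(dataset_dir))
          if os.path.isdir(os.path.join(dataset_dir, name))}
print('identities:', len(counts), 'aligned images:', sum(counts.values()))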

Run FaceNet Program

Log Snippet

Method 1

import os
import sys
import time
import logging

start_time  = time.time()

log_dir     = "Logs/"
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

logfile     = log_dir + "align_dataset_mtcnn.log"
FORMAT      = '%(asctime)-15s %(message)s'
logging.basicConfig(filename=logfile, format=FORMAT)
logging.warning('****************************************************************************************')
logging.warning('Program Start At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
print('Program Start At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))

logging.warning('User input is: ')
logging.warning(sys.argv)

# <Program>

logging.warning('\n')
logging.warning('************************** All Time **************************')
logging.warning('Program End At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
logging.warning('Program All Time: {0} seconds = {1} minutes = {2} hrs'.format((time.time() - start_time), (time.time() - start_time)/60, (time.time() - start_time)/3600))
logging.warning('\n\n\n\n')
print('Program End At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
print('Program All Time: {0} seconds = {1} minutes = {2} hrs'.format((time.time() - start_time), (time.time() - start_time)/60, (time.time() - start_time)/3600))
logging.shutdown()

f_in = open(log_dir + 'align_dataset_mtcnn.log')
lines = f_in.read()
f_out = open(log_dir + 'All_Program.log', 'a')
f_out.write(lines)
f_in.close()
f_out.close()

Method 2

import os
import time
import logging

log_dir     = "Logs/"
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

logfile     = log_dir + 'ProgramName_log.log'

start_time  = time.time()
FORMAT      = '%(asctime)-15s %(message)s'
logging.basicConfig(filename=logfile, format=FORMAT)
logging.warning('**********************************************************************************************************************************')
logging.warning('Program Start At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
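
# <Program>  (the actual program body goes here, as in Method 1)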

logging.warning('\n')
logging.warning('************************** All Time **************************')
logging.warning('Program End At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
logging.warning('Program All Time: {0} seconds = {1} minutes = {2} hrs'.format((time.time() - start_time), (time.time() - start_time)/60, (time.time() - start_time)/3600))
logging.warning('\n\n\n\n')
logging.shutdown()

Method 3

import time

start_time  = time.time()
print('Program Start At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
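
# <Program>  (the actual program body goes here, as in Method 1)
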
print('Program End At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
print('Program All Time: {0} seconds = {1} minutes = {2} hrs'.format((time.time() - start_time), (time.time() - start_time)/60, (time.time() - start_time)/3600))

Method 4

import os
import time
import logging

def log(content):
    logging.warning(content)
    print(content)

# Alternative variant: logs at INFO level and passes through print()'s end
# parameter (if both definitions are kept, this second one takes effect):
def log(content, end=False):
    logging.info(content)
    if end != False:
        print(content, end=end)
    else:
        print(content)

log_dir     = "notes/log/"
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
logfile     = log_dir + 'compare.log'
FORMAT      = '%(asctime)-15s %(message)s'
# With the default WARNING level, logging.info() calls from the second log()
# variant are suppressed; use one of the commented lines below instead.
logging.basicConfig(filename=logfile, format=FORMAT)
# logging.basicConfig(filename=logfile, format=FORMAT, level=logging.INFO)
# logging.basicConfig(filename=logfile, format=FORMAT, level=logging.DEBUG)

start_time  = time.time()
log('**********************************************************************************************************************************')
log('Program Start At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
log('gpu_memory_fraction: ' + str(args.gpu_memory_fraction))   # args: the parsed argparse arguments of the surrounding script

log('Program End At: ' + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
log('Program All Time: {0} seconds = {1} minutes = {2} hrs'.format((time.time() - start_time), (time.time() - start_time)/60, (time.time() - start_time)/3600))

Align: align_dataset_mtcnn.py

# ****************Path: src/align/align_dataset_mtcnn.py
# $python src/align/align_dataset_mtcnn.py datasets/lfw datasets/lfw_160 --image_size 160 --margin 32 # if your GPU is powerful enough
# $python src/align/align_dataset_mtcnn.py datasets/lfw datasets/lfw_160 --image_size 160 --margin 32 --gpu_memory_fraction 0.5 # if your GPU is not powerful enough

# ****************Action 1: change how the module is imported
import facenet.facenet as facenet   # originally: import facenet

# ****************Action 2: modify the Session config -> use the GPU for data pre-processing (GPU-accelerated computation); don't forget to replace this file in the virtual environment as well
# change sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False)) to:
config  = tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False, allow_soft_placement=True)
config.gpu_options.allow_growth = True
sess    = tf.Session(config=config)
- Specify the size of the face thumbnails and have them aligned.
- This step is important to ensure consistency within the dataset.
- Without this consistency, a model would have to learn to classify a dataset with unnecessary variance between images of the same face.
- To process the images, we had to run the script on a server machine, as the local machine would require a substantial amount of time.
- It took roughly 30 minutes on a 20-core server to align the CASIA-WebFace dataset.
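
A simplified sketch (not the exact repository code) of how --margin and --image_size interact in align_dataset_mtcnn.py: the MTCNN bounding box is expanded by margin/2 on each side, clipped to the image, cropped, and resized to image_size x image_size:

import numpy as np
from PIL import Image

def crop_and_resize(img, det, margin=32, image_size=160):
    # img: HxWx3 uint8 image; det: [x1, y1, x2, y2] MTCNN bounding box
    h, w = img.shape[:2]
    x1 = int(max(det[0] - margin / 2, 0))
    y1 = int(max(det[1] - margin / 2, 0))
    x2 = int(min(det[2] + margin / 2, w))
    y2 = int(min(det[3] + margin / 2, h))
    face = img[y1:y2, x1:x2, :]
    return np.array(Image.fromarray(face).resize((image_size, image_size), Image.BILINEAR))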

Train: train_tripletloss.py

# ****************Path: src/train_tripletloss.py
# $python src/train_tripletloss.py

# ****************Action 1: change the path used to load the dataset (validation set)
# change the '--data_dir' default from '~/datasets/casia/casia_maxpy_mtcnnalign_182_160' to:
default = "datasets/lfw_160"

# ****************Action 2: continue training from a pre-trained model
# 1. Add a default value for the pretrained-model argument
parser.add_argument('--pretrained_model', type=str,
help='Load a pretrained model before training starts.', default='path/to/the/pretrained/model')
# 2. Change how the model is loaded
# saver.restore(sess, os.path.expanduser(args.pretrained_model))
# This code reloads the model parameters with TensorFlow's saver.restore() when the pretrained-model argument is non-empty, but it raises an error here, so follow the model-loading approach used in compare.py and change it to:
facenet.load_model(args.pretrained_model)
# Take an already trained model, load it, and train for one epoch: the initial loss is very small, and after one epoch the accuracy is already close to that of the loaded pre-trained model, which shows the model was loaded successfully
# 3. Fix a few bugs
# line 260: for key, value in stat.iteritems(): in Python 3, dict no longer has the iteritems method
for key, value in stat.items():
# line 308: if lr<=0: (in the last training epoch the learning rate should become -1 and training should end, but lr can end up being None)
if lr is None or lr <= 0:

# ****************Action 3: modify the Session config -> GPU-accelerated computation (only for GPU users; if you train on the CPU, skip this step and train directly)
# line 191: sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
config                          = tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False, allow_soft_placement=True)
config.gpu_options.allow_growth = True
sess                            = tf.Session(config=config)

Overview of the Run Parameters

$python src/train_softmax.py --logs_base_dir logs/facenet/ --models_base_dir models/facenet/ --data_dir datasets/CASIA-WebFace_train_182/ --image_size 160 --model_def models.inception_resnet_v1 --lfw_dir datasets/lfw_160/ --optimizer ADAM --learning_rate -1 --max_nrof_epochs 150 --keep_probability 0.8 --random_crop --random_flip --use_fixed_image_standardization --learning_rate_schedule_file data/learning_rate_schedule_classifier_casia.txt --weight_decay 5e-4 --embedding_size 512 --lfw_distance_metric 1 --lfw_use_flipped_images --lfw_subtract_mean --validation_set_split_ratio 0.05 --validate_every_n_epochs 5 --prelogits_norm_loss_factor 5e-4
Training and testing on the WebFace dataset were done on a laptop with an i7-6700H CPU, a GTX965M GPU, CUDA 9.0, and cuDNN 7.0.5
- A brief explanation of the parameters above (there are many more not shown; study the source code for the rest):
    --logs_base_dir                 directory where the training logs are stored
    --models_base_dir               directory where the resulting trained models are stored
    --data_dir                      directory of the training dataset; change it as needed
    --model_def                     the network architecture to train; Inception-ResNet is used here
    --lfw_dir                       location of the LFW dataset, used to evaluate the model during training; optional but strongly recommended
    --learning_rate                 the learning rate (very important during training: too large and the model struggles to converge, too small and it tends to overfit, so in practice a dynamic learning-rate schedule is used, which is the next parameter)
    --learning_rate_schedule_file   the learning rate schedule file; an illustrative example of its format appears after this list

    - If you run out of RAM or GPU memory during training, try changing the batch size
        - An OOM error means the GPU does not have enough memory; simply reduce how much is fed in per step by adding the parameter: --batch_size

    - Next, a key point: monitoring the training process live
        1. Open a new terminal and run, pointing at the logs directory: tensorboard --logdir=logs/facenet --port 6006
        2. Open the TensorBoard web page (localhost:6006)

    - Training results
        - Others have trained for about 30 hours and reached a final model accuracy of roughly 98.8% (increasing the batch size and the number of epochs should improve accuracy)
        - Training on a large amount of data is perfectly fine: the project automatically saves the latest model, so if training is accidentally interrupted, you can load the previous results and continue training (this is explained in the source code's parameters)
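
The command above passes --learning_rate -1, which makes train_softmax.py read the learning rate from the schedule file instead. The snippet below only illustrates the file format (the epoch at which a rate takes effect, then the rate; a rate of -1 ends training); it is not the exact contents of data/learning_rate_schedule_classifier_casia.txt:

    # Learning rate schedule (illustrative values only)
    # Maps an epoch number to the learning rate used from that epoch onwards
    0:   0.05
    100: 0.005
    125: 0.0005
    150: -1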

Train: facenet.py

# Path: facenet-master/src/facenet.py
# Action: modify the triplet_loss function in facenet.py

# ****************Action 1: modify the triplet_loss function in facenet.py -> change pos_dist and neg_dist
# The paper "In Defense of the Triplet Loss for Person Re-Identification" notes that the loss works even a bit better with the square removed
def triplet_loss(anchor, positive, negative, alpha):
    """Calculate the triplet loss according to the FaceNet paper
    
    Args:
      anchor: the embeddings for the anchor images.
      positive: the embeddings for the positive images.
      negative: the embeddings for the negative images.
      alpha: the margin enforced between positive and negative distances.
  
    Returns:
      the triplet loss according to the FaceNet paper as a float tensor.
    """
    with tf.variable_scope('triplet_loss'):
        # pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
        # neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
        pos_dist = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1))  # tf.square: element-wise square; tf.subtract: element-wise subtraction
        neg_dist = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1))
        
        basic_loss = tf.add(tf.subtract(pos_dist,neg_dist), alpha)
        loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0)
      
    return loss
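
A quick NumPy check of the modified (non-squared) triplet loss above, using toy 3-D embeddings; purely illustrative:

import numpy as np

anchor   = np.array([[0.0, 0.0, 1.0]])
positive = np.array([[0.0, 0.1, 0.995]])   # close to the anchor
negative = np.array([[1.0, 0.0, 0.0]])     # far from the anchor
alpha    = 0.2

pos_dist = np.sqrt(np.sum(np.square(anchor - positive), axis=1))
neg_dist = np.sqrt(np.sum(np.square(anchor - negative), axis=1))
loss = np.mean(np.maximum(pos_dist - neg_dist + alpha, 0.0))
print(loss)   # 0.0 here, because neg_dist exceeds pos_dist by more than alpha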

Train: classifier.py

# ****************Path: src/classifier.py
# $python src/classifier.py TRAIN datasets/lfw_160 models/20180402-114759/20180402-114759.pb models/lfw.pkl

# ****************Action 1: modify the Session config -> GPU-accelerated computation (only for GPU users; if you run on the CPU, skip this step)

# with tf.Graph().as_default():
#     with tf.Session() as sess:

with tf.Graph().as_default():
    config                                              = tf.ConfigProto()
    config.gpu_options.allow_growth                     = True      # do not grab all GPU memory up front; allocate on demand
    config.gpu_options.per_process_gpu_memory_fraction  = 1.0       # cap the fraction of GPU memory this process may use
    with tf.Session(config=config) as sess:
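
Roughly, classifier.py in TRAIN mode computes embeddings with the frozen .pb model and fits a linear SVM on them, then pickles the classifier together with the class names. A simplified sketch of that last step (not the exact repository code):

import pickle
from sklearn.svm import SVC

def train_classifier(emb_array, labels, class_names, classifier_filename):
    # emb_array: N x 512 embeddings, labels: N class indices
    model = SVC(kernel='linear', probability=True)
    model.fit(emb_array, labels)
    with open(classifier_filename, 'wb') as f:      # e.g. models/lfw.pkl
        pickle.dump((model, class_names), f)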

Classify: classifier.py

# ****************Path: src/classifier.py
# $python src/classifier.py CLASSIFY datasets/lfw_160 models/20180402-114759/20180402-114759.pb models/lfw.pkl

# ****************Action 1: modify the Session config -> GPU-accelerated computation (only for GPU users; if you run on the CPU, skip this step)

# with tf.Graph().as_default():
#     with tf.Session() as sess:

with tf.Graph().as_default():
    config                                              = tf.ConfigProto()
    config.gpu_options.allow_growth                     = True      # do not grab all GPU memory up front; allocate on demand
    config.gpu_options.per_process_gpu_memory_fraction  = 1.0       # cap the fraction of GPU memory this process may use
    with tf.Session(config=config) as sess:

Predict: predict.py

# ****************Path: contributed/predict.py
# $python contributed/predict.py datasets/lfw/Aaron_Eckhart/Aaron_Eckhart_0001.jpg models/20180402-114759 models/lfw.pkl # if your GPU is powerful enough
# $python contributed/predict.py datasets/lfw/Aaron_Eckhart/Aaron_Eckhart_0001.jpg models/20180402-114759 models/lfw.pkl --gpu_memory_fraction 0.5 # if your GPU is not powerful enough

# ****************Action 1: change how the module is imported
import facenet.facenet as facenet   # originally: import facenet

# ****************Action 2: modify the Session config -> GPU-accelerated computation
# sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
config                                              = tf.ConfigProto()
config.gpu_options.allow_growth                     = True                      # do not grab all GPU memory up front; allocate on demand
config.gpu_options.per_process_gpu_memory_fraction  = 0.6                       # cap the fraction of GPU memory this process may use
sess                                                = tf.Session(config=config)
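
The prediction step in contributed/predict.py boils down to loading the pickled (classifier, class_names) pair and classifying the image's embedding; a simplified sketch (not the exact repository code):

import pickle
import numpy as np

def predict_identity(emb, classifier_filename='models/lfw.pkl'):
    # emb: 1 x 512 embedding computed by the FaceNet model for the aligned face
    with open(classifier_filename, 'rb') as f:
        model, class_names = pickle.load(f)
    probabilities = model.predict_proba(emb)
    best = np.argmax(probabilities, axis=1)[0]
    return class_names[best], probabilities[0, best]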

Explain FaceNet

  • FaceNet: a mapping from faces to positions in a multidimensional space where the distance between points directly corresponds to a measure of face similarity.

    “[…] we strive for an embedding f(x), from an image x into a feature space, such that the squared distance between all faces, independent of imaging conditions, of the same identity is small, whereas the squared distance between a pair of face images from different identities is large.
    This allows the faces for one identity to live on a manifold, while still enforcing the distance and thus discriminability to other identities.”
    — FaceNet paper “FaceNet: A Unified Embedding for Face Recognition and Clustering”

  • Pre-processing: a method used to take a set of images and convert them all to a uniform format — in our case, a square image containing just a person’s face. A uniform dataset is useful for decreasing variance when training as we have limited computational resources.

  • Embedding: a process, fundamental to the way FaceNet works, which learns representations of faces in a multidimensional space where distance corresponds to a measure of face similarity.

  • Classification: the final step which uses information given by the embedding process to separate distinct faces.
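
To make the embedding idea concrete: two faces are judged to be the same person when the Euclidean distance between their embeddings falls below a threshold (the value below is only an example, not a calibrated one):

import numpy as np

def same_person(emb1, emb2, threshold=1.1):
    # emb1, emb2: embedding vectors produced by the FaceNet model
    distance = np.linalg.norm(emb1 - emb2)
    return distance < threshold, distance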


Use Hexo and GitHub to Publish Your Blog

Prerequisites

  1. Install Node.js
    $node -v   # if a version number is printed, the installation succeeded
    $npm -v    # if a version number is printed, the installation succeeded
  2. Install Git
    $git --version
  3. Register a Github account.

Quick Start

Reference: Molunerfinn/hexo-theme-melody

Local Deployment

  1. Generate a Hexo static blog
    # create an elaine folder first
    $npm install -g hexo-cli
    $hexo init
    $npm install
    $hexo c
    $hexo g
    $hexo s    # local preview at localhost:4000

Connect with GitHub and Publish to the Hexo Server (GitHub)

  1. Connect with Github
    1. Create a new repository: to the left of your profile avatar -> + -> New repository
    2. Enter the blog repository name: ElaineXHZhong.github.io (your GitHub account name (required) plus the github.io suffix)
    3. Check the option to initialize with a README
    4. The repository is now created
  2. In the hexo work folder’s root config file _config.yml:
    deploy:
        type: git
        repo: git@github.com:ElaineXHZhong/ElaineXHZhong.github.io
        branch: master
  3. Publish to Github
    $npm install hexo-deployer-git --save # before $hexo d 
    $hexo d    # publish to your Hexo server (GitHub); the blog can then be viewed from the public internet

Custom Configuration

Hexo Theme Configuration

  1. Install melody theme
    $npm install hexo-theme-melody
  2. In the hexo work folder’s root config file _config.yml:
    theme: melody
  3. If you don’t have the pug & stylus renderers, install them:
    $npm install hexo-renderer-pug hexo-renderer-stylus --save
  4. Create a _config.melody.yml in your Hexo work folder.
    Copy the contents of ./node_modules/hexo-theme-melody/_config.yml into _config.melody.yml

Live2D Mascot

  1. Add the Live2D mascot
    $npm uninstall hexo-helper-live2d
    $npm install --save hexo-helper-live2d # on success you will see a live2d-widget directory under node_modules/; this is the main configuration of the animation
    $npm audit fix
    $npm audit fix --force
    $npm install live2d-widget-model-tororo # downloads the model files for the animation
    # once installed you can see the live2d-widget-model-tororo folder under node_modules/ (the model name is tororo)
  2. In the hexo work folder’s root config file _config.yml:
    # Live2D mascot
    live2d:
        enable: true
        scriptFrom: local
        pluginRootPath: live2dw/
        pluginJsPath: lib/
        pluginModelPath: assets/
        tagMode: false
        log: false
        model:
            use: live2d-widget-model-tororo
        display:
            position: right
            width: 200
            height: 400
        mobile:
            show: true

Navigation Menu

  1. Add the archives, tags, categories, gallery, and slides pages:
    • Edit _config.melody.yml:
      menu:
          Home: /
          Archives: /archives
          Tags: /tags
          Categories: /categories
          Gallery: /gallery
          Slides: /slides
    • archives is generated automatically (public/archives); no action is needed
    • Add the tags page
      $hexo new page tags # creates source/tags
      Edit source/tags/index.md
      ---
      title: tags
      date: 2021-04-09 10:07:51
      type: "tags"
      ---
    • Add the categories page
      $hexo new page categories # creates source/categories
      Edit source/categories/index.md
      ---
      title: categories
      date: 2021-04-09 10:07:51
      type: "categories"
      ---
    • Add the gallery page
      $hexo new page gallery # creates source/gallery
      Edit source/gallery/index.md
      ---
      title: gallery
      date: 2021-04-09 10:07:51
      type: "gallery"
      ---
    • Add the slides page
      $hexo new page slides # creates source/slides
      Edit source/slides/index.md
      ---
      title: slides
      date: 2021-04-09 10:07:51
      type: "slides"
      ---
    • Add the 404 page
      $hexo new page 404 # creates source/404
      Edit source/404/index.md
      ---
      title: 404
      date: 2021-04-09 10:14:54
      layout: 404
      permalink: /404
      ---
      Open https://elainexhzhong.github.io/404.html to see the 404 page

Social Icons

  1. Edit _config.melody.yml:
    # format:    icon-name prefix: url
    social:
        github fab: https://github.com/ElaineXHZhong
        video fas: https://space.bilibili.com/488540235?from=search&seid=14388519767007720984
        instagram fab: https://www.instagram.com/elaine_lyre/
    # you also need to add the Font Awesome v5 link under:
    cdn:
        css:
            fontawesome: https://cdn.jsdelivr.net/npm/font-awesome@latest/css/font-awesome.min.css
            fontawesomev5: https://use.fontawesome.com/releases/v5.7.2/css/all.css
  2. Visit the Font Awesome v5 free icons page to look up icon names

Avatar

  1. Edit _config.melody.yml:
    Choose an avatar with equal width and height, otherwise it will not display properly
    avatar: https://xxxx.jpg

Follow Me Button

  1. Edit _config.melody.yml:
    follow:
        enable: true
        url: 'https://github.com/ElaineXHZhong'
        text: 'Follow Me'

Friend Links

  1. Edit _config.melody.yml:
    Please refer here
    links_title: Links   # the title text for the friend links section
    links:
        Molunerfinn: https://molunerfinn.com # name: URL
        PiEgg: https://piegg.cn
        Elody: https://piegg.cn

Top Image

  1. Upload the image to the Cloudinary image host
    • use your GitHub email
    • pick your own password
  2. Edit _config.melody.yml:
    • top_img accepts three kinds of values: true, false, or a specific image URL
      top_img: https://res.cloudinary.com/okk/image/upload/v1617958217/samples/landscapes/2_hil4eh.jpg
    • Top image height control
      top_img_height: 60

Table of Contents

  1. Edit _config.melody.yml:
    toc:
        enable: true # or false
        number: true # or false. Added in v1.5.6

Blog Start Year

  1. Edit _config.melody.yml:
    since: 2013

Custom Footer Text

  1. Edit _config.melody.yml:
    • Plain text
      footer_custom_text: Hi, welcome to my <a href="https://molunerfinn.com">blog</a>!
    • Footer text that shows a random saying (hitokoto)
      footer_custom_text: hitokoto

Effects

Fireworks

  1. Edit _config.melody.yml:
    fireworks: true

Canvas Ribbon

  1. Edit _config.melody.yml:
    canvas_ribbon:
        enable: true
        size: 150
        alpha: 0.6
        zIndex: -1
        click_to_change: false

Comment System

  1. Register a Disqus account
    • use your GitHub email
    • pick your own password
  2. Edit _config.melody.yml:
    disqus:
        enable: true # or false
        shortname: your Disqus shortname # elaine
        count: true # or false. When enabled, shows the comment count for your posts

Word Count

  1. In the Hexo work directory
    $npm install hexo-wordcount --save
  2. Edit _config.melody.yml:
    wordcount:
        enable: true

Code Highlighting

  1. Edit _config.melody.yml:
    highlight_theme: light # default | darker | pale night | light | ocean
  2. Edit _config.yml:
    highlight:
        enable: true
        line_number: true
        auto_detect: false
        tab_replace: ''
        wrap: true
        hljs: true

Publishing Posts

Publish a New Post

  1. In the Hexo work directory
    $hexo new "post title"   # generates post_name.md and a folder of the same name post_name under source/_posts

Insert Images into a Post

  1. Edit _config.yml:
    post_asset_folder: true
  2. In the Hexo work directory
    $hexo new post_name
    post_name.md and a folder of the same name post_name are generated under source/_posts
  3. Put the image assets in the post_name folder; the post can then reference them with relative paths
    # _posts/post_name/image.jpg
    ![](image.jpg) # Markdown syntax: the image shows inside the post but does not display correctly on the home page
    {% asset_img image.jpg This is an image %} # tag-plugin syntax: the image shows both in the post and on the home page
  4. Besides storing images locally, you can also upload them to a free CDN service
    • For example, Cloudinary
      • use your GitHub email
      • pick your own password
    • After uploading an image to Cloudinary, a URL is generated; reference that URL directly

Insert Videos into a Post

  1. Edit the post's .md file
    <html>
    <div style="position: relative; width: 100%; height: 0; padding-bottom: 75%;">
        <iframe src="//player.bilibili.com/player.html?aid=39807850&cid=69927212&page=1" scrolling="no" border="0" 
    frameborder="no" framespacing="0" allowfullscreen="true" style="position: absolute; width: 100%; 
    height: 100%; left: 0; top: 0;"> </iframe>
    </div>
    </html>
    
Pin a Post to the Top

  1. In the Hexo work directory
    $npm uninstall hexo-generator-index --save
  2. Edit the post's .md file:
    title: xxxx
    tags:
        - xxx
    date: 2018-08-08 08:08:08
    top: True

Post-related Settings

  1. Edit _config.melody.yml:
    • Show the publish date and the post's categories at the top of the post page, and show tags at the bottom
      post_meta:
          date_type: created # or updated; whether the post date is the creation date or the update date
          categories: true # or false; whether to show categories
          tags: true # or false; whether to show tags
    • Post copyright
      post_copyright:
          enable: true
          license: CC BY-NC-SA 3.0 # license name
          license_url: https://creativecommons.org/licenses/by-nc-sa/3.0/ # URL of the license description
    • Post-related QR codes
      QR_code:
          - itemlist:
              img: https://xxxx1.jpg
              text: Alipay tip
          - itemlist:
              img: https://xxxx2.jpg
              text: WeChat tip
  2. Edit the post's .md file
    • Configure the table-of-contents numbering for a specific post
      title: Hi, theme-melody!
      tags:
          - hexo
          - hexo theme
      toc_number: true   # add toc_number here. Added in v1.5.6
      date: 2017-09-07

Post Top Image

  1. Edit the post's .md file
    Configure a specific top image for a particular post
    title: Hi, theme-melody!
    tags:
        - hexo
        - hexo theme
    top_img: https://xxxxxxx.jpg   # insert top_img here
    date: 2017-09-07

Link to Another Post on the Site

  1. Edit the post's .md file
    {% post_link post-file-name(without the extension) post-title(optional) %}
    # {% post_link FaceNet-Configuration-and-Deployment FaceNet-Configuration-and-Deployment %}

Insert Math Formulas

  1. Install the MathJax plugin
    $npm install hexo-math --save
  2. Add to the site config file _config.yml:
    math:
        engine: 'mathjax' # or 'katex'
        mathjax:
            # src: custom_mathjax_source
            config:
                # MathJax config
  3. In the theme config _config.melody.yml, set mathjax to true
    # MathJax Support
    mathjax:
        enable: true
        per_page: false
        cdn: //cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML
  4. Formula insertion format
    $formula$ inline, does not occupy its own line
    $$formula$$ display mode, occupies its own line
    For example:
    $f(x)=ax+b$
    $$f(x)=ax+b$$
    $f(x)=ax+b$
  5. Syntax
    • Superscripts and subscripts
      Use ^ for superscripts and _ for subscripts; if a superscript or subscript has more than one character, wrap it in braces:
      $$f(x) = a_1x^n + a_2x^{n-1} + a_3x^{n-2}$$
      If both sides need superscripts and subscripts, use the \sideset syntax:
      $$\sideset{^n_k}{^x_y}a$$
      $f(x) = a_1x^n + a_2x^{n-1} + a_3x^{n-2}$
      $\sideset{^n_k}{^x_y}a$
    • Brackets
      In Markdown syntax, $, {, }, and _ have special meanings, so they need to be escaped with \
      Parentheses and square brackets can be used directly as () and []; curly braces need to be escaped with \, or you can use \lbrace and \rbrace
      \{x*y\}
      \lbrace x*y \rbrace
      The raw delimiters do not scale with the size of the formula; use \left and \right to get automatic scaling:
      $$\left \lbrace \sum_{i=0}^n i^3 = \frac{(n^2+n)(n+6)}{9} \right \rbrace$$
      The effect without \left and \right:
      $$ \lbrace \sum_{i=0}^n i^3 = \frac{(n^2+n)(n+6)}{9}  \rbrace$$
      $\left \lbrace \sum_{i=0}^n i^3 = \frac{(n^2+n)(n+6)}{9} \right \rbrace$
      $ \lbrace \sum_{i=0}^n i^3 = \frac{(n^2+n)(n+6)}{9} \rbrace$
    • Fractions and roots
      Use \frac or \over to display fractions:
      $\frac xy$
      $ x+3 \over y+5 $
      Use \sqrt for roots:
      $ \sqrt{x^5} $
      $ \sqrt[3]{\frac xy} $
      $\frac xy$
      $ x+3 \over y+5 $
      $ \sqrt{x^5} $
      $ \sqrt[3]{\frac xy} $
    • Sums and integrals
      Use \sum for sums (sub/superscripts optional), \int for integrals (limits optional), and \iint for double integrals:
      $ \sum_{i=0}^n $
      $ \int_1^\infty $
      $ \iint_1^\infty $    
      $ \sum_{i=0}^n $
      $ \int_1^\infty $
      $ \iint_1^\infty $
    • Limits
      Use \lim for limits:
      $ \lim_{x \to 0} $
      $ \lim_{x \to 0} $
    • Tables and matrices
      In the array column spec, l, c, and r set left, center, and right alignment, | adds a vertical line, and \hline adds a horizontal line between rows; columns are separated by & and rows by \\:
      $$\begin{array}{c|lcr}
      n & \text{Left} & \text{Center} & \text{Right} \\\\
      \hline
      1 & 1.97 & 5 & 12 \\\\
      2 & -11 & 19 & -80 \\\\
      3 & 70 & 209 & 1+i \\\\
      \end{array}$$
      Tables can also be inserted in the following way:
      Name|Description
      ---|---
      temperature|  indoor temperature
      set temperature|  set temperature
      height|  indoor height
      Matrices are displayed much like tables:
      $$\left[
      \begin{matrix}
      V_A \\\\
      V_B \\\\
      V_C \\\\
      \end{matrix}
      \right] =
      \left[
      \begin{matrix}
      1 & 0 & L \\\\
      -cosψ & sinψ & L \\\\
      -cosψ & -sinψ & L
      \end{matrix}
      \right]
      \left[
      \begin{matrix}
      V_x \\\\
      V_y \\\\
      W \\\\
      \end{matrix}
      \right] $$
      $$\begin{array}{c|lcr}
      n & \text{Left} & \text{Center} & \text{Right} \\
      \hline
      1 & 1.97 & 5 & 12 \\
      2 & -11 & 19 & -80 \\
      3 & 70 & 209 & 1+i \\
      \end{array}$$
      $$\left[
      \begin{matrix}
      V_A \\
      V_B \\
      V_C \\
      \end{matrix}
      \right] =
      \left[
      \begin{matrix}
      1 & 0 & L \\
      -cosψ & sinψ & L \\
      -cosψ & -sinψ & L
      \end{matrix}
      \right]
      \left[
      \begin{matrix}
      V_x \\
      V_y \\
      W \\
      \end{matrix}
      \right] $$

Quick Publishing After Configuration

$hexo new "post title" # generates post_name.md and a folder of the same name post_name under source/_posts
$hexo clean
$hexo g
$hexo s    # localhost:4000
$hexo d    # or hexo d -g; publishes to https://elainexhzhong.github.io/