1. The Perceptron: The Cornerstone of Neural Networks
Biological inspiration: mimics the "fire / inhibit" behavior of a neuron.
Mathematical expression: y = f(∑(w_i · x_i) + b), where:
- x_i: input features
- w_i: weight parameters
- b: bias term
- f: activation function
Python implementation:
import numpy as np

class Perceptron:
    def __init__(self, input_size):
        self.weights = np.random.randn(input_size)
        self.bias = np.random.randn()

    def forward(self, x):
        z = np.dot(self.weights, x) + self.bias
        return 1 if z >= 0 else 0  # step activation function

# Test AND logic
perceptron = Perceptron(2)
perceptron.weights = np.array([0.6, 0.6])
perceptron.bias = -1.0
print("0 AND 0 =", perceptron.forward([0, 0]))  # 0
print("1 AND 0 =", perceptron.forward([1, 0]))  # 0
print("1 AND 1 =", perceptron.forward([1, 1]))  # 1
2. Activation Functions: Introducing Nonlinearity
Core role: decides whether a neuron fires, introducing the nonlinear transformation that lets stacked layers express more than a single linear map.
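To see why that nonlinearity matters, here is a small sketch (the layer shapes are arbitrary choices for illustration): without an activation function, two stacked linear layers collapse into a single linear layer.

import numpy as np

x = np.random.randn(4)               # toy input
W1 = np.random.randn(4, 3)           # first linear layer
W2 = np.random.randn(3, 2)           # second linear layer

stacked = (x @ W1) @ W2              # two linear layers, no activation in between
collapsed = x @ (W1 @ W2)            # one equivalent linear layer
print(np.allclose(stacked, collapsed))   # True: the extra depth adds no expressive power

nonlinear = np.maximum(0, x @ W1) @ W2   # inserting a ReLU breaks the equivalence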
Visualizing activation functions:
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 100)
relu = np.maximum(0, x)
sigmoid = 1 / (1 + np.exp(-x))

plt.figure(figsize=(10, 4))
plt.subplot(121); plt.title("ReLU"); plt.plot(x, relu)
plt.subplot(122); plt.title("Sigmoid"); plt.plot(x, sigmoid)
plt.tight_layout()
plt.show()
3. Loss Functions: The Compass of Model Optimization
Core role: quantifies the gap between the model's predictions and the ground-truth values.
Commonly used loss functions compared: mean squared error (MSE) for regression tasks, cross-entropy for classification.
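As a point of contrast with the cross-entropy code below, here is a minimal MSE sketch (the toy numbers are assumptions chosen for illustration):

import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)   # regression loss: average squared difference

y_true = np.array([3.0, -0.5, 2.0])   # toy regression targets
y_pred = np.array([2.5, 0.0, 2.0])    # toy predictions
print("MSE:", mse(y_true, y_pred))    # ≈ 0.167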
Cross-entropy implementation:
def cross_entropy(y_true, y_pred, eps=1e-15):
    y_pred = np.clip(y_pred, eps, 1 - eps)  # guard against log(0)
    return -np.sum(y_true * np.log(y_pred))

# Three-class example
y_true = np.array([0, 1, 0])          # true label: class 1
y_pred = np.array([0.2, 0.7, 0.1])    # predicted probabilities
print("CE Loss:", cross_entropy(y_true, y_pred))  # ≈ 0.357
Loss function surface:
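As a rough idea of where such a surface comes from, the sketch below plots the MSE loss of a single linear neuron over a grid of weight and bias values; the toy data generated from y = 2x + 1 and the grid ranges are assumptions chosen for illustration.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 50)
y = 2 * x + 1                                   # toy targets from an assumed "true" line

# Evaluate the MSE loss on a grid of (w, b) candidates
w_grid, b_grid = np.meshgrid(np.linspace(-1, 5, 100), np.linspace(-2, 4, 100))
loss = np.mean((w_grid[..., None] * x + b_grid[..., None] - y) ** 2, axis=-1)

fig = plt.figure(figsize=(6, 5))
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(w_grid, b_grid, loss, cmap="viridis")
ax.set_xlabel("w"); ax.set_ylabel("b"); ax.set_zlabel("MSE loss")
plt.show()

The bowl-shaped surface has its minimum at (w, b) = (2, 1), which is exactly the point gradient descent tries to reach.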
4. Backpropagation: The Engine of Neural Networks
Core principle: gradients are computed with the chain rule: ∂Loss/∂w = ∂Loss/∂y · ∂y/∂z · ∂z/∂w
Forward- and backward-pass flow:
graph LR
    A[Input x] --> B[Weighted sum z = w·x + b]
    B --> C[Activation a = f(z)]
    C --> D[Output y_pred]
    D --> E[Compute loss L]
    E -->|backpropagation| D
    D -->|∂L/∂y_pred| C
    C -->|∂L/∂a * f'(z)| B
    B -->|∂L/∂w = ∂L/∂z * x| W[Update weights]
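Before the full two-layer implementation, a hand-worked chain-rule example may help. The sketch below uses a single sigmoid neuron with a squared-error loss; all of the numbers are assumptions chosen for illustration.

import numpy as np

x, w, b, target = 2.0, 0.5, 0.1, 1.0
z = w * x + b                        # weighted sum: z = 1.1
y = 1 / (1 + np.exp(-z))             # sigmoid output ≈ 0.750
loss = 0.5 * (y - target) ** 2       # squared-error loss

dL_dy = y - target                   # ∂L/∂y
dy_dz = y * (1 - y)                  # ∂y/∂z, the sigmoid derivative
dz_dw = x                            # ∂z/∂w
dL_dw = dL_dy * dy_dz * dz_dw        # chain rule: ∂L/∂w = ∂L/∂y · ∂y/∂z · ∂z/∂w
print(f"dL/dw = {dL_dw:.4f}")        # ≈ -0.094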
Hand-rolled two-layer network:
class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.W1 = np.random.randn(input_size, hidden_size) * 0.01
        self.b1 = np.zeros(hidden_size)
        self.W2 = np.random.randn(hidden_size, output_size) * 0.01
        self.b2 = np.zeros(output_size)

    def forward(self, X):
        self.z1 = np.dot(X, self.W1) + self.b1
        self.a1 = np.tanh(self.z1)  # hidden-layer activation
        self.z2 = np.dot(self.a1, self.W2) + self.b2
        exp_scores = np.exp(self.z2 - np.max(self.z2))  # shift for numerical stability
        self.probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
        return self.probs

    def backward(self, X, y, learning_rate=0.01):
        # Output-layer error (copy so the stored probabilities are not modified in place)
        delta3 = self.probs.copy()
        delta3[range(len(X)), y] -= 1  # ∂L/∂z2 = y_pred - y_true
        # Gradients for the second (output) layer's parameters
        dW2 = np.dot(self.a1.T, delta3)
        db2 = np.sum(delta3, axis=0)
        # Error propagated back to the hidden layer
        delta2 = np.dot(delta3, self.W2.T) * (1 - np.power(self.a1, 2))  # derivative of tanh
        # Gradients for the first (hidden) layer's parameters
        dW1 = np.dot(X.T, delta2)
        db1 = np.sum(delta2, axis=0)
        # Parameter updates
        self.W1 -= learning_rate * dW1
        self.b1 -= learning_rate * db1
        self.W2 -= learning_rate * dW2
        self.b2 -= learning_rate * db2
5. Vanishing and Exploding Gradients in Depth
Root cause: the compounding effect of multiplying gradients layer after layer in deep networks.
Mathematical explanation: ∂L/∂w1 = ∂L/∂y · (∏ ∂a_i/∂z_i · w_i) · ∂z1/∂w1, where every layer in the chain contributes one activation derivative and one weight factor to the product.
When |∏ ∂a_i/∂z_i · w_i| → 0 (e.g., sigmoid derivatives never exceed 0.25), the gradient vanishes.
When |∏ ∂a_i/∂z_i · w_i| → ∞ (e.g., repeatedly multiplying by large weights), the gradient explodes.
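A numerical sketch of the vanishing case (assuming a 30-layer chain of sigmoid units with all weights set to 1, so each backward step only multiplies by a sigmoid derivative, which is at most 0.25):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.5          # assumed pre-activation value at every layer
grad = 1.0       # gradient arriving from the loss
for _ in range(30):
    grad *= sigmoid(z) * (1 - sigmoid(z))    # each factor ≈ 0.235 here
print(f"Gradient after 30 layers: {grad:.2e}")   # ~1e-19, effectively zero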
Solutions:
- Activation function choice: prefer the ReLU family over sigmoid/tanh
- Weight initialization:
# Xavier initialization (sigmoid/tanh)
W = np.random.randn(fan_in, fan_out) / np.sqrt(fan_in)
# He initialization (ReLU)
W = np.random.randn(fan_in, fan_out) / np.sqrt(fan_in / 2)
- Normalization techniques: BatchNorm / LayerNorm
- Residual connections: ResNet-style skip connections (see the sketch below)
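A minimal residual-connection sketch (a plain-NumPy toy, not actual ResNet code): the identity path of the skip connection gives gradients a route around the transformed branch.

import numpy as np

def branch(x, W):
    return np.maximum(0, x @ W)      # assumed ReLU layer acting as the transformed branch

def residual_block(x, W):
    return x + branch(x, W)          # y = x + f(x): the "+ x" term passes gradients through unchanged

x = np.random.randn(4, 8)
W = np.random.randn(8, 8) * 0.1
print(residual_block(x, W).shape)    # (4, 8): input and branch output must share a shape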
6. The Fundamentals in Modern Neural Network Architectures
Backpropagation inside the Transformer:
# Gradient computation for self-attention (simplified)
def attention_backward(d_output, Q, K, V, attn_weights):
    # d_output: gradient flowing in from the layer above
    dV = np.dot(attn_weights.T, d_output)
    d_attn = np.dot(d_output, V.T)
    # softmax gradient
    d_scores = attn_weights * (d_attn - np.sum(d_attn * attn_weights, axis=-1, keepdims=True))
    dQ = np.dot(d_scores, K)
    dK = np.dot(d_scores.T, Q)
    return dQ, dK, dV
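The backward code above presumes a forward pass of the form output = softmax(Q·Kᵀ)·V. A minimal matching forward sketch (the 1/√d scaling is omitted, and the shapes are arbitrary assumptions) could look like this:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))   # numerically stable softmax
    return e / np.sum(e, axis=axis, keepdims=True)

def attention_forward(Q, K, V):
    scores = np.dot(Q, K.T)            # query-key similarity
    attn_weights = softmax(scores)     # row-wise attention distribution
    return np.dot(attn_weights, V), attn_weights

Q, K, V = (np.random.randn(4, 8) for _ in range(3))
output, attn_weights = attention_forward(Q, K, V)
print(output.shape, attn_weights.shape)   # (4, 8) (4, 4)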
How backpropagation behaves in convolutional networks:
- Weight sharing: the same kernel accumulates gradients from every spatial position
- Local connectivity: each output value depends only on a local patch of the input
- The backward pass is itself a convolution: ∂L/∂input = conv2d(∂L/∂output, rotated(kernel)) (a 1-D sanity check follows below)
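A small 1-D sanity check of the last point (the toy sizes are assumptions): the gradient with respect to the input equals a convolution of the upstream gradient with the flipped kernel, which is exactly what np.convolve computes.

import numpy as np

x = np.random.randn(6)       # toy input signal
k = np.random.randn(3)       # toy kernel
y = np.array([np.dot(x[i:i + 3], k) for i in range(4)])   # "valid" cross-correlation forward pass

d_out = np.random.randn(4)   # assumed upstream gradient ∂L/∂y

# Brute force: each output position scatters its gradient back onto the patch it read
d_in_loop = np.zeros(6)
for i in range(4):
    d_in_loop[i:i + 3] += d_out[i] * k

# Closed form: cross-correlation with the 180°-rotated kernel
# (np.convolve flips its second argument, so passing k directly gives the same result)
d_in_conv = np.convolve(d_out, k, mode="full")
print(np.allclose(d_in_loop, d_in_conv))   # True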
7. Hands-On: Handwritten Digit Recognition
Complete training pipeline:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

# Data preparation
digits = load_digits()
X = digits.data / 16.0  # normalize to [0, 1]
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# One-hot encode the labels
encoder = OneHotEncoder(sparse_output=False)
y_train_onehot = encoder.fit_transform(y_train.reshape(-1, 1))

# Network initialization
nn = NeuralNetwork(input_size=64, hidden_size=32, output_size=10)

# Training loop
for epoch in range(1000):
    # Forward pass
    probs = nn.forward(X_train)
    # Compute loss
    loss = cross_entropy(y_train_onehot, probs)
    # Backward pass
    nn.backward(X_train, y_train, learning_rate=0.01)
    if epoch % 100 == 0:
        test_pred = np.argmax(nn.forward(X_test), axis=1)
        acc = np.mean(test_pred == y_test)
        print(f"Epoch {epoch}: Loss={loss:.4f}, Acc={acc:.4f}")
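Assuming the training script above has already run, a quick spot check on a single held-out digit might look like this (a sketch, not part of the original pipeline):

sample = X_test[:1]                               # one held-out 8x8 digit, flattened to 64 features
pred = np.argmax(nn.forward(sample), axis=1)[0]   # class with the highest predicted probability
print("predicted:", pred, "actual:", y_test[0])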
8. Learning Path and Key Takeaways
Progression path:
- Build the foundations: perceptron → multilayer perceptron → backpropagation
- Modern architectures: CNN → RNN → Transformer → GNN
- Optimization techniques: SGD → Adam → second-order methods
- Regularization: Dropout → BatchNorm → weight decay