MarkTechPost@AI — March 29, 13:06
A Step by Step Guide to Solve 1D Burgers’ Equation with Physics-Informed Neural Networks (PINNs): A PyTorch Approach Using Automatic Differentiation and Collocation Methods

This article introduces an innovative approach that blends deep learning with physical laws, using Physics-Informed Neural Networks (PINNs) to solve the one-dimensional Burgers' equation. Using PyTorch on Google Colab, it demonstrates how to encode the governing differential equation directly into the neural network's loss function, so the model learns a solution u(x,t) that inherently respects the underlying physics. This technique reduces the reliance on large labeled datasets and offers a fresh perspective on solving complex, non-linear partial differential equations with modern computational tools.

🤖 Import the required libraries: PyTorch, NumPy, and matplotlib are imported for deep learning, numerical computation, and visualization of results, and the default tensor data type is set to float32 for consistent numerical precision.

🔢 Define the simulation domain and data points: The simulation domain for the Burgers' equation is established, including the spatial and temporal boundaries, the viscosity, and the number of collocation, initial, and boundary points. Random and evenly spaced data points are then generated for these conditions and converted into PyTorch tensors, enabling gradient computation where needed.

🧱 Build the PINN model: A custom Physics-Informed Neural Network (PINN) is defined by extending PyTorch's nn.Module. The architecture is constructed dynamically from a list of layer sizes, with each linear layer followed by a Tanh activation. The model takes a 2-dimensional input, passes it through four hidden layers of 50 neurons each, and outputs a single value; finally, the model is instantiated and its structure printed.

⚙️ Train the PINN model: The training loop uses the Adam optimizer with a learning rate of 1×10⁻³. Over 5000 epochs, it repeatedly computes the loss (the PDE residual plus the initial- and boundary-condition errors), backpropagates the gradients, and updates the model parameters. Every 500 epochs the current epoch and loss are printed to monitor progress, and completion is announced when training finishes.

📊 Visualize the results: A grid of points over the spatial (x) and temporal (t) domain is fed to the trained model to predict the solution u(x, t), and the output is reshaped into a 2D array. The predicted solution is then visualized as a contour plot with matplotlib, complete with a colorbar, axis labels, and a title, showing how the PINN approximates the dynamics of the Burgers' equation.

In this tutorial, we explore an innovative approach that blends deep learning with physical laws by leveraging Physics-Informed Neural Networks (PINNs) to solve the one-dimensional Burgers’ equation. Using PyTorch on Google Colab, we demonstrate how to encode the governing differential equation directly into the neural network’s loss function, allowing the model to learn the solution 𝑢(𝑥,𝑡) that inherently respects the underlying physics. This technique reduces the reliance on large labeled datasets and offers a fresh perspective on solving complex, non-linear partial differential equations using modern computational tools.
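Concretely, the problem solved below (matching the constants set up in the code that follows) is the viscous Burgers' equation with a sinusoidal initial condition and homogeneous Dirichlet boundary conditions:

$$ \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}, \qquad \nu = \frac{0.01}{\pi}, \qquad x \in [-1, 1],\ t \in [0, 1], $$

$$ u(x, 0) = -\sin(\pi x), \qquad u(-1, t) = u(1, t) = 0. $$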

!pip install torch matplotlib

First, we install the PyTorch and matplotlib libraries using pip, ensuring you have the necessary tools for building neural networks and visualizing the results in your Google Colab environment.

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

torch.set_default_dtype(torch.float32)

We import the essential libraries: PyTorch for deep learning, NumPy for numerical operations, and matplotlib for plotting. We set the default tensor data type to float32 for consistent numerical precision throughout our computations.

x_min, x_max = -1.0, 1.0
t_min, t_max = 0.0, 1.0
nu = 0.01 / np.pi

N_f = 10000  # number of collocation points
N_0 = 200    # number of initial-condition points
N_b = 200    # number of boundary-condition points

# Random collocation points in the interior of the (x, t) domain
X_f = np.random.rand(N_f, 2)
X_f[:, 0] = X_f[:, 0] * (x_max - x_min) + x_min  # x in [-1, 1]
X_f[:, 1] = X_f[:, 1] * (t_max - t_min) + t_min  # t in [0, 1]

# Initial condition: u(x, 0) = -sin(pi * x)
x0 = np.linspace(x_min, x_max, N_0)[:, None]
t0 = np.zeros_like(x0)
u0 = -np.sin(np.pi * x0)

# Boundary conditions: u(-1, t) = u(1, t) = 0
tb = np.linspace(t_min, t_max, N_b)[:, None]
xb_left = np.ones_like(tb) * x_min
xb_right = np.ones_like(tb) * x_max
ub_left = np.zeros_like(tb)
ub_right = np.zeros_like(tb)

# Convert to PyTorch tensors; only the collocation points need gradients
X_f = torch.tensor(X_f, dtype=torch.float32, requires_grad=True)
x0 = torch.tensor(x0, dtype=torch.float32)
t0 = torch.tensor(t0, dtype=torch.float32)
u0 = torch.tensor(u0, dtype=torch.float32)
tb = torch.tensor(tb, dtype=torch.float32)
xb_left = torch.tensor(xb_left, dtype=torch.float32)
xb_right = torch.tensor(xb_right, dtype=torch.float32)
ub_left = torch.tensor(ub_left, dtype=torch.float32)
ub_right = torch.tensor(ub_right, dtype=torch.float32)

We establish the simulation domain for the Burgers' equation by defining the spatial and temporal boundaries, the viscosity, and the number of collocation, initial, and boundary points. We then generate random and evenly spaced data points for these conditions and convert them into PyTorch tensors, enabling gradient computation where needed.
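As an optional sanity check (an addition to the tutorial that uses only variables defined above), you can scatter-plot the sampled points to confirm that the collocation points fill the interior of the domain while the initial and boundary points trace its edges:

# Optional: visualize the training points. Collocation points should fill
# the (x, t) rectangle; initial points lie on t = 0, boundary points on x = ±1.
plt.figure(figsize=(6, 4))
plt.scatter(X_f.detach().numpy()[:, 0], X_f.detach().numpy()[:, 1], s=1, alpha=0.3, label='collocation')
plt.scatter(x0.numpy(), t0.numpy(), s=5, c='red', label='initial')
plt.scatter(xb_left.numpy(), tb.numpy(), s=5, c='black', label='boundary')
plt.scatter(xb_right.numpy(), tb.numpy(), s=5, c='black')
plt.xlabel('x')
plt.ylabel('t')
plt.legend()
plt.show()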

class PINN(nn.Module):
    def __init__(self, layers):
        super(PINN, self).__init__()
        self.activation = nn.Tanh()
        layer_list = []
        for i in range(len(layers) - 1):
            layer_list.append(nn.Linear(layers[i], layers[i + 1]))
        self.layers = nn.ModuleList(layer_list)

    def forward(self, x):
        # Apply Tanh after every layer except the final output layer
        for layer in self.layers[:-1]:
            x = self.activation(layer(x))
        return self.layers[-1](x)

layers = [2, 50, 50, 50, 50, 1]
model = PINN(layers)
print(model)

Here, we define a custom Physics-Informed Neural Network (PINN) by extending PyTorch’s nn.Module. The network architecture is built dynamically using a list of layer sizes, where each linear layer is followed by a Tanh activation (except for the final output layer). In this example, the network takes a 2-dimensional input, passes it through four hidden layers (each with 50 neurons), and outputs a single value. Finally, the model is instantiated with the specified architecture, and its structure is printed.
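As a side note, the same architecture can be built more compactly with nn.Sequential. This is an equivalent sketch rather than the tutorial's code, and build_pinn and model_alt are illustrative names:

# Hypothetical, equivalent construction of the same MLP with nn.Sequential:
# Linear layers interleaved with Tanh, and no activation after the last layer.
def build_pinn(layers):
    modules = []
    for i in range(len(layers) - 2):
        modules += [nn.Linear(layers[i], layers[i + 1]), nn.Tanh()]
    modules.append(nn.Linear(layers[-2], layers[-1]))
    return nn.Sequential(*modules)

model_alt = build_pinn([2, 50, 50, 50, 50, 1])  # same shape as `model` above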

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

Here, we check if a CUDA-enabled GPU is available, set the device accordingly, and move the model to that device for accelerated computation during training and inference.

def pde_residual(model, X):
    x = X[:, 0:1]
    t = X[:, 1:2]
    u = model(torch.cat([x, t], dim=1))

    # First and second derivatives of u via automatic differentiation
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True, retain_graph=True)[0]

    # Burgers' equation residual: u_t + u * u_x - nu * u_xx = 0
    f = u_t + u * u_x - nu * u_xx
    return f

def loss_func(model):
    # PDE residual loss at the collocation points
    f_pred = pde_residual(model, X_f.to(device))
    loss_f = torch.mean(f_pred**2)

    # Initial-condition loss: u(x, 0) should match -sin(pi * x)
    u0_pred = model(torch.cat([x0.to(device), t0.to(device)], dim=1))
    loss_0 = torch.mean((u0_pred - u0.to(device))**2)

    # Boundary-condition loss: u(-1, t) = u(1, t) = 0
    u_left_pred = model(torch.cat([xb_left.to(device), tb.to(device)], dim=1))
    u_right_pred = model(torch.cat([xb_right.to(device), tb.to(device)], dim=1))
    loss_b = torch.mean(u_left_pred**2) + torch.mean(u_right_pred**2)

    loss = loss_f + loss_0 + loss_b
    return loss

Now, we compute the residual of Burgers’ equation at the collocation points by calculating the required derivatives via automatic differentiation. Then, we define a loss function that aggregates the PDE residual loss, the error from the initial condition, and the errors from the boundary conditions. This combined loss guides the network to learn a solution that satisfies both the physical law and the imposed conditions.
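Written out, the residual and the composite loss that this code minimizes are

$$ f(x, t) = u_t + u\,u_x - \nu\,u_{xx}, $$

$$ \mathcal{L} = \frac{1}{N_f}\sum_{i=1}^{N_f} f(x_f^i, t_f^i)^2 \;+\; \frac{1}{N_0}\sum_{i=1}^{N_0} \bigl(u_\theta(x_0^i, 0) + \sin(\pi x_0^i)\bigr)^2 \;+\; \frac{1}{N_b}\sum_{i=1}^{N_b} \bigl(u_\theta(-1, t_b^i)^2 + u_\theta(1, t_b^i)^2\bigr), $$

where u_θ denotes the network's output and the three terms correspond to loss_f, loss_0, and loss_b in the code.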

optimizer = optim.Adam(model.parameters(), lr=1e-3)
num_epochs = 5000

for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_func(model)
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 500 == 0:
        print(f'Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.5e}')

print("Training complete!")

Here, we set up the PINN's training loop using the Adam optimizer with a learning rate of 1×10⁻³. Over 5000 epochs, the loop repeatedly computes the loss (which includes the PDE residual and the initial- and boundary-condition errors), backpropagates the gradients, and updates the model parameters. Every 500 epochs it prints the current epoch and loss to monitor progress, and it finally announces when training is complete.
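A common refinement in the PINN literature, not used in this tutorial, is to follow the Adam phase with the quasi-Newton L-BFGS optimizer, which often drives the residual lower. Here is a minimal sketch reusing loss_func; the lr and max_iter values are illustrative:

# Optional: refine with L-BFGS after Adam (standard PINN practice, not part
# of the original tutorial). PyTorch's LBFGS needs a closure that re-evaluates
# the loss and its gradients.
lbfgs = optim.LBFGS(model.parameters(), lr=1.0, max_iter=500)

def closure():
    lbfgs.zero_grad()
    loss = loss_func(model)
    loss.backward()
    return loss

lbfgs.step(closure)
print(f'Loss after L-BFGS: {loss_func(model).item():.5e}')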

N_x, N_t = 256, 100
x = np.linspace(x_min, x_max, N_x)
t = np.linspace(t_min, t_max, N_t)
X, T = np.meshgrid(x, t)
XT = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
XT_tensor = torch.tensor(XT, dtype=torch.float32).to(device)

model.eval()
with torch.no_grad():
    u_pred = model(XT_tensor).cpu().numpy().reshape(N_t, N_x)

plt.figure(figsize=(8, 5))
plt.contourf(X, T, u_pred, levels=100, cmap='viridis')
plt.colorbar(label='u(x,t)')
plt.xlabel('x')
plt.ylabel('t')
plt.title("Predicted solution u(x,t) via PINN")
plt.show()

Finally, we create a grid of points over the defined spatial (𝑥) and temporal (𝑡) domain, feed these points to the trained model to predict the solution 𝑢(𝑥, 𝑡), and reshape the output into a 2D array. We then visualize the predicted solution as a contour plot using matplotlib, complete with a colorbar, axis labels, and a title, allowing us to observe how the PINN has approximated the dynamics of the Burgers' equation.
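As a quick qualitative check, an addition beyond the tutorial that reuses the grid variables above, you can compare the predicted slice at t = 0 against the exact initial condition −sin(πx):

# Sanity check: the learned solution at t = 0 (first row of u_pred) should
# reproduce the initial condition u(x, 0) = -sin(pi * x).
plt.figure(figsize=(6, 4))
plt.plot(x, u_pred[0, :], label='PINN at t = 0')
plt.plot(x, -np.sin(np.pi * x), '--', label='exact IC: -sin(pi x)')
plt.xlabel('x')
plt.ylabel('u')
plt.legend()
plt.show()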

In conclusion, this tutorial has showcased how PINNs can be effectively implemented to solve the 1D Burgers’ equation by incorporating the physics of the problem into the training process. Through careful construction of the neural network, generation of collocation and boundary data, and automatic differentiation, we achieved a model that learns a solution consistent with the PDE and the prescribed conditions. This fusion of machine learning and traditional physics paves the way for tackling more challenging problems in computational science and engineering, inviting further exploration into higher-dimensional systems and more sophisticated neural architectures.


Here is the Colab Notebook.

