How A&E Engineering Uses Serverless Technology to Host Online Machine Learning Models

 

The article describes the important role of machine learning in manufacturing, including detecting machine anomalies, clustering failures to investigate patterns, and running predictive maintenance. It also introduces online machine learning methods and a related case study, A&E Engineering's practice, and explains how to deploy a serverless online machine learning drift detection system.

🎯 Machine learning matters greatly in manufacturing: it can improve product quality and reduce throughput time, supporting industrial digital transformation through anomaly detection, failure clustering, and similar techniques.

💻 A&E Engineering applies the concept of online machine learning, working with AWS and using open source technologies such as Python River and the AWS CDK to lower total cost of ownership and adopt new features quickly.

📈 To keep the solution low-code, there is only one AWS Lambda function per ML model; it contains, for example, a concept drift model, and the data flow covers every step from collecting signals to processing them with the model.

🛠 Deploying the solution requires downloading a specific GitHub repository, which contains several components, including a folder with the Lambda code, a folder implementing the AWS CDK stack, and a folder with unit tests.

<section class="blog-post-content"><p>In the manufacturing domain, machine learning (ML) can have a big impact. Companies can produce better quality products and reduce throughput time by detecting machine anomalies, clustering failures to investigate patterns, and running predictive maintenance.</p><p>When data is coming in real-time, <em>online machine learning</em> methods can be used. This is a method in which the model is updated as soon as new data becomes available. This allows the model to learn incrementally, while also making a prediction for the new data point.</p><p>AWS partner <a href="https://aeengr.com/&quot;&gt;A&amp;amp;E Engineering</a> is using this concept as part of their efforts to support their clients’ industrial digital transformations. A&amp;E Engineering is headquartered in Spartanburg, South Carolina. They are a traditional systems integration company, with a twist. A&amp;E engineering has its own factory-to-cloud group, SkyIO. Through this partnership, A&amp;E and SkyIO can execute projects that begin with electrical design and end with cloud-based ML solutions. Pretty cool, huh?</p><p>A&amp;E Engineering worked closely with AWS Professional Services when they required a serverless online machine learning system. It was critical for A&amp;E Engineering to use open source technologies like <a href="https://riverml.xyz/latest&quot;&gt;Python River</a> and <a href="https://aws.amazon.com/cdk/&quot;&gt;AWS Cloud Development Kit (AWS CDK)</a> because this gives them a lower total cost of ownership (TCO) compared to closed source and proprietary alternatives. A second reason why open source is important to A&amp;E Engineering is the fact that as new features become available they can adopt them quickly. The idea that A&amp;E Engineering had was to develop a low-code solution based on open source libraries, that also does not need the introduction of too many additional AWS services. In the rest of this post, we will demonstrate their solution and show you how you can deploy a serverless online machine learning drift detection system with AWS Lambda using AWS CDK and the Python River software package.</p><h2>Prerequisites</h2><p>In order to deploy our solution successfully you will need:</p><h2>Architecture</h2><p>In order to keep things simple and in a low-code fashion there’s only one AWS Lambda function for each ML model. This function contains, for example, a <em>concept drift</em> model that detects shifts in your data for a given signal.</p><p><img class="alignnone size-full wp-image-14042" src="https://d2908q01vomqb2.cloudfront.net/ca3512f4dfa95a03169c5a670a4c91a19b3077b4/2022/09/27/architecture-diagram-ML-model.png&quot; alt="machine learning architecture diagram" width="879" height="358" /></p><p>The flow of data is as follows:</p><ul><li>For each of your shopfloor assets, e.g. a compressor, you collect signals such as pressure, temperature, etc.</li><li>One way of collecting this data from edge to cloud is <a href="https://aws.amazon.com/iot-sitewise/&quot;&gt;AWS IoT Sitewise</a>.</li><li>Each signal is sent to a deployed AWS Lambda function that:<ul><li>checks if a model already exists on Amazon Simple Storage Service (Amazon S3)</li><li>creates that model with the incoming data point if it doesn’t exist</li><li>or reads, predicts and updates the model and then writes it back to Amazon S3.</li></ul></li><li>Each use case, signal and asset, e.g. 
## Online machine learning using the open source Python River package

Now that we understand the flow of data, let's discuss what online machine learning is and how it is done using the open source Python River package. As outlined on our [AWS Well-Architected](https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/well-architected-machine-learning-lifecycle.html) pages, the machine learning lifecycle has six building blocks:

- Business goal
- ML problem framing
- Data processing
- Model development
- Deployment
- Monitoring

![machine learning lifecycle](https://d2908q01vomqb2.cloudfront.net/ca3512f4dfa95a03169c5a670a4c91a19b3077b4/2022/09/27/machine-learning-flywheel.png)

This process repeats itself over time. When you use online machine learning, the process does not change. However, the Deployment and Model Development stages merge slightly: in online ML the model updates itself as it sees new data, which means it keeps updating itself over time while it is already deployed.

A classic example for online ML is streaming data. In our context we look at streams from manufacturing equipment. As these machines produce new data points on a second-by-second basis, a model predicts and updates itself at the same pace. A common use case in manufacturing is anomaly or drift detection. Engineers are interested in understanding when a machine signal starts drifting, because most often this will lead to downtime or a failure. Preventing this up front is a powerful cost-saving approach.

Following the example from [this documentation](https://riverml.xyz/0.11.1/examples/concept-drift-detection/) of the River Python package, you can see that a signal might drift over time. It can change its operating mean, its variance, or both.

![signal drift chart](https://d2908q01vomqb2.cloudfront.net/ca3512f4dfa95a03169c5a670a4c91a19b3077b4/2022/09/27/signal-drift-chart.png)

Detecting concept drift enables near real-time alerting. While new data is sent to the model, it keeps learning new patterns. It also understands when a signal changes over time and adapts quickly. You can also see that the detection only happens after receiving a couple of new data points; this is the point at which the model realizes there was a drift and needs to adapt.

You will still need to monitor the model performance, but the advantage of this approach is that the model updates itself. There is no need to re-train it, as it has re-trained itself already.
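For a feel of what this looks like in code, here is a minimal, self-contained sketch of drift detection with River's ADWIN detector on a synthetic signal whose mean shifts halfway through. It assumes a recent River release where drift detectors expose `update()` and a `drift_detected` flag, the same API the Lambda function shown later relies on; the synthetic stream and the threshold-free defaults are illustrative.

```python
import random

from river import drift

random.seed(42)

# Synthetic stream: the operating mean jumps from ~0 to ~5 after 1,000 points.
stream = (
    [random.gauss(0, 1) for _ in range(1000)]
    + [random.gauss(5, 1) for _ in range(1000)]
)

detector = drift.ADWIN()

for i, value in enumerate(stream):
    detector.update(value)       # learn incrementally from the new point
    if detector.drift_detected:  # set once a change is statistically confirmed
        print(f"Drift detected at index {i}")
```

As in the chart above, the detector reports the drift a few samples after the shift actually happens, once enough evidence has accumulated.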
## Deploy the solution

In order to deploy the solution you will need to download [this GitHub repository](https://github.com/aws-samples/online-machine-learning-with-river-in-aws-lambda) first. It is an [AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-python.html) stack that deploys an AWS Lambda function together with an Amazon S3 bucket. The AWS Lambda function contains the model, and Amazon S3 is used to save the model(s) created and used by Lambda.

Navigate into the downloaded repository; there you will find the following code structure:

- `lambda` – this folder contains the code hosted in AWS Lambda. The code becomes part of a [Docker](https://www.docker.com/) container that is automatically built, pushed to Amazon Elastic Container Registry (Amazon ECR), and deployed for you.
- `river_app` – this folder contains the implementation of the AWS CDK stack.
- `tests` – here you'll find the unit tests that ensure your stack does what we intend it to do.
- `app.py` – the main entry file for the AWS CDK to deploy your solution.
- `cdk.json` – the AWS CDK definition file.
- `requirements.txt` – a file pointing to `setup.py` to install all packages necessary to deploy your solution.
- `setup.py` – the Python setup file that ensures all Python dependencies are installed.

Now that you know how this repository is structured, we can deploy the solution.

The `cdk.json` file tells the CDK Toolkit how to execute your application.

This project is set up like a standard Python project. We will need to create a virtual environment (virtualenv), stored under a `.venv` directory. Creating the virtualenv assumes that there is a `python3` executable in your path with access to the `venv` package. But first, let's install the AWS CDK.

1. Set the name of your stack and Amazon S3 bucket in `app.py` (a minimal sketch of this entry file follows these steps), e.g.:

   `RiverAppStack(app, "STACK_NAME", bucket_name='BUCKET_NAME')`

2. Install the AWS CDK using npm:

   `npm install -g aws-cdk`

3. Create a virtual environment on macOS / Linux:

   `$ python3 -m venv .venv`

4. After the init process is complete and the virtual environment is created, activate it:

   `$ source .venv/bin/activate`

5. If you are on a Windows platform, activate the virtual environment with:

   `% .venv\Scripts\activate.bat`

6. Once the virtual environment is activated, install the required dependencies:

   `$ pip install -r requirements.txt`

7. Make sure that your account is bootstrapped:

   `$ cdk bootstrap`

8. You can now deploy the AWS CDK stack for this code, which will be translated into an AWS CloudFormation template and appear on the corresponding console:

   `$ cdk deploy`

Optional: you can run the included unit tests with:

`$ pytest`

Congratulations! With these steps you have successfully deployed the solution.
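For reference, the entry file edited in step 1 typically looks roughly like the following. The `RiverAppStack` class and its `bucket_name` parameter come from the post's own snippet; the import path, stack name, and bucket name here are illustrative assumptions, and the actual `app.py` in the repository may differ.

```python
#!/usr/bin/env python3
import aws_cdk as cdk

# Module path is an assumption -- adjust it to match the repository layout.
from river_app.river_app_stack import RiverAppStack

app = cdk.App()

# Stack name and bucket name are placeholders; set them to your own values.
RiverAppStack(app, "drift-detection-app", bucket_name="my-online-ml-models")

app.synth()
```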
## Understanding the AWS Lambda function

The main part of the function either predicts with and updates the model, or creates the model. The function is built so that it first checks whether a model was already created and stored on Amazon S3. If it was, it loads the model object, updates it, and makes a prediction. If it was not yet created, it initializes an empty model, learns from the first data point, and then stores the object on Amazon S3.

```python
# Initialize model - will be overwritten if
# it already exists in Amazon S3. If it doesn't,
# this function will save a new empty model
# for you.
model = drift.ADWIN()

# Check if model exists...
try:
    # Load the model from Amazon S3
    logging.info("Load model...")
    response = client.get_object(
        Bucket=BUCKET,
        Key=key)
    r = BytesIO(response["Body"].read())
    model = joblib.load(r)

    # Update the model based on your newest observation
    logging.info("Update model...")
    model.update(val)

    # If drift is detected change the output
    logging.info("Detect changes...")
    if model.drift_detected:
        output_body["Drift"] = "Yes"
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        # Update the model based on your newest observation
        logging.info("Update model...")
        model.update(val)
        logging.info("Object does not exist yet. Creating...")
    else:
        logging.error("The Lambda function failed for another reason...")
        logging.error(e)
```

Another aspect we want to make you aware of is how this AWS Lambda function can be used for multiple signals from the same machine. The `event` that is passed to the AWS Lambda function needs at least two keys:

- `body`: the value collected from the machine for a signal.
- `key`: the name of the signal or another unique identifier. This key is used as the name of the object stored on Amazon S3. For instance, if you send `event['key']` equal to `compressor1_pressure`, then an object with that name will be created as your model and stored on Amazon S3. Alternatively, if an object with that name already exists, it will be used to predict and then update itself.

```python
val = float(event['body'])
key = event['key']
```

## Adapt this template for other use cases

In this section we want to discuss how you can adapt the solution for other use cases or algorithms. Let's assume we do not want to run drift detection but rather a clustering approach based on [KMeans](https://en.wikipedia.org/wiki/K-means_clustering) clustering. Following the documentation [here](https://riverml.xyz/0.11.1/api/cluster/KMeans/), our code needs to import the library first, so add the following line to the top of the AWS Lambda function found under `lambda/drift_detection/app.py`:

```python
from river import cluster
```

Next, change the model in the code: instead of using the `drift.ADWIN()` method, change it to the following line:

```python
model = cluster.KMeans(n_clusters=5)
```

This assumes that you expect the model to find at most five different clusters. If you now re-deploy the solution, you will have changed the AWS Lambda function from drift detection to clustering. Of course, you can also make use of the AWS CDK and deploy multiple functions at the same time, or deploy a separate stack for each functionality you want to implement.
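If you want to sanity-check the clustering variant locally before re-deploying, a minimal sketch with River's incremental KMeans could look like this. The `n_clusters=5` setting comes from the post; the feature names, the simulated stream, and running it outside of Lambda are assumptions made purely for illustration.

```python
import random

from river import cluster

random.seed(7)

# Incremental KMeans with the same setting used in the adapted Lambda function.
model = cluster.KMeans(n_clusters=5)

# Simulate a stream of 2-D sensor readings scattered around a few centers.
centers = [(0, 0), (5, 5), (10, 0)]
for _ in range(300):
    cx, cy = random.choice(centers)
    x = {
        "pressure": cx + random.gauss(0, 0.5),     # hypothetical feature names
        "temperature": cy + random.gauss(0, 0.5),
    }
    model.learn_one(x)            # update the clusters with the new point
    label = model.predict_one(x)  # assign the point to its nearest cluster

print("Last point assigned to cluster:", label)
```

Note that River's clusterers use `learn_one`/`predict_one` with a feature dictionary, whereas the drift detectors use `update()` on a scalar, so when you swap algorithms the handler's update and prediction calls need a matching adjustment.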
## Test your application

Once you've deployed your solution you will be able to test it. Navigate to the AWS Lambda console and click on the function named `drift-detection-app-DriftDetection*` that was deployed earlier. You can then [test your AWS Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html). An example test event can look like this:

```json
{
  "body": "42",
  "key": "compressor1.pressure"
}
```

After clicking the Test button you will see an output similar to the message shown here, which states "Execution result: succeeded."

![execution result screenshot](https://d2908q01vomqb2.cloudfront.net/ca3512f4dfa95a03169c5a670a4c91a19b3077b4/2022/09/27/execution-result-succeeded-screenshot.png)

## Cleanup

If you are done testing, please make sure that you delete everything you deployed:

- Delete the Amazon ECR repository that was created during deployment.
- Delete the Amazon S3 bucket that was created.
- To delete the AWS CDK stack, run the `cdk destroy` command in the repository.

## Conclusion

After following this blog post, you have successfully deployed an online serverless machine learning model using open source [Python River](https://riverml.xyz/latest) and the [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/). You also learned how to adapt the solution from anomaly detection to another machine learning algorithm.
