AiThority, September 17, 2024
Tecton Unveils Major Platform Expansion to Help Enterprises Productionize LLM Applications

Tecton has announced a major platform expansion to help enterprises turn LLMs from experimental projects into reliable, context-aware AI applications that improve efficiency and competitiveness.

🎯 The expansion fuses real-time data with LLMs to build smarter product capabilities and AI-driven experiences that boost team efficiency, such as predicting user mentions and improving in-product search.

💪 It addresses the underutilization of LLMs in enterprise production environments: by supplying comprehensive real-time data, it makes LLMs more practically useful, enabling automation, personalization, and better decision-making in business operations.

🛠️ Tecton is launching a suite of capabilities, including managed embeddings, scalable real-time data integration, enterprise-grade dynamic prompt management, and innovative LLM-powered feature generation, to help customers build production-grade generative AI applications.

🔒 Tecton emphasizes data security and privacy: its new Feature Retrieval API gives LLMs access to real-time data while keeping that data protected, improving response accuracy and relevance.

📈 Tecton's dynamic prompt management adds version control and related capabilities, helping teams iterate on models more efficiently, reduce compliance risk, and standardize enterprise AI practices.

Empowering enterprises to take LLMs from experimental projects to reliable, context-aware AI applications at scale

Tecton today announced a major platform expansion to unlock the full potential of Generative AI in enterprise applications. This release empowers AI teams to build reliable, high-performing systems by infusing LLMs with comprehensive, real-time contextual data.

Generative AI, powered by LLMs, promises to transform business operations with unparalleled automation, personalization, and decision-making capabilities. However, LLMs remain strikingly underutilized in enterprise production environments. According to a study by Gartner, only 53% of AI projects ever make it from prototype to production, indicating that a significant portion of enterprise GenAI initiatives are not yet delivering tangible business value at scale.

The primary reason for this limited adoption is the unpredictable nature of LLMs when faced with dynamic business environments. This stems from LLMs’ lack of up-to-date, domain-specific knowledge and real-time contextual awareness. The true value of AI for enterprises lies in leveraging their unique, company-specific data to create customized solutions that are deeply connected to all aspects of their business.

“The AI industry is at a crossroads. We’ve seen the potential of LLMs, but their adoption in enterprise production environments has been stifled by reliability and trust issues,” says Mike Del Balso, CEO and Co-Founder of Tecton. “Our platform expansion represents a paradigm shift in how enterprises can leverage their data to build production AI applications. By focusing on better data rather than bigger models, we’re enabling companies to deploy smarter, more resilient AI applications that are customized to their unique business data and can be trusted in mission-critical scenarios.”

Tecton enhances retrieval-augmented generation (RAG) applications by integrating comprehensive, real-time data from across the enterprise. This approach augments the retrieved candidates with up-to-date, contextual information, enabling the LLM to make more informed decisions. The outcome is hyper-personalized, context-aware AI applications capable of split-second accuracy in dynamic environments. For instance, an e-commerce AI could instantly consider a customer’s browsing behavior, inventory levels, and current promotions to retrieve the most relevant product candidates, significantly improving recommendation quality and conversion rates.
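
To make that data flow concrete, here is a minimal sketch of the enrichment step described above. All functions and data are hypothetical stand-ins rather than Tecton API calls; they illustrate the shape of the pipeline, not its implementation:

```python
def retrieve_candidates(query: str, top_k: int = 3) -> list[str]:
    # Stand-in for vector-similarity retrieval over a product catalog.
    return ["wireless mouse", "mechanical keyboard", "usb hub"][:top_k]

def get_realtime_context(user_id: str, products: list[str]) -> dict:
    # Stand-in for fresh operational data: inventory, promotions, behavior.
    return {
        "in_stock": {"wireless mouse": True, "usb hub": False},
        "active_promotion": "10% off peripherals today",
        "recently_viewed": ["laptop stand"],
    }

def build_prompt(user_id: str, query: str) -> str:
    # Augment the retrieved candidates with up-to-date context
    # before the LLM ranks and explains them.
    candidates = retrieve_candidates(query)
    context = get_realtime_context(user_id, candidates)
    return (
        f"User query: {query}\n"
        f"Retrieved candidates: {candidates}\n"
        f"Real-time context: {context}\n"
        "Recommend only in-stock items and mention any active promotion."
    )

print(build_prompt("u123", "ergonomic desk accessories"))
```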

To help customers build production Generative AI applications, Tecton is launching a suite of capabilities including managed embeddings, scalable real-time data integration for LLMs, enterprise-grade dynamic prompt management, and innovative LLM-powered feature generation.

Boost Productivity and Optimize Costs with Managed Embeddings Generation

Tecton now offers a comprehensive embeddings solution that generates and manages rich representations of unstructured data to power generative AI applications. This service efficiently handles transforming text into numerical vectors that capture semantic meaning, enabling various downstream AI tasks. For instance, it can convert a customer review like “The product arrived quickly and works great!” into a numerical vector that encodes the sentiment, topic, and key aspects of the review. These vector representations can then be stored in a vector database, enabling easy comparison and candidate retrieval across thousands of such reviews.
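
For illustration, the sketch below reproduces this text-to-vector-to-retrieval flow with the open-source sentence-transformers library rather than Tecton's managed service; the model name and reviews are arbitrary examples:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# A pre-trained open-source embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")

reviews = [
    "The product arrived quickly and works great!",
    "Shipping took forever and the box was damaged.",
    "Fast delivery, very happy with the purchase.",
]

# Transform text into numerical vectors that capture semantic meaning.
vectors = model.encode(reviews, normalize_embeddings=True)

# Retrieval: find the stored review most similar to a query.
query = model.encode(["quick shipping"], normalize_embeddings=True)
scores = vectors @ query.T  # cosine similarity, since vectors are normalized
print(reviews[int(np.argmax(scores))])  # the fast-delivery reviews rank highest
```

In production, the vectors would live in a vector database rather than an in-memory array, but the comparison step is the same idea at scale.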

Tecton’s comprehensive management of the embeddings lifecycle, from generation to storage and retrieval, dramatically reduces the engineering overhead typically associated with implementing a RAG architecture. As a result, data scientists and ML engineers can shift their focus from infrastructure management to improving model performance, ultimately enhancing productivity and innovation.

Tecton’s embeddings service supports both pre-trained models and custom embedding models, allowing teams to bring their own models or leverage state-of-the-art open-source options. This flexibility enables faster productionization, improved model performance, and optimized costs.

Build Hyper-Personalized AI Applications with Real-Time Context

Tecton’s new Feature Retrieval API allows developers to provide engineered features for LLMs to access when generating responses. This integration enables LLMs to access real-time or streaming data about user behavior, transactions, and operational metrics, dramatically improving their ability to provide accurate, contextually relevant responses.

For example, in a customer service application, an LLM could access up-to-date information about a customer’s recent purchases, support history, and account status, allowing it to provide personalized and accurate assistance. This capability bridges the gap between an LLM’s general knowledge and the specific, current information needed to handle real-world business scenarios. As a result, enterprises can create AI applications that are truly tailored to their business, leading to superior customer experiences, improved operational efficiency, and a significant competitive edge in the market.
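
A rough sketch of what this might look like from application code follows. The workspace, service name, and feature fields are invented for illustration, and the calls follow Tecton's existing Python SDK conventions (get_workspace, get_online_features); the new Feature Retrieval API's exact interface may differ:

```python
import tecton

# Hypothetical workspace and feature service names.
ws = tecton.get_workspace("prod")
fs = ws.get_feature_service("customer_support_context")

def build_support_context(customer_id: str) -> str:
    # Fetch fresh feature values keyed by the customer.
    features = fs.get_online_features(
        join_keys={"customer_id": customer_id}
    ).to_dict()
    # Inject the retrieved, customer-specific facts into the LLM's prompt.
    return (
        f"Recent purchases: {features.get('recent_purchases')}\n"
        f"Open support tickets: {features.get('open_ticket_count')}\n"
        f"Account status: {features.get('account_status')}"
    )
```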

The API is designed with enterprise security and privacy in mind, ensuring that sensitive data is protected and that only authorized models and agents can access specific data. This allows enterprises to maintain control over their data while still leveraging the power of LLMs.

Streamline AI Development with Dynamic, Version-Controlled Prompts

Tecton’s extended declarative framework now incorporates prompt management, introducing standardization, version control, and DevOps best practices to LLM prompts. This advancement tackles a significant challenge in LLM application development: the lack of systematic prompt management, which is crucial for guiding LLM behavior.

The tight integration between features and prompts facilitates dynamic enrichment of prompts with contextual data. Tecton enables prompt testing against historical data and provides time-correct context for fine-tuning large language models. This ensures prompt effectiveness across different time periods and enhances LLM training with historically relevant data, leading to more effective model iteration and improvement over time.

Dynamic Prompt Management enables version control, change tracking, and easy rollback of prompts when necessary. This drives enterprise-wide standardization of AI practices, accelerates development, and speeds the adoption of prompt-engineering best practices, potentially saving hundreds of development hours while reducing compliance risk. It is particularly valuable for keeping prompts consistent across development, staging, and production environments, and for ensuring regulatory compliance in industries where AI decision-making processes must be auditable.
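
As a concept sketch only (plain Python, not Tecton's declarative framework), versioned, feature-enriched prompts might look like this: each version is immutable, rollback means pinning an earlier key, and live features are injected at render time:

```python
PROMPTS = {
    # Each (name, version) entry is immutable once published.
    ("support_agent", "v1"): "You are a support agent. Customer: {name}.",
    ("support_agent", "v2"): (
        "You are a support agent.\n"
        "Customer: {name}\n"
        "Recent purchases: {recent_purchases}\n"
        "Account status: {account_status}"
    ),
}

def render_prompt(name: str, version: str, features: dict) -> str:
    """Resolve a pinned prompt version and enrich it with live features."""
    return PROMPTS[(name, version)].format(**features)

print(render_prompt("support_agent", "v2", {
    "name": "Ada",
    "recent_purchases": ["laptop stand"],
    "account_status": "active",
}))
```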

Generate Features Using LLMs and Natural Language

Tecton’s feature engineering framework now leverages LLMs to extract meaningful information from unstructured text data, transforming it into structured, usable formats and creating novel features that were previously difficult or impossible to generate. These LLM-generated features can enhance traditional ML models and deep learning applications, or enrich context for LLMs themselves. This approach bridges qualitative data processing (where LLMs excel) with quantitative analysis (where traditional ML is still crucial), enabling more sophisticated AI applications.

For instance, an e-commerce company can now automatically categorize product descriptions, extract key attributes, or generate sentiment scores from customer reviews. These LLM-generated features can then be used to improve search relevance, personalize recommendations, or enhance customer service interactions.

The framework handles the complexities of working with LLMs at scale, including automatic caching to reduce API calls and associated costs, and rate limiting to ensure compliance with API usage limits. This allows data teams to focus on defining the feature logic rather than worrying about the underlying infrastructure.
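
Here is a toy sketch of the caching behavior described above, with the LLM call stubbed out so it runs as-is; in practice the cached function would call a real model endpoint, which is exactly why caching and rate limiting matter:

```python
import functools

@functools.lru_cache(maxsize=10_000)  # identical texts never trigger a second LLM call
def sentiment_score(review_text: str) -> float:
    prompt = f"Rate the sentiment of this review from -1 to 1: {review_text}"
    return call_llm(prompt)

def call_llm(prompt: str) -> float:
    # Stand-in for a real, rate-limited model endpoint.
    return 0.9 if "great" in prompt else -0.5

print(sentiment_score("The product arrived quickly and works great!"))  # 0.9
print(sentiment_score("The product arrived quickly and works great!"))  # served from cache
```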

Quotes

“Tecton’s platform expansion is a game-changer for AI-powered collaboration,” said Joshua Hansen, Principal Engineer at Atlassian. “By leveraging our data and user interactions in real time, we can build smarter product capabilities and AI-driven experiences that enhance team efficiency. This technology enables predicting user mentions, improved in-product search experiences, and forming associations between people and work, ultimately transforming how teams work together.”

“Tecton’s platform has been instrumental in our efforts to enhance real-time fraud detection and personalization across millions of users,” said Joseph McAllister, Senior Software Engineer, ML Platform at Coinbase. “A single context platform that can serve relevant input signals will improve the performance of all our models, both GenAI and predictive ML. This unlocks more sophisticated, context-aware AI applications that can produce highly accurate responses at scale and drive intelligent experiences across our products in real-time.”

“With this platform expansion, we’re not just improving AI performance—we’re fundamentally changing how enterprises approach AI development,” said Kevin Stumpf, CTO and Co-Founder of Tecton. “By providing a unified framework for both predictive ML and generative AI, we’re enabling organizations to leverage their business data to build advanced AI-powered functionality directly into their applications, all fueled by a single data platform.”
