Confluent, Inc. announced new capabilities in Confluent Cloud for Apache Flink® that streamline the development of real-time artificial intelligence (AI) applications. Flink Native Inference cuts through complex workflows by enabling teams to run any open source AI model directly in Confluent Cloud. Flink search unifies data access across multiple vector databases, simplifying discovery and retrieval within a single interface. And new built-in machine learning (ML) functions bring AI-driven use cases, such as forecasting and anomaly detection, directly into Flink SQL, putting advanced data science within reach of more teams. Together, these innovations redefine how businesses can harness AI for real-time customer engagement and decision-making.

The AI boom is here. According to McKinsey, 92% of companies plan to increase their AI investments over the next three years. Organizations want to seize this opportunity and capitalize on the promise of AI. However, the road to building real-time AI apps is complicated.

Developers are juggling multiple tools, languages, and interfaces to incorporate ML models and pull valuable context from the many places their data lives. This fragmented workflow leads to costly inefficiencies, operational slowdowns, and AI hallucinations that can damage reputations.

As the only serverless stream processing solution on the market that unifies real-time and batch processing, Confluent Cloud for Apache Flink empowers teams to handle both continuous streams of data and batch workloads within a single platform.

This eliminates the complexity and operational overhead of managing separate processing solutions. With these newly released AI, ML, and analytics features, Confluent Cloud for Apache Flink enables businesses to streamline more workflows and unlock greater efficiency. The features are available in an early access program, which is open for signup to Confluent Cloud customers.

Flink Native Inference: Run open source AI models in Confluent Cloud without added infrastructure management.

When working with ML models and data pipelines, developers often use separate tools and languages, leading to complex, fragmented workflows and outdated data. Flink Native Inference simplifies this by enabling teams to run open source or fine-tuned AI models directly in Confluent Cloud.

This approach offers greater flexibility and cost savings. And because the data never leaves the platform for inference, it adds another layer of security.
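As a concrete sketch, model registration and inference in Flink SQL look roughly like the following. ML_PREDICT is the documented inference function in Confluent Cloud for Apache Flink; the table, the model name, and the option values for natively hosted models are illustrative assumptions, since the early access program defines the exact keys.

```sql
-- Register a model for use in Flink SQL. The WITH options are
-- illustrative; early access defines the exact keys for models
-- hosted natively in Confluent Cloud.
CREATE MODEL `review_sentiment`
INPUT (review STRING)
OUTPUT (sentiment STRING)
WITH (
  'task' = 'classification',
  'provider' = 'confluent'  -- assumption: model runs inside Confluent Cloud
);

-- Score each streaming record in place, with no external serving layer.
SELECT r.id, r.review, p.sentiment
FROM product_reviews AS r,
     LATERAL TABLE(ML_PREDICT('review_sentiment', r.review)) AS p(sentiment);
```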

Flink search: Use just one interface to access data from multiple vector databases.

Vector searches provide large language models (LLMs) with the context needed to prevent hallucinations and ensure trustworthy results. Flink search simplifies accessing real-time data from vector databases such as MongoDB, Elasticsearch, and Pinecone. This eliminates the need for complex ETL processes or manual data consolidation, saving valuable time and resources while ensuring that data is contextual and always up to date.
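In practice, this means an external vector index can be declared and queried from the same SQL surface as the streams themselves. The shape below is a hypothetical sketch: the connector option keys and the VECTOR_SEARCH call are assumptions based on the announcement, not confirmed early access syntax.

```sql
-- Hypothetical sketch: the option keys and the VECTOR_SEARCH call are
-- assumptions based on the announcement, not confirmed syntax.
CREATE TABLE support_docs (
  doc_id    STRING,
  chunk     STRING,
  embedding ARRAY<FLOAT>
) WITH (
  'connector'        = 'mongodb',     -- assumed external-index connector
  'mongodb.endpoint' = '<endpoint>',  -- placeholder
  'mongodb.index'    = 'support_docs'
);

-- For each incoming question, fetch the three nearest chunks as context.
SELECT q.question, s.chunk
FROM questions AS q,
     LATERAL TABLE(
       VECTOR_SEARCH(TABLE support_docs, 3,
                     DESCRIPTOR(embedding), q.question_embedding)
     ) AS s;
```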

Built-in ML functions: Make data science skills accessible to more teams.

Many data science solutions require highly specialized expertise, creating bottlenecks in development cycles. Built-in ML functions simplify complex tasks, such as forecasting, anomaly detection, and real-time visualization, directly in Flink SQL. They make real-time AI accessible to more developers, enabling teams to gain actionable insights faster and businesses to make smarter decisions with greater speed and agility.
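For instance, an anomaly check and a short-horizon forecast over a metrics stream could be expressed as ordinary SQL. ML_DETECT_ANOMALIES and ML_FORECAST reflect the announced capabilities, but their exact signatures are part of the early access program, so the call shape below is an illustrative assumption.

```sql
-- Illustrative call shape; the exact function signatures are defined
-- by the early access program.
SELECT
  metric_time,
  orders_per_min,
  ML_DETECT_ANOMALIES(orders_per_min, metric_time) OVER w AS is_anomaly,
  ML_FORECAST(orders_per_min, metric_time)         OVER w AS forecast
FROM order_metrics
WINDOW w AS (
  ORDER BY metric_time
  RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW
);
```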

Confluent also announced further advancements in Confluent Cloud that make it easier for teams to connect to and access their real-time data: Tableflow, Freight Clusters, Confluent for Visual Studio (VS) Code, and the Oracle XStream CDC Source Connector.