
Hazelcast Jet 4.0: Stream Processing for AI and ML

In-memory data grid (IMDG) provider Hazelcast this week announced new support in its Hazelcast Jet event stream processing engine for deploying artificial intelligence (AI) and machine learning (ML) models in mission-critical applications.

Jet 4.0 is designed to reduce time-to-deployment through "inference runners" for native Python- and Java-based models. The new release also includes expanded database support and other updates focused on data integrity.

"We saw a real bottleneck around making ML operational," Scott McMahon, senior solutions architect at Hazelcast, told Pure AI. "For a long time, it wasn't really production ready. It was hard to deploy and you didn't really know what to use it for. Now we have the tools to build those models and train those models, and we have the infrastructure to put them into the hands of the vendors, but there was still a bottleneck around making operational in real time."

That's where Hazelcast's inference runners come in. In Jet 4.0, they let models be plugged natively into the stream processing pipeline, so developers can deploy Python models inside that pipeline and feed real-time streaming data directly into them.

"If you develop and build your models using Python technology," McMahon explained, "you can now deploy it to Jet, and Jet will run that in the Python inference in a native Python environment, but still do it in parallel, distributed, and take advantage of the scale and speed of the in-memory technology. We've been able to do that with Java-based machine learning technologies. This release adds the Python inference runner. And models in the next version, which will be coming out soon, will have C++ \inference runners."

This approach eliminates the need to call out to external services via REST, which adds round-trip network latency and carries the administrative overhead of maintaining those services, the company said. In Jet, Python models run locally to the processing jobs, eliminating that latency and leveraging the engine's built-in resilience to support mission-critical deployments. These ML inference jobs can be scaled up to the number of cores per Jet node and then scaled linearly by adding more Jet nodes to the job.
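For a sense of what that looks like in code, the following is a minimal sketch of a Jet 4.0 pipeline that pushes a stream of strings through a Python handler. The event source, the model directory and the module name are placeholders, and the Python module is assumed to expose the transform_list(items) function that Jet's Python runner calls.

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;
import com.hazelcast.jet.python.PythonServiceConfig;
import com.hazelcast.jet.python.PythonTransforms;

public class PythonScoringPipeline {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.itemStream(10))                 // stand-in for a real event source
         .withoutTimestamps()
         .map(event -> "{\"seq\": " + event.sequence() + "}")  // stand-in for real feature records
         .apply(PythonTransforms.mapUsingPython(               // hands batches of strings to score.py
                 new PythonServiceConfig()
                         .setBaseDir("/opt/models/fraud")      // placeholder directory holding score.py
                         .setHandlerModule("score")))          // score.py must define transform_list(items)
         .setLocalParallelism(2)                               // Python workers per Jet member
         .writeTo(Sinks.logger());

        Jet.bootstrappedInstance().newJob(p).join();
    }
}

Because the Python workers run alongside the Jet processors, scaling the job is a matter of raising the local parallelism or adding members to the cluster.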

"With machine learning inferencing in Hazelcast Jet, customers can take models from their data scientists unchanged and deploy within a streaming pipeline," said Greg Luck, CTO of Hazelcast, in a statement. "This approach completely eliminates the impedance mismatch between the data scientist and data engineer since Hazelcast Jet can handle the data ingestion, transformation, scoring and post-processing."

This release of Hazelcast Jet also incorporates new logic that runs a two-phase commit to ensure consistency across a broader set of data sources and sinks. The new logic extends the "exactly once" guarantee by tracking reads and writes at the source and sink levels, ensuring no data is lost or processed twice when a failure or outage occurs. Customers can, for example, read data from a Java Message Service (JMS) topic, process it and write it to an Apache Kafka topic with an "exactly once" guarantee. That guarantee is critical in systems where lost or duplicate data can be costly, such as payment processing or e-commerce transaction systems.
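A sketch of that JMS-to-Kafka scenario, assuming ActiveMQ as the JMS provider, might look like the code below. Broker addresses and topic names are placeholders, and exact factory-method signatures vary slightly between Jet versions.

import java.util.Properties;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.config.ProcessingGuarantee;
import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sources;

public class JmsToKafkaExactlyOnce {
    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "localhost:9092");      // placeholder broker
        kafkaProps.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        Pipeline p = Pipeline.create();
        p.readFrom(Sources.jmsTopic(                                        // a durable subscription may be
                        () -> new ActiveMQConnectionFactory("tcp://localhost:61616"),  // needed for fault tolerance
                        "payments"))                                        // placeholder JMS topic
         .withoutTimestamps()
         .map(msg -> ((TextMessage) msg).getText())                         // extract the message body
         .writeTo(KafkaSinks.kafka(kafkaProps, "processed-payments",        // placeholder Kafka topic
                 text -> text, text -> text));                              // use payload as both key and value

        JobConfig config = new JobConfig()
                .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);  // enables the two-phase commit
        Jet.bootstrappedInstance().newJob(p, config).join();
    }
}

With the exactly-once guarantee enabled, a restart after a failure neither drops nor re-emits records, which is the behavior the release describes for the JMS-to-Kafka case.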

Hazelcast Jet 4.0 also includes a change data capture (CDC) integration with the open source Debezium project, which allows databases to act as streaming sources. The integration adds support for a number of popular databases, including MySQL, PostgreSQL, MongoDB and SQL Server. Because CDC effectively turns database updates into a stream, Hazelcast Jet can process those updates at high speed for applications that depend on the latest data.
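The CDC support has shipped as an extension module and its entry points have shifted between Jet releases, so the following should be read as an illustrative sketch of a Debezium-backed source rather than the exact 4.0 API; the connector class and connection properties are placeholders for a MySQL instance.

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.cdc.ChangeRecord;
import com.hazelcast.jet.cdc.DebeziumCdcSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.StreamSource;

public class CustomersCdcPipeline {
    public static void main(String[] args) {
        // Debezium-backed CDC source; all connection details below are placeholders.
        StreamSource<ChangeRecord> customers = DebeziumCdcSources
                .debezium("customers-cdc", "io.debezium.connector.mysql.MySqlConnector")
                .setProperty("database.hostname", "localhost")
                .setProperty("database.port", "3306")
                .setProperty("database.user", "debezium")
                .setProperty("database.password", "secret")
                .setProperty("database.server.name", "inventory-server")
                .setProperty("table.whitelist", "inventory.customers")
                .build();

        Pipeline p = Pipeline.create();
        p.readFrom(customers)
         .withoutTimestamps()
         .writeTo(Sinks.logger());   // downstream stages would update caches, maps or other sinks

        Jet.bootstrappedInstance().newJob(p).join();
    }
}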

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.