Lightbend today announced the general availability of Lightbend Pipelines, a new, comprehensive system that accelerates every lifecycle aspect of streaming data pipelines and their integration with microservices-based applications.
The real-time enterprise integrates streaming pipelines with applications to extract timely value from data, for example:
* Improving customer loyalty in e-commerce via recommendation engines;
* Improving service through IoT device management for predictive maintenance and replacement;
* Streamlining operations by optimizing supply chains;
* Lowering costs through fraud detection and real-time risk assessment.
“What traditionally has been seen as two completely different use cases, requiring separate systems and vastly different architectures and infrastructure are converging,” said Jonas Bonér, CTO and co-founder of Lightbend. “Lightbend Platform, with its inclusion of Lightbend Pipelines, is the first platform to address both the application developer’s microservices needs as well as the data engineer’s needs of serving analytics and machine learning models to applications in real-time.”
Lightbend Pipelines eliminates much of the complexity of developing, deploying, and operating streaming data services in Lightbend Platform by bringing them under a common set of abstractions. Its declarative data model for composing individual stream processors into complete pipelines, with domain logic embedded directly in the pipeline, greatly accelerates the creation of data-driven applications. Lightbend Pipelines also provides the monitoring and operational tooling to keep those applications available, resilient, and scalable.
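The idea of composing individual stream processors into a full pipeline can be illustrated with a minimal, self-contained Scala sketch. This is not the actual Pipelines API; the `Streamlet` type and the stage names below are hypothetical stand-ins for the abstraction being described.

```scala
// Minimal sketch of composing stream processors into a pipeline.
// `Streamlet` here is a hypothetical stand-in for the Pipelines
// abstraction: each stage transforms a stream of inputs into outputs.
object PipelineSketch {
  // A streamlet is modeled as a pure function over element streams.
  type Streamlet[A, B] = Iterator[A] => Iterator[B]

  // Hypothetical stage: parse raw records, dropping malformed ones.
  val parse: Streamlet[String, Int] =
    _.flatMap(s => s.toIntOption)

  // Hypothetical stage: domain logic embedded in the pipeline.
  val enrich: Streamlet[Int, Int] =
    _.map(_ * 2)

  // Composing two streamlets yields another streamlet – a pipeline.
  def compose[A, B, C](f: Streamlet[A, B], g: Streamlet[B, C]): Streamlet[A, C] =
    in => g(f(in))

  def main(args: Array[String]): Unit = {
    val pipeline = compose(parse, enrich)
    val out = pipeline(Iterator("1", "oops", "3")).toList
    println(out) // List(2, 6)
  }
}
```

Modeling each stage as a function makes composition associative, which is what lets a framework wire independently developed stages into one pipeline.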
This separation of developer and DevOps concerns lets developers work locally and then deploy to their choice of environment – local, on-premises, or cloud-hosted – in a familiar CI/CD lifecycle approach.
Lightbend Pipelines lets developers choose the right streaming engine, such as Apache Spark or Akka Streams, for each component of a pipeline. These components, or “streamlets”, can then be composed and deployed with a single CLI command. Data durability, along with other boilerplate such as serialization between processing stages, is handled by the Pipelines framework, so developers can focus on business logic rather than the roughly 90% of code that typically deals with non-business concerns. A fully functional streaming data pipeline – one that exposes an HTTP endpoint to ingest streaming data, post-processes that data with one or more Apache Spark jobs, and serves a projection of the results as a microservice via a simple flattening function – can be developed and deployed in a single workday.
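The “simple flattening function” mentioned above can be sketched in plain Scala. The record shapes and names here are illustrative assumptions, not types from the Pipelines API: the idea is simply to project nested processed results into flat rows a serving microservice can return.

```scala
// Sketch of a flattening projection: turning nested processed results
// into flat rows suitable for serving from a microservice.
// DeviceReadings and ReadingRow are hypothetical, illustrative types.
object ProjectionSketch {
  // Hypothetical processed result: a device with its recent readings.
  final case class DeviceReadings(deviceId: String, readings: List[Double])
  // Flat row exposed by the serving microservice.
  final case class ReadingRow(deviceId: String, reading: Double)

  // Flatten one nested record into one row per reading.
  def flatten(dr: DeviceReadings): List[ReadingRow] =
    dr.readings.map(r => ReadingRow(dr.deviceId, r))

  def main(args: Array[String]): Unit = {
    val batch = List(
      DeviceReadings("dev-1", List(0.3, 0.7)),
      DeviceReadings("dev-2", List(1.1))
    )
    // flatMap applies the projection across the whole batch.
    val rows = batch.flatMap(flatten)
    println(rows.map(r => s"${r.deviceId}:${r.reading}").mkString(", "))
    // dev-1:0.3, dev-1:0.7, dev-2:1.1
  }
}
```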