Apache Apex is a platform and runtime engine that enables development of scalable and fault-tolerant distributed applications on Hadoop in a native fashion. It processes streaming or batch big data with high throughput and low latency. Applications are built from operators that run distributed across a cluster and can scale up or down dynamically. Apex provides automatic recovery from failures, preserving state, without having to reprocess data from the beginning. It includes a library of common operators to simplify application development.
Ingestion and Dimensions Compute and Enrich using Apache Apex (Apache Apex)
Presenter: Devendra Tagare - DataTorrent Engineer, Contributor to Apex, Data Architect experienced in building high-scalability big data platforms.
This talk will be a deep dive into ingesting unbounded file data and streaming data from Kafka into Hadoop. We will also cover data enrichment and dimensional compute, along with a customer use case and reference architecture.
Hadoop Summit SJ 2016: Next Gen Big Data Analytics with Apache Apex (Apache Apex)
This is an overview of architecture with use cases for Apache Apex, a big data analytics platform. It comes with a powerful stream processing engine, rich set of functional building blocks and an easy to use API for the developer to build real-time and batch applications. Apex runs natively on YARN and HDFS and is used in production in various industries. You will learn more about two use cases: A leading Ad Tech company serves billions of advertising impressions and collects terabytes of data from several data centers across the world every day. Apex was used to implement rapid actionable insights, for real-time reporting and allocation, utilizing Kafka and files as source, dimensional computation and low latency visualization. A customer in the IoT space uses Apex for Time Series service, including efficient storage of time series data, data indexing for quick retrieval and queries at high scale and precision. The platform leverages the high availability, horizontal scalability and operability of Apex.
From Batch to Streaming with Apache Apex, DataWorks Summit 2017 (Apache Apex)
This document discusses transitioning from batch to streaming data processing using Apache Apex. It provides an overview of Apex and how it can be used to build real-time streaming applications. Examples are given of how to build an application that processes Twitter data streams and visualizes results. The document also outlines Apex's capabilities for scalable stream processing, queryable state, and its growing library of connectors and transformations.
Apache Big Data EU 2016: Next Gen Big Data Analytics with Apache Apex (Apache Apex)
Stream data processing is becoming increasingly important to support business needs for faster time to insight and action with growing volumes of information from more sources. Apache Apex (http://apex.apache.org/) is a unified big data in motion processing platform for the Apache Hadoop ecosystem. Apex supports demanding use cases with:
* Architecture for high throughput, low latency and exactly-once processing semantics.
* Comprehensive library of building blocks including connectors for Kafka, files, Cassandra, HBase and many more.
* Java-based with an unobtrusive API to build real-time and batch applications and implement custom business logic.
* Advanced engine features for auto-scaling, dynamic changes, compute locality.
Apex has been under development since 2012 and is used in production in various industries such as online advertising, Internet of Things (IoT) and financial services.
Apache Apex: Stream Processing Architecture and Applications (Thomas Weise)
Slides from http://www.meetup.com/Hadoop-User-Group-Munich/events/230313355/
(Same abstract as the Hadoop Summit SJ 2016 talk above.)
Big Data Berlin v8.0: Stream Processing with Apache Apex (Apache Apex)
This document discusses Apache Apex, an open source stream processing framework. It provides an overview of stream data processing and common use cases. It then describes key Apache Apex capabilities like in-memory distributed processing, scalability, fault tolerance, and state management. The document also highlights several customer use cases from companies like PubMatic, GE, and Silver Spring Networks that use Apache Apex for real-time analytics on data from sources like IoT sensors, ad networks, and smart grids.
Smart Partitioning with Apache Apex (Webinar) (Apache Apex)
Processing big data often requires running the same computations in parallel in multiple processes or threads, called partitions, with each partition handling a subset of the data. This becomes all the more necessary when processing live data streams, where maintaining SLAs is paramount. Furthermore, multiple different computations make up an application, and each of them may have different partitioning needs. Partitioning also needs to adapt to changing data rates, input sources and other application requirements such as SLAs.
In this talk, we will introduce how Apache Apex, a distributed stream processing platform on Hadoop, handles partitioning. We will look at the different partitioning schemes provided by Apex, some of which are unique in this space. We will also look at how Apex does dynamic partitioning, a feature unique to and pioneered by Apex, with examples of how it handles varying data needs. We will also talk about the different utilities and libraries that Apex provides for users to implement their own custom partitioning.
Introduction to Apache Apex and writing a big data streaming application (Apache Apex)
Introduction to Apache Apex, the next generation native Hadoop platform, and writing a native Hadoop big data Apache Apex streaming application.
This talk will cover details about how Apex can be used as a powerful and versatile platform for big data. Apache Apex is being used in production by customers for both streaming and batch use cases. Common usage of Apache Apex includes big data ingestion, streaming analytics, ETL, fast batch, alerts, real-time actions, threat detection, etc.
Presenter: Pramod Immaneni, Apache Apex PPMC member and senior architect at DataTorrent Inc, where he works on Apex and specializes in big data applications. Prior to DataTorrent he was a co-founder and CTO of Leaf Networks LLC, eventually acquired by Netgear Inc, where he built products in the core networking space and was granted patents in peer-to-peer VPNs. Before that he was a technical co-founder of a mobile startup where he was an architect of a dynamic content rendering engine for mobile devices.
This is a video of the webcast of an Apache Apex meetup event organized by Guru Virtues at 267 Boston Rd no. 9, North Billerica, MA, on May 7th 2016, and broadcast from San Jose, CA. If you are interested in helping organize the Apache Apex community (hosting, presenting, community leadership), please email apex-meetup@datatorrent.com.
DataTorrent Presentation @ Big Data Application Meetup (Thomas Weise)
The document introduces Apache Apex, an open source unified streaming and batch processing framework. It discusses how Apex integrates with native Hadoop components like YARN and HDFS. It then describes Apex's programming model using directed acyclic graphs of operators and streams to process data. The document outlines Apex's support for scaling applications through partitioning, windowing, fault tolerance, and guarantees on processing semantics. It provides an example of building an application pipeline and shows the logical and physical plans. In closing, it directs the reader to Apache Apex community resources for more information.
Low Latency Polyglot Model Scoring using Apache Apex (Apache Apex)
This document discusses challenges in building low-latency machine learning applications and how Apache Apex can help address them. It introduces Apache Apex as a distributed streaming engine and describes how it allows embedding models from frameworks like R, Python and H2O through custom operators. It presents various data and model scoring patterns in Apex, such as dynamic resource allocation, checkpointing and exactly-once processing, to meet SLAs. The document also demonstrates techniques like canary deployment, dormant models and model ensembles through logical overlays on the Apex DAG.
David Yan offers an overview of Apache Apex, a stream processing engine used in production by several large companies for real-time data analytics.
Apache Apex uses a programming paradigm based on a directed acyclic graph (DAG). Each node in the DAG represents an operator, which can be data input, data output, or data transformation. Each directed edge in the DAG represents a stream, which is the flow of data from one operator to another.
As part of Apex, the Malhar library provides a suite of connector operators so that Apex applications can read from or write to various data sources. It also includes utility operators that are commonly used in streaming applications, such as parsers, deduplicators and joins, and generic building blocks that facilitate scalable state management and checkpointing.
In addition to processing based on ingestion time and processing time, Apex supports event-time windows and session windows. It also supports the windowing, watermarks, allowed lateness, accumulation modes, triggering, and retraction semantics detailed by Apache Beam, as well as feedback loops in the DAG for iterative processing, and at-least-once and "end-to-end" exactly-once processing guarantees. Apex provides various ways to fine-tune applications, such as operator partitioning, locality, and affinity.
Apex is integrated with several open source projects, including Apache Beam, Apache Samoa (distributed machine learning), and Apache Calcite (SQL-based application specification). Users can choose Apex as the backend engine when running their application model based on these projects.
David explains how to develop fault-tolerant streaming applications with low latency and high throughput using Apex, presenting the programming model with examples and demonstrating how custom business logic can be integrated using both the declarative high-level API and the compositional DAG-level API.
IoT Ingestion & Analytics using Apache Apex - A Native Hadoop Platform (Apache Apex)
Internet of Things (IoT) devices are becoming more ubiquitous in consumer, business and industrial landscapes. They are being widely used in applications ranging from home automation to the industrial internet. They pose a unique challenge in terms of the volume of data they produce, the velocity with which they produce it, and the variety of sources that need to be handled. The challenge is to ingest and process this data at the speed at which it is being produced, in a real-time and fault tolerant fashion. Apache Apex is an industrial grade, scalable and fault tolerant big data processing platform that runs natively on Hadoop. In this deck, you will see how Apex is being used in IoT applications and how enterprise features such as dimensional analytics, real-time dashboards and monitoring play a key role.
Presented by Pramod Immaneni, Principal Architect at DataTorrent and PPMC member of Apache Apex, in a BrightTALK webinar on Apr 6th, 2016.
Apache Big Data EU 2016: Building Streaming Applications with Apache Apex (Apache Apex)
Stream processing applications built on Apache Apex run on Hadoop clusters and typically power analytics use cases where availability, flexible scaling, high throughput, low latency and correctness are essential. These applications consume data from a variety of sources, including streaming sources like Apache Kafka, Kinesis or JMS, file based sources or databases. Processing results often need to be stored in external systems (sinks) for downstream consumers (pub-sub messaging, real-time visualization, Hive and other SQL databases etc.). Apex has the Malhar library with a wide range of connectors and other operators that are readily available to build applications. We will cover key characteristics like partitioning and processing guarantees, generic building blocks for new operators (write-ahead-log, incremental state saving, windowing etc.) and APIs for application specification.
Apache Apex (incubating) is a next generation native Hadoop big data platform. This talk will cover details about how it can be used as a powerful and versatile platform for big data.
Presented by Pramod Immaneni at Data Riders Meetup hosted by Nexient on Apr 5th, 2016
Presenter: Kenn Knowles, Software Engineer, Google & Apache Beam (incubating) PPMC member
Apache Beam (incubating) is a programming model and library for unified batch & streaming big data processing. This talk will cover the Beam programming model broadly, including its origin story and vision for the future. We will dig into how Beam separates concerns for authors of streaming data processing pipelines, isolating what you want to compute from where your data is distributed in time and when you want to produce output. Time permitting, we might dive deeper into what goes into building a Beam runner, for example atop Apache Apex.
Apache Apex: Stream Processing Architecture and Applications (Comsysto Reply GmbH)
This document discusses the Apache Apex stream processing platform. It provides an overview of Apex's architecture, including its native integration with Hadoop YARN and HDFS, its application programming model based on operators and streams, and its support for advanced features like windowing, partitioning, dynamic scaling, fault tolerance, and data processing guarantees. It also shows examples of monitoring dashboards and describes how Apex can be used to build real-time data analytics pipelines.
Presentation on Apache Apex, the enterprise-grade big data analytics platform and how it is used in production use cases. In this talk you will learn about:
• Architecture highlights: high throughput, low-latency, operability with stateful fault tolerance, strong processing guarantees, auto-scaling etc
• Application development model, unified approach for real-time and batch use cases
• Tools for ease of use, ease of operability and ease of management
• How customers use Apache Apex in production
Speakers:
Pramod Immaneni is Apache Apex (incubating) PPMC member, committer and senior architect at DataTorrent Inc, where he works on Apex and specializes in big data applications. Prior to DataTorrent he was a co-founder and CTO of Leaf Networks LLC, eventually acquired by Netgear Inc, where he built products in core networking space and was granted patents in peer-to-peer VPNs. Prior to that he was a technical co-founder of a mobile startup where he was an architect of a dynamic content rendering engine for mobile devices.
BigDataSpain 2016: Introduction to Apache Apex (Thomas Weise)
Apache Apex is an open source stream processing platform, built for large scale, high-throughput, low-latency, high availability and operability. With a unified architecture it can be used for real-time and batch processing. Apex is Java based and runs natively on Apache Hadoop YARN and HDFS.
We will discuss the key features of Apache Apex and architectural differences from similar platforms and how these differences affect use cases like ingestion, fast real-time analytics, data movement, ETL, fast batch, low latency SLA, high throughput and large scale ingestion.
Apex APIs and libraries of operators and examples focus on developer productivity. We will present the programming model with examples and how custom business logic can be easily integrated based on the Apex operator API.
We will cover integration with connectors to sources/destinations (including Kafka, JMS, SQL, NoSQL, files etc.), scalability with advanced partitioning, fault tolerance and processing guarantees, computation and scheduling model, state management, windowing and dynamic changes. Attendees will also learn how these features affect time to market and total cost of ownership and how they are important in existing Apex production deployments.
https://www.bigdataspain.org/
Next Gen Big Data Analytics with Apache Apex discusses Apache Apex, an open source stream processing framework. It provides an overview of Apache Apex's capabilities for processing continuous, real-time data streams at scale. Specifically, it describes how Apache Apex allows for in-memory, distributed stream processing using a programming model of operators in a directed acyclic graph. It also covers Apache Apex's features for fault tolerance, dynamic scaling, and integration with Hadoop and YARN.
- Apache Apex is a platform and framework for building highly scalable and fault-tolerant distributed applications on Hadoop.
- It allows developers to build any custom logic as distributed applications and ensures fault tolerance, scalability and data flow. Applications can process streaming or batch data with high throughput and low latency.
- Apex applications are composed of operators that perform processing on streams of data tuples. Operators can run in a distributed fashion across a cluster and automatically recover from failures without reprocessing data from the beginning.
Stream data from Apache Kafka for processing with Apache Apex (Apache Apex)
Meetup presentation: How Apache Apex consumes from Kafka topics for real-time processing and analytics. Learn about features of the Apex Kafka Connector, which is one of the most popular operators in the Apex Malhar operator library and powers several production use cases. We explain the advanced features this operator provides for high throughput, low latency ingest and how it enables fault tolerant topologies with exactly once processing semantics.
Real-time Stream Processing using Apache Apex (Apache Apex)
Apache Apex is a stream processing framework that provides high performance, scalability, and fault tolerance. It uses YARN for resource management, can achieve single digit millisecond latency, and automatically recovers from failures without data loss through checkpointing. Apex applications are modeled as directed acyclic graphs of operators and can be partitioned for scalability. It has a large community of committers and is in the process of becoming a top-level Apache project.
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder, DataTorrent - ... (Dataconomy Media)
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder of DataTorrent presented "Streaming Analytics with Apache Apex" as part of the Big Data, Berlin v 8.0 meetup organised on the 14th of July 2016 at the WeWork headquarters.
Apache Apex Fault Tolerance and Processing Semantics (Apache Apex)
Apache Apex (http://apex.incubator.apache.org/) is an open source stream processing and next generation analytics platform incubating at the Apache Software Foundation. Apex is Hadoop native and was built from ground up for scalability, low-latency processing, high availability and operability.
In this webinar, you will learn about Apache Apex fault tolerance, high availability and processing guarantees.
From the user's perspective, fault tolerance of a stream processing platform should cover the state of the application/processor and the in-flight data. In the event of failure, the platform should recover, restore state and resume processing with no loss of data. We will cover:
* Components of an Apex application and how they are made fault tolerant
* How native YARN support is leveraged for fault tolerance
* How operator checkpointing works and how the user can tune it
* Failure scenarios, recovery from failures, incremental recovery
* Processing guarantees and which option is appropriate for your application
* Sample topology for highly available, low latency real-time processing
* How fault tolerance in Apex differs from similar platforms such as Storm, Spark Streaming and Flink
Presented by Thomas Weise, Architect & Co-founder; Pramod Immaneni, Architect on BrightTALK on Mar 24th, 2016
SAND: A Fault-Tolerant Streaming Architecture for Network Traffic Analytics (Qin Liu)
SAND is a fault-tolerant streaming architecture designed for real-time network traffic analytics. Its C++ implementation achieves processing speeds that Java-based systems like Storm cannot match. SAND introduces a new fault tolerance scheme using a checkpointing protocol that allows workers to reliably recover state after failures while producing correct results. Evaluation shows SAND can process over 9.6 million packets per second, outperforming alternatives by 3.7-37x, and reliably recover from failures within seconds.
Apache Apex Fault Tolerance and Processing Semantics (Apache Apex)
Components of an Apex application running on YARN, how they are made fault tolerant, how checkpointing works, recovery from failures, incremental recovery, processing guarantees.
February 2017 HUG: Exactly-once end-to-end processing with Apache Apex (Yahoo Developer Network)
Apache Apex (http://apex.apache.org/) is a stream processing platform that helps organizations build processing pipelines with fault tolerance and strong processing guarantees. It was built to support low processing latency, high throughput, scalability, interoperability, high availability and security. The platform comes with the Malhar library - an extensive collection of processing operators and a wide range of input and output connectors for out-of-the-box integration with an existing infrastructure. In the talk I am going to describe how connectors, together with distributed checkpointing (a mechanism used by Apex to support fault tolerance and high availability), provide exactly-once end-to-end processing guarantees.
Speakers:
Vlad Rozov is an Apache Apex PMC member and back-end engineer at DataTorrent, where he focuses on the buffer server, the Apex platform network layer, benchmarks and optimizing the core components for low latency and high throughput. Prior to DataTorrent, Vlad worked on a distributed BI platform at Huawei and on a multi-dimensional database (OLAP) at Hyperion Solutions and Oracle.
Strata Singapore: Gearpump - Real-time DAG Processing with Akka at Scale (Sean Zhong)
Gearpump is an Akka-based real-time streaming engine that uses the Actor model throughout. It offers high performance and flexibility, achieving 18,000,000 messages/second with a latency of 8 ms on a cluster of 4 machines.
GE IoT Predix Time Series & Data Ingestion Service using Apache Apex (Hadoop) (Apache Apex)
This presentation will introduce usage of Apache Apex for the Time Series & Data Ingestion Service of General Electric's Internet of Things Predix platform. Apache Apex is a native Hadoop data-in-motion platform that is being used by customers for both streaming as well as batch processing. Common use cases include ingestion into Hadoop, streaming analytics, ETL, database off-loads, alerts and monitoring, machine model scoring, etc.
Abstract: Predix is a General Electric platform for the Internet of Things. It helps users develop applications that connect industrial machines with people through data and analytics for better business outcomes. Predix offers a catalog of services that provide core capabilities required by industrial internet applications. We will deep dive into the Predix Time Series and Data Ingestion services, leveraging the fast, scalable, highly performant, and fault tolerant capabilities of Apache Apex.
Speakers:
- Venkatesh Sivasubramanian, Sr Staff Software Engineer, GE Predix & Committer of Apache Apex
- Pramod Immaneni, PPMC member of Apache Apex, and DataTorrent Architect
This document provides an overview of Apache Flink, an open-source platform for distributed stream and batch data processing. Flink allows for unified batch and stream processing with a simple yet powerful programming model. It features native stream processing, exactly-once fault tolerance based on consistent snapshots, and high performance optimized for streaming workloads. The document outlines Flink's APIs, state management, fault tolerance approach, and roadmap for continued improvements in 2015.
(Mike Graham + Dan Carroll, Comcast) Kafka Summit SF 2018
Comcast manages over 2 million miles of fiber and coax, and over 40 million in home devices. This “outside plant” is subject to adverse conditions from severe weather to power grid outages to construction-related disruptions. Maintaining the health of this large and important infrastructure requires a distributed, scalable, reliable and fast information system capable of real-time processing and rapid analysis and response. Using Apache Kafka and the Kafka Streams Processor API, Comcast built an innovative new system for monitoring, problem analysis, metrics reporting and action response for the outside plant.
In this talk, you’ll learn how topic partitions, state stores, key mapping, source and sink topics and processors from the Kafka Streams Processor API work together to build a powerful dynamic system. We will dive into the details about the inner workings of the state store—how it is backed by a Kafka “changelog” topic, how it is scaled horizontally by partition and how the instances are rebuilt on startup or on processor failure. We will discuss how these state stores essentially become like materialized views in a SQL database but are updated incrementally as data flows through the system, and how this allows the developers to maintain the data in the optimal structures for performing the processing. The best part is that the data is readily available when needed by the processors. You will see how a REST API using Kafka Streams “interactive queries” can be used to retrieve the data in the state stores. We will explore the deployment and monitoring mechanisms used to deliver this system as a set of independently deployed components.
Similar to Intro to Apache Apex - Next Gen Native Hadoop Platform - Hackac (20)
YARN was introduced as part of Hadoop 2.0 to address limitations in the original MapReduce (MR1) architecture like scalability bottlenecks and underutilization of resources. YARN introduces a global ResourceManager and per-node NodeManagers to allocate cluster resources to distributed applications. It allows various distributed processing frameworks beyond MapReduce to share common cluster resources. Applications request containers for ApplicationMasters that then negotiate resources from YARN to run application components in containers across nodes. Existing MapReduce jobs can also run unchanged on YARN.
Here is how you can solve this problem using MapReduce concepts with Unix commands:
Map step:
grep -o 'Blue\|Green' input.txt | wc -l > output
This uses grep to search the input file for the strings "Blue" or "Green" and print only the matches. The matches are piped to wc, which counts the lines (one line per match).
Reduce step:
cat output
This isn't really needed, as there is only one mapper. cat prints the contents of the output file, which holds the combined count of Blue and Green.
So MapReduce has been simulated using grep for the map and cat for the reduce functionality. The key aspects are that grep extracts the relevant data (map) and cat emits the aggregated result (reduce).
HDFS stores files as blocks that are by default 64 MB in size to minimize disk seek times. The namenode manages the file system namespace and metadata, tracking which datanodes store each block. When writing a file, HDFS breaks it into blocks and replicates each block across multiple datanodes. The secondary namenode periodically merges namespace and edit log changes to prevent the log from growing too large. Small files are inefficient in HDFS due to each file requiring namespace metadata regardless of size.
Kafka to Hadoop Ingest with Parsing, Dedup and other Big Data Transformations (Apache Apex)
Presenter:
Chaitanya Chebolu, Committer for Apache Apex and Software Engineer at DataTorrent.
In this session we will cover the use-case of ingesting data from Kafka and writing to HDFS with a couple of processing operators - Parser, Dedup, Transform.
Building Your First Apache Apex (Next Gen Big Data/Hadoop) Application (Apache Apex)
This document provides an overview of building a first Apache Apex application. It describes the main concepts of an Apex application including operators that implement interfaces to process streaming data within windows. The document outlines a "Sorted Word Count" application that uses various operators like LineReader, WordReader, WindowWordCount, and FileWordCount. It also demonstrates wiring these operators together in a directed acyclic graph and running the application to process streaming data.
Intro to YARN (Hadoop 2.0) & Apex as YARN App (Next Gen Big Data) (Apache Apex)
Presenter:
Priyanka Gugale, Committer for Apache Apex and Software Engineer at DataTorrent.
In this session we will cover an introduction to YARN, understand the YARN architecture, and look into the YARN application lifecycle. We will also learn how Apache Apex runs as one of the YARN applications in Hadoop.
Ingesting Data from Kafka to JDBC with Transformation and Enrichment (Apache Apex)
Presenter - Dr Sandeep Deshmukh, Committer, Apache Apex; DataTorrent engineer
Abstract:
Ingesting and extracting data from Hadoop can be a frustrating, time-consuming activity for many enterprises. Apache Apex Data Ingestion is a standalone big data application that simplifies the collection, aggregation and movement of large amounts of data to and from Hadoop for a more efficient data processing pipeline. Apache Apex Data Ingestion makes configuring and running Hadoop data ingestion and data extraction a point-and-click process, enabling a smooth, easy path to your Hadoop-based big data project.
In this series of talks, we cover how Hadoop ingestion is made easy using Apache Apex. The third talk in this series focuses on ingesting unbounded data from Kafka to JDBC with a couple of processing operators - Transform and Enrich.
Making sense of Apache Bigtop's role in ODPi and how it matters to Apache Apex (Apache Apex)
Roman Shaposhnik: Director of Open Source, Pivotal; Committer, Apache Hadoop; Founder, Apache Bigtop
Making sense of Apache Bigtop's role in ODPi and how it matters to Apache Apex.
Chinmay Kolhatkar: Engineer, DataTorrent & Committer, Apache Apex
For ease of use and deployment, Apache Apex leverages Apache Bigtop. Apex, being part of the Bigtop stack, can be easily deployed on both Debian- and RPM-based cluster systems, and validation tests can be run for the installation. This talk will cover a demo of how to install and use apex-bigtop. It also covers a test sandbox Docker environment, with bigtop-hadoop and bigtop-apex pre-installed, for quickly getting started with Apex.
Building Your First Apache Apex Application (Apache Apex)
This document provides an overview of building an Apache Apex application, including key concepts like DAGs, operators, and ports. It also includes an example "word count" application and demonstrates how to define the application and operators, and build Apache Apex from source code. The document outlines the sample application workflow and includes information on resources for learning more about Apache Apex.
"Making .NET Application Even Faster", Sergey Teplyakov.pptxFwdays
In this talk we're going to explore performance improvement lifecycle, starting with setting the performance goals, using profilers to figure out the bottle necks, making a fix and validating that the fix works by benchmarking it. The talk will be useful for novice and seasoned .NET developers and architects interested in making their application fast and understanding how things work under the hood.
DefCamp_2016_Chemerkin_Yury-publish.pdf - Presentation by Yury Chemerkin at DefCamp 2016 discussing mobile app vulnerabilities, data protection issues, and analysis of security levels across different types of mobile applications.
Discovery Series - Zero to Hero - Task Mining Session 1DianaGray10
This session is focused on providing you with an introduction to task mining. We will go over different types of task mining and provide you with a real-world demo on each type of task mining in detail.
"Building Future-Ready Apps with .NET 8 and Azure Serverless Ecosystem", Stan...Fwdays
.NET 8 brought a lot of improvements for developers and maturity to the Azure serverless container ecosystem. So, this talk will cover these changes and explain how you can apply them to your projects. Another reason for this talk is the re-invention of Serverless from a DevOps perspective as a Platform Engineering trend with Backstage and the recent Radius project from Microsoft. So now is the perfect time to look at developer productivity tooling and serverless apps from Microsoft's perspective.
Finetuning GenAI For Hacking and DefendingPriyanka Aash
Generative AI, particularly through the lens of large language models (LLMs), represents a transformative leap in artificial intelligence. With advancements that have fundamentally altered our approach to AI, understanding and leveraging these technologies is crucial for innovators and practitioners alike. This comprehensive exploration delves into the intricacies of GenAI, from its foundational principles and historical evolution to its practical applications in security and beyond.
Keynote : AI & Future Of Offensive SecurityPriyanka Aash
In the presentation, the focus is on the transformative impact of artificial intelligence (AI) in cybersecurity, particularly in the context of malware generation and adversarial attacks. AI promises to revolutionize the field by enabling scalable solutions to historically challenging problems such as continuous threat simulation, autonomous attack path generation, and the creation of sophisticated attack payloads. The discussions underscore how AI-powered tools like AI-based penetration testing can outpace traditional methods, enhancing security posture by efficiently identifying and mitigating vulnerabilities across complex attack surfaces. The use of AI in red teaming further amplifies these capabilities, allowing organizations to validate security controls effectively against diverse adversarial scenarios. These advancements not only streamline testing processes but also bolster defense strategies, ensuring readiness against evolving cyber threats.
Retrieval Augmented Generation Evaluation with RagasZilliz
Retrieval Augmented Generation (RAG) enhances chatbots by incorporating custom data in the prompt. Using large language models (LLMs) as judge has gained prominence in modern RAG systems. This talk will demo Ragas, an open-source automation tool for RAG evaluations. Christy will talk about and demo evaluating a RAG pipeline using Milvus and RAG metrics like context F1-score and answer correctness.
The Challenge of Interpretability in Generative AI Models.pdfSara Kroft
Navigating the intricacies of generative AI models reveals a pressing challenge: interpretability. Our blog delves into the complexities of understanding how these advanced models make decisions, shedding light on the mechanisms behind their outputs. Explore the latest research, practical implications, and ethical considerations, as we unravel the opaque processes that drive generative AI. Join us in this insightful journey to demystify the black box of artificial intelligence.
Dive into the complexities of generative AI with our blog on interpretability. Find out why making AI models understandable is key to trust and ethical use and discover current efforts to tackle this big challenge.
2. What is Apex
• Platform and runtime engine that enables development of scalable and fault-tolerant distributed applications
• Hadoop native
• Process streaming or batch big data
• High throughput and low latency
• Library of commonly needed business logic
• Write any custom business logic in your application
3. Applications on Apex
• Distributed processing
ᵒ Application logic broken into components called operators that run in a distributed fashion across your cluster
• Scalable
ᵒ Operators can be scaled up or down at runtime according to the load and SLA
• Fault tolerant
ᵒ Automatically recover from node outages without having to reprocess from the beginning
ᵒ State is preserved
ᵒ Long running applications
• Operators
ᵒ Use the library to build applications quickly
ᵒ Write your own in Java using the API (see the sketch below)
• Operational insight – DataTorrent RTS
ᵒ See how each operator is performing and even record data
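As a rough illustration of the last point, here is a minimal sketch of a custom operator written against the Apex operator API. The LineSplitter name and its port names are made up for this example, and the import paths follow the Apex 3.x layout; treat it as a sketch, not code from this deck.

import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.common.util.BaseOperator;

// Hypothetical operator: splits incoming lines into words.
public class LineSplitter extends BaseOperator
{
  // Output port on which individual words are emitted downstream.
  public final transient DefaultOutputPort<String> words = new DefaultOutputPort<>();

  // Input port; process() is called once for every tuple that arrives.
  public final transient DefaultInputPort<String> lines = new DefaultInputPort<String>()
  {
    @Override
    public void process(String line)
    {
      for (String word : line.split("\\s+")) {
        words.emit(word);
      }
    }
  };
}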
7. Application Development Model
A Stream is a sequence of data tuples.
A typical Operator takes one or more input streams, performs computations and emits one or more output streams.
• Each Operator is YOUR custom business logic in Java, or a built-in operator from our open source library
• An Operator has many instances that run in parallel, and each instance is single-threaded
A Directed Acyclic Graph (DAG) is made up of operators and streams.
[Diagram: a logical DAG in which operators are connected by streams of tuples flowing to an output stream.]
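To make the model concrete, here is a minimal sketch of an application definition using the Apex StreamingApplication API. The operator classes (LineReader, WordCounter, plus the LineSplitter sketched above), their port names and the stream names are hypothetical:

import org.apache.hadoop.conf.Configuration;
import com.datatorrent.api.DAG;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.annotation.ApplicationAnnotation;

@ApplicationAnnotation(name = "WordCountDemo")
public class Application implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // Operators are the nodes of the DAG...
    LineReader reader = dag.addOperator("reader", new LineReader());
    LineSplitter splitter = dag.addOperator("splitter", new LineSplitter());
    WordCounter counter = dag.addOperator("counter", new WordCounter());

    // ...and streams are the edges, connecting output ports to input ports.
    dag.addStream("lines", reader.output, splitter.lines);
    dag.addStream("words", splitter.words, counter.input);
  }
}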
8. Advanced Windowing Support
• Application window
• Sliding window and tumbling window
• Checkpoint window
• No artificial latency
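Window granularity is driven by engine attributes. A sketch of how they are typically set, continuing the populateDAG() example above; the attribute names are from the Apex Context API, and the values 1000 and 10 are arbitrary:

import com.datatorrent.api.Context.DAGContext;
import com.datatorrent.api.Context.OperatorContext;

// Inside populateDAG(): streaming windows are 1 second wide...
dag.setAttribute(DAGContext.STREAMING_WINDOW_SIZE_MILLIS, 1000);
// ...and the counter aggregates over an application window of
// 10 streaming windows, i.e. its endWindow() boundary is every 10 seconds.
dag.setAttribute(counter, OperatorContext.APPLICATION_WINDOW_COUNT, 10);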
12. Partitioning and unification
[Diagrams: a logical DAG 0 → 1 → 2 → 3 and its physical plans. With operator 1 split into three partitions (1a, 1b, 1c), a unifier merges the partitioned stream before operator 2. With operators 1 and 2 partitioned as (1a, 1b, 1c) and (2a, 2b) in an NxM arrangement, per-partition unifiers avoid a bottleneck, while a single intermediate unifier between the two sets becomes one.]
13. Advanced Partitioning
[Diagrams: a logical DAG 0 → 1 → 2 → 3 → 4 and its physical plans. With parallel partitioning, each upstream partition gets its own copy of the downstream operators (1a → 2a → 3a, 1b → 2b → 3b), so no unifier is needed between them. Execution plans for N = 4 upstream partitions (uopr1-uopr4) and M = 1 downstream operator (dopr) show a single unifier in the downstream container; with K = 2 cascading unifiers, the merge work is spread across containers and network interfaces.]
14. Dynamic Partitioning
• Partitioning change while application is running
ᵒ Change number of partitions at runtime based on stats
ᵒ Determine initial number of partitions dynamically
• Kafka operators scale according to the number of Kafka partitions
ᵒ Supports re-distribution of state when the number of partitions changes
ᵒ API for custom scaler or partitioner (see the sketch below)
[Diagram: partitions such as (1a, 1b) and (2a, 2b, 2c, 2d) feeding operator 3 being added and removed while the application runs; unifiers not shown.]
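A common way to set the initial partition count, and the hook for custom dynamic behavior, is the PARTITIONER attribute. A sketch continuing the populateDAG() example; StatelessPartitioner ships with Apex, and the count of 4 is arbitrary:

import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.common.partitioner.StatelessPartitioner;

// Inside populateDAG(): start the splitter with 4 partitions.
dag.setAttribute(splitter, OperatorContext.PARTITIONER,
    new StatelessPartitioner<LineSplitter>(4));

// For dynamic scaling, a custom com.datatorrent.api.Partitioner can also
// implement StatsListener and request repartitioning from processStats()
// based on runtime metrics such as throughput or latency.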
15. How tuples are partitioned
• Tuple hashcode and mask used to determine destination partition
ᵒ Mask picks the last n bits of the hashcode of the tuple
ᵒ The hashcode method can be overridden
• StreamCodec can be used to specify a custom hashcode for tuples
ᵒ Can also be used for specifying custom serialization
Example: tuple { Name, 24204842, San Jose } with hashcode 001010100010101 and a two-bit mask (binary 11) selecting the last two bits: bits 00 → partition 1, 01 → partition 2, 10 → partition 3, 11 → partition 4.
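Conceptually, the partition key is tuple.hashCode() & mask, and a custom StreamCodec redefines the hashcode side of that computation, for example to key on a single field. A sketch, assuming the Malhar KryoSerializableStreamCodec base class and a hypothetical Customer tuple type matching the example above:

import com.datatorrent.lib.codec.KryoSerializableStreamCodec;

// Hypothetical tuple type with a name, an id and a city.
class Customer
{
  String name;
  long id;
  String city;
}

// Route tuples by city only, so all records for a city land on one partition.
public class CityStreamCodec extends KryoSerializableStreamCodec<Customer>
{
  @Override
  public int getPartition(Customer tuple)
  {
    // The engine applies the partition mask to this value,
    // e.g. key = getPartition(tuple) & 0b11 for four partitions.
    return tuple.city.hashCode();
  }
}

The codec is then attached to the receiving input port, e.g. via the PortContext.STREAM_CODEC attribute on the DAG.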
16. Custom partitioning
• Custom distribution of tuples
ᵒ E.g. broadcast
Example: the same tuple { Name, 24204842, San Jose } with a zero mask (binary 00): every tuple maps to key 00, which is assigned to all four partitions, so each tuple is broadcast to partitions 1-4.
17. Fault Tolerance
• Operator state is checkpointed to a persistent store
ᵒ Automatically performed by engine, no additional work needed by operator
ᵒ In case of failure operators are restarted from checkpoint state
ᵒ Frequency configurable per operator
ᵒ Asynchronous and distributed by default
ᵒ Default store is HDFS
• Automatic detection and recovery of failed operators
ᵒ Heartbeat mechanism
• Buffering mechanism to ensure replay of data from the recovered point so that there is no loss of data
• Application master state checkpointed
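Checkpoint frequency is tuned per operator through an attribute. A sketch continuing the populateDAG() example; the value 20 is arbitrary, and with the default 500 ms streaming window it means a checkpoint roughly every 10 seconds:

import com.datatorrent.api.Context.OperatorContext;

// Inside populateDAG(): checkpoint the counter every 20 streaming windows.
dag.setAttribute(counter, OperatorContext.CHECKPOINT_WINDOW_COUNT, 20);

// Operators whose state need not survive a failure can be annotated
// @Stateless (com.datatorrent.api.annotation.Stateless) so the engine
// skips checkpointing them entirely.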
18. Processing Guarantees - Recovery
At least once
• On recovery data will be replayed from a previous checkpoint
ᵒ Messages will not be lost
ᵒ Default mechanism and suitable for most applications
• Can be used in conjunction with the following mechanisms to achieve exactly-once behavior in fault recovery scenarios
ᵒ Transactions with meta information, rewinding output, feedback from an external entity, idempotent operations
At most once
• On recovery the latest data is made available to the operator
ᵒ Useful in use cases where some data loss is acceptable and the latest data is sufficient
Exactly once
• At least once + state recovery + operator logic to achieve end-to-end exactly-once
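The recovery mode is also a per-operator attribute. A sketch continuing the populateDAG() example, using the ProcessingMode enum from the Apex API:

import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.Operator.ProcessingMode;

// Inside populateDAG(): prefer the freshest data over replay for this
// operator (the default is AT_LEAST_ONCE).
dag.setAttribute(counter, OperatorContext.PROCESSING_MODE,
    ProcessingMode.AT_MOST_ONCE);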
19. Stream Locality
• By default operators are deployed in containers (processes) randomly on different nodes across the Hadoop cluster
• Custom locality for streams
ᵒ Rack local: data does not traverse network switches
ᵒ Node local: data is passed via the loopback interface, freeing up network bandwidth
ᵒ Container local: messages are passed via in-memory queues between operators and do not require serialization
ᵒ Thread local: messages are passed between operators in the same thread, equivalent to calling a subsequent function on the message
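Locality is requested per stream when wiring the DAG. A sketch continuing the populateDAG() example, using the DAG.Locality enum:

import com.datatorrent.api.DAG.Locality;

// Inside populateDAG(): keep splitter and counter in the same container
// so tuples move through in-memory queues without serialization.
dag.addStream("words", splitter.words, counter.input)
   .setLocality(Locality.CONTAINER_LOCAL);
// Other options: THREAD_LOCAL, NODE_LOCAL, RACK_LOCAL, or null (engine default).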
22. App Ideas
• Ingestion with some ETL
• Social media trending
• Dimensional Analytics on large data sets like weather data
• Location tracking
• Alerting with IoT streams
• Free your imagination
26. We Are Hiring
• jobs@datatorrent.com
• Developers/Architects
• QA Automation Developers
• Information Developers
• Build and Release
• Community Leaders