Fine-grained data protection at the column level in data lake environments has become a mandatory requirement for demonstrating compliance with multiple local and international regulations across many industries today. ORC is a self-describing, type-aware columnar file format designed for Hadoop workloads that provides optimized streaming reads while retaining integrated support for finding required rows quickly. In this talk, we will outline the progress made in the Apache community on adding fine-grained column-level encryption natively to the ORC format, which will also provide the ability to mask or redact data on write while protecting sensitive column metadata such as statistics to avoid information leakage. The column encryption capabilities will be fully compatible with the Hadoop Key Management Server (KMS) and use the KMS to manage master keys, providing the additional flexibility to use and manage keys per column centrally.
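As a rough illustration of what the talk describes, here is a minimal sketch of configuring per-column encryption and masking through the ORC Java writer API from Scala. The method names follow the WriterOptions API that shipped with ORC 1.6 shortly after this work landed, and the KMS URI, master key name ("pii"), path, and column names are placeholders, so treat this as a sketch rather than the exact API presented in the talk.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.orc.{OrcFile, TypeDescription}

object EncryptedOrcWriterSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Point the writer at the Hadoop KMS that holds the master keys (placeholder URI).
    conf.set("hadoop.security.key.provider.path", "kms://http@kms.example.com:9600/kms")

    val schema = TypeDescription.fromString("struct<name:string,ssn:string,email:string>")

    val writer = OrcFile.createWriter(
      new Path("/warehouse/customers.orc"),
      OrcFile.writerOptions(conf)
        .setSchema(schema)
        // Encrypt the ssn and email columns with the (assumed) master key named "pii".
        .encrypt("pii:ssn,email")
        // Readers without access to the key see the masked value instead of failing.
        .masks("sha256:email"))

    // ... fill and append VectorizedRowBatch instances here ...
    writer.close()
  }
}
```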
Hadoop clusters let you store nearly everything in your data lake cheaply and quickly. Answering questions and gaining insights from this ever-growing stream of data has become the decisive capability for many businesses. Increasingly, data has a natural structure as a graph, with vertices linked by edges, and many questions about that data involve graph traversals or other complex queries for which there is no a priori bound on path length.
Spark with GraphX is great for answering relatively simple graph questions that are worth starting a Spark job for, because they essentially involve the whole graph. But does it make sense to start a job for every ad hoc query, and is it suitable for complex real-time queries?
In this talk I will introduce an alternative solution that adds those features to an existing Hadoop/Spark setup and enables real-time insights. I will address the following topics:
* Challenges in gaining deeper insights from large amounts of graph data
* Benefits and limitations of graph analysis with Spark
* Introduction to ArangoDB SmartGraphs
* Deployment of Hadoop, Spark and ArangoDB using DC/OS
* Performing complex queries on billions of vertices and edges leveraging ArangoDB SmartGraphs (Live Demo)
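To make the Spark/GraphX trade-off above concrete, here is a small, illustrative GraphX job (the edge-list path and format are made up, and the spark-graphx dependency is assumed): even a simple connected-components question spins up a full Spark job over the entire graph, which is fine for batch analytics but heavyweight for ad hoc or real-time queries.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.graphx.{Edge, Graph}

object WholeGraphQuestion {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("connected-components").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical tab-separated edge list: one "srcId<TAB>dstId" pair per line.
    val edges = sc.textFile("hdfs:///graphs/edges.tsv").map { line =>
      val Array(src, dst) = line.split("\t")
      Edge(src.toLong, dst.toLong, 1)
    }

    // Building the graph and computing connected components touches every partition,
    // so every ad hoc question pays the cost of a full cluster job.
    val graph = Graph.fromEdges(edges, defaultValue = 0)
    graph.connectedComponents().vertices.take(10).foreach(println)

    spark.stop()
  }
}
```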
This deck presents the best practices of using Apache Hive with good performance. It covers getting data into Hive, using ORC file format, getting good layout into partitions and files based on query patterns, execution using Tez and YARN queues, memory configuration, and debugging common query performance issues. It also describes Hive Bucketing and reading Hive Explain query plans.
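To illustrate the layout advice, here is a hedged sketch of a partitioned ORC table; the table and column names are invented, and the DDL is shown through Spark SQL with Hive support, although the same statements work from the Hive CLI or Beeline.

```scala
import org.apache.spark.sql.SparkSession

object PartitionedOrcLayout {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orc-layout")
      .enableHiveSupport()
      .getOrCreate()

    // Partition on the column most queries filter by (here: the event date) and
    // store the data as ORC so readers benefit from its indexes and predicate pushdown.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS web_events (
        user_id BIGINT,
        url     STRING,
        status  INT
      )
      PARTITIONED BY (event_date STRING)
      STORED AS ORC
    """)

    // A query that filters on the partition column only reads the matching directories.
    spark.sql(
      "SELECT status, count(*) FROM web_events WHERE event_date = '2019-03-01' GROUP BY status"
    ).show()
  }
}
```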
Hadoop Distributed File System (HDFS) has evolved from a MapReduce-centric storage system into a generic, cost-effective storage infrastructure where HDFS stores all of an organization's data. The new use cases present a new set of challenges to the original HDFS architecture. One challenge is scaling the storage management of HDFS: the centralized scheme within the NameNode becomes the main bottleneck, limiting the total number of files that can be stored. Although a typical large HDFS cluster is able to store several hundred petabytes of data, it is inefficient at handling large numbers of small files under the current architecture.
In this talk, we introduce our new design and in-progress work that re-architects HDFS to attack this limitation. The storage management is enhanced to a distributed scheme. A new concept of storage container is introduced for storing objects. HDFS blocks are stored and managed as objects in the storage containers instead of being tracked only by NameNode. Storage containers are replicated across DataNodes using a newly-developed high-throughput protocol based on the Raft consensus algorithm. Our current prototype shows that under the new architecture the storage management of HDFS scales 10x better, demonstrating that HDFS is capable of storing billions of files.
Building Data Pipelines for Solr with Apache NiFi (Bryan Bende)
This document provides an overview of using Apache NiFi to build data pipelines that index data into Apache Solr. It introduces NiFi and its capabilities for data routing, transformation and monitoring. It describes how Solr accepts data through different update handlers like XML, JSON and CSV. It demonstrates how NiFi processors can be used to stream data to Solr via these update handlers. Example use cases are presented for indexing tweets, commands, logs and databases into Solr collections. Future enhancements are discussed like parsing documents and distributing commands across a Solr cluster.
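The update handlers mentioned above can also be exercised directly from code, which is useful for understanding what a NiFi processor such as PutSolrContentStream does under the hood. Below is a hedged SolrJ sketch; the Solr URL, collection name, and field names are placeholders, and the solr-solrj dependency is assumed.

```scala
import org.apache.solr.client.solrj.impl.HttpSolrClient
import org.apache.solr.common.SolrInputDocument

object IndexTweetIntoSolr {
  def main(args: Array[String]): Unit = {
    // Placeholder standalone Solr node and collection.
    val client = new HttpSolrClient.Builder("http://localhost:8983/solr/tweets").build()

    val doc = new SolrInputDocument()
    doc.addField("id", "tweet-1")
    doc.addField("text_t", "Streaming data from NiFi into Solr")
    doc.addField("lang_s", "en")

    client.add(doc)   // hands the document to Solr's update handler
    client.commit()   // makes it visible to searches
    client.close()
  }
}
```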
This document discusses enterprise-grade big data solutions from HPE. It outlines HPE's reference architecture for big data workloads including components like data lakes, data warehouses, archival storage, event processing, and in-memory analytics. It also discusses HPE's investments in Hortonworks and collaboration to optimize Hadoop for performance. The document promotes attending an HPE session at the Hadoop Summit on modernizing data warehouses and visiting the HPE booth for demos and a trivia game.
Hive on Spark Is Blazing Fast, or Is It? (Hortonworks)
This presentation was given at the Strata + Hadoop World, 2015 in San Jose.
Apache Hive is the most popular and most widely used SQL solution for Hadoop. To keep pace with Hadoop’s increasingly vital role in the Enterprise, Hive has transformed from a batch-only, high-latency system into a modern SQL engine capable of both batch and interactive queries over large datasets. Hive’s momentum is accelerating: With Spark integration and a shift to in-memory processing on the horizon, Hive continues to expand the boundaries of Big Data.
In this talk the speakers examined Hive performance, past, present and future. In particular they looked at Hive’s origins as a petabyte scale SQL engine.
Through some numbers and graphs, they showed how Hive became 100x faster by moving beyond MapReduce, by vectorizing execution and by introducing a cost-based optimizer.
They detailed and discussed the challenges of scalable SQL on Hadoop.
They looked into Hive’s sub-second future, powered by LLAP and Hive on Spark.
And showed just how fast Hive on Spark really is.
Apache Hadoop 3 is coming! As the next major milestone for Hadoop and big data, it attracts everyone's attention by showcasing several bleeding-edge technologies and significant features across all components of Apache Hadoop: erasure coding in HDFS, Docker container support, Apache Slider integration and native service support, Application Timeline Service version 2, Hadoop library updates and client-side classpath isolation, and more. In this talk, we will first update the status of the Hadoop 3.0 release work in the Apache community and the feasible path through alpha and beta toward GA. Then we will dive deep into each new feature, including its development progress and maturity status in Hadoop 3. Last but not least, as a new major release, Hadoop 3.0 will contain some incompatible API and CLI changes that could be challenging for downstream projects and existing Hadoop users to upgrade through; we will go through these major changes and explore their impact on other projects and users.
Speaker: Sanjay Radia, Founder and Chief Architect, Hortonworks
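Of the Hadoop 3 features listed above, HDFS erasure coding is the easiest to show in a few lines. The sketch below assumes a Hadoop 3 client, a placeholder NameNode URI, and the built-in RS-6-3-1024k policy; it is illustrative only.

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hdfs.DistributedFileSystem

object EnableErasureCoding {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf)
      .asInstanceOf[DistributedFileSystem]

    val coldData = new Path("/warehouse/cold")
    fs.mkdirs(coldData)

    // Files written under this directory use Reed-Solomon 6+3 striping
    // (about 1.5x storage overhead) instead of 3x replication.
    fs.setErasureCodingPolicy(coldData, "RS-6-3-1024k")
    println(fs.getErasureCodingPolicy(coldData))

    fs.close()
  }
}
```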
To provide better security, ORC files are adding column encryption. Column encryption provides the ability to grant access to different columns within the same file. All of the encryption is handled transparently to the user.
Protect your Private Data in your Hadoop Clusters with ORC Column Encryption (DataWorks Summit)
Fine-grained data protection at the column level in data lake environments has become a mandatory requirement for demonstrating compliance with multiple local and international regulations across many industries today. ORC is a self-describing, type-aware columnar file format designed for Hadoop workloads that provides optimized streaming reads while retaining integrated support for finding required rows quickly. In this talk, we will outline the progress made in the Apache community on adding fine-grained column-level encryption natively to the ORC format, which also provides capabilities to mask or redact data on write while protecting sensitive column metadata such as statistics to avoid information leakage. The column encryption capabilities will be fully compatible with the Hadoop Key Management Server (KMS) and use the KMS to manage master keys, providing the additional flexibility to use and manage keys per column centrally. An end-to-end scenario demonstrating how this capability can be leveraged will also be shown.
Using Spark Streaming and NiFi for the Next Generation of ETL in the Enterprise (DataWorks Summit)
In recent years, big data has moved from batch processing to stream-based processing since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today and the same trend that occurred in the batch-based big data processing realm has taken place in the streaming world so that nearly every streaming framework now supports higher level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architectural option for building your next-generation ETL data pipeline in near real time. But what does it look like to deploy and operationalize this in an enterprise production environment?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Speaker: Andrew Psaltis, Principal Solution Engineer, Hortonworks
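For readers who have not seen the Structured Streaming side of the NiFi-to-Kafka-to-Spark pipeline described above, here is a minimal sketch of consuming records that NiFi has published to Kafka. The broker addresses and topic name are placeholders, the spark-sql-kafka connector dependency is assumed, and a real pipeline would parse, enrich, and write to a durable sink rather than the console.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToConsoleSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("nifi-kafka-spark-etl").getOrCreate()

    // Records that NiFi published to Kafka (placeholder brokers and topic).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "nifi-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Console sink is only for demonstration; swap in Kafka, HDFS, or a table for production.
    val query = events.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```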
The document discusses accelerating Apache Hadoop through high-performance networking and I/O technologies. It describes how technologies like InfiniBand, RoCE, SSDs, and NVMe can benefit big data applications by alleviating bottlenecks. It outlines projects from the High-Performance Big Data project that implement RDMA for Hadoop, Spark, HBase and Memcached to improve performance. Evaluation results demonstrate significant acceleration of HDFS, MapReduce, and other workloads through the high-performance designs.
Dancing elephants - efficiently working with object stores from Apache Spark ... (DataWorks Summit)
As Hadoop applications move into cloud deployments, object stores increasingly become the source and destination of data. But object stores are not filesystems: sometimes they are slower, and security works differently.
What are the secret settings to get maximum performance from queries against data living in cloud object stores? Which knobs matter at the filesystem client, the file format, and the query engine layers? It even comes down to how you lay out the files: the directory structure and the names you give them.
We know these things from our work in all of these layers, from the benchmarking we've done, and from the support calls we get when people have problems. And now we'll show you.
This talk starts from the ground-up question "why isn't an object store a filesystem?", showing how that breaks fundamental assumptions in code and causes performance issues you don't get when working with HDFS. We'll look at ways to get Apache Hive and Spark to work better, covering optimizations that have been done to enable this and the work that is ongoing. Finally, we'll consider what your own code needs to do in order to adapt to cloud execution.
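A few of the client-side knobs this talk covers can be set straight on the Spark session. The values below are illustrative starting points, not recommendations, and the bucket path is a placeholder; all three properties are standard Hadoop S3A settings.

```scala
import org.apache.spark.sql.SparkSession

object S3ATunedSession {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3a-tuning")
      // Random-access fadvise suits columnar formats (ORC/Parquet) that seek within objects.
      .config("spark.hadoop.fs.s3a.experimental.input.fadvise", "random")
      // Allow more parallel connections when listing and reading many objects.
      .config("spark.hadoop.fs.s3a.connection.maximum", "100")
      // Stream multipart uploads instead of buffering whole files before the upload starts.
      .config("spark.hadoop.fs.s3a.fast.upload", "true")
      .getOrCreate()

    // Object-store "directories" are really key prefixes, so deep listings and renames are
    // expensive; columnar reads with projection keep the number of requests down.
    val events = spark.read.orc("s3a://my-bucket/warehouse/events/")
    events.select("event_date", "status").show(10)
  }
}
```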
Hadoop has traditionally been an on-premises workload, with very few notable implementations in the cloud. With organizations having either jumped on the cloud bandwagon or started planning their expansion into the ecosystem, it is imperative to explore how Hadoop conforms to the cloud paradigm. With the coming of age of some very useful cloud patterns and the high seasonality typical of big data workloads, this has become a very common ask from customers. Robust architectures, elastic scale, open platforms, OSS integrations, and addressing complex pain points will all be part of this lively talk. To implement effective solutions for big data in the cloud, it is imperative that you understand the core principles and grasp the design choices through which the cloud can enhance the benefits of parallelized analytics. Join this session to understand the nitty-gritty of implementing big data in the cloud and the various options therein. Big Data + Cloud is definitely a potent combination.
Scaling real time streaming architectures with HDF and Dell EMC Isilon (Hortonworks)
Streaming analytics is the new normal. Customers are exploring use cases that have quickly transitioned from batch to near real time. Hortonworks DataFlow / Apache NiFi and Isilon provide a robust, scalable architecture for enabling real-time streaming architectures. Explore our use cases and a demo of how Hortonworks DataFlow and Isilon can empower your business for real-time success.
This document discusses using Apache NiFi and Spark to build a smarter home. It describes using NiFi on a Raspberry Pi and EC2 to collect sensor data from smart home devices and transmit it to an HDP cluster for storage and analysis with Pig and Spark. It outlines the architecture as a hub-and-spoke model and shows the evolution of the NiFi flows from sequential blocking writes to attribute-based routing. Key discoveries include privacy issues, using MAC addresses to predict arrivals, and motion sensors being less useful alone. Challenges involved Oracle vs OpenJDK, backpressure, and site-to-site configuration.
An agile data fabric powered by Brocade networking solutions provides benefits for business intelligence initiatives. It allows data and applications to be deployed flexibly across distributed infrastructure in a way that is automated, intelligent, and optimized for performance. Case studies demonstrate how Brocade fabrics have enabled multi-tenant data lakes and analytics platforms by integrating diverse storage, computing, and networking resources.
Data processing at the speed of 100 Gbps @ Apache Crail (Incubating) (DataWorks Summit)
Apache Crail (Incubating) is a distributed data store platform designed to share data at hardware speeds of 100+ Gbps with microsecond access latencies. It uses high-performance user-level I/O, careful software design, and data orchestration techniques to achieve these speeds. Evaluation shows it can deliver full hardware bandwidth and ultra-low access latencies. When used with Apache Spark, it provides significant performance gains for workloads like broadcast, shuffle, TeraSort, and joins. The project is seeking more language and framework integrations and aims to optimize for cloud deployments.
The document discusses how EMC Isilon scale-out NAS storage improves Hadoop resiliency and operational efficiency. It analyzes the impact of DataNode and TaskTracker failures on Hadoop jobs. EMC Isilon provides high availability, independent scalability of storage and compute, data protection features, and support for multiple Hadoop distributions and protocols like HDFS, NFS, SMB. This allows using existing data for analysis without replication and reduces time-to-results for Hadoop jobs.
The document outlines Renault's big data initiatives from 2014-2016 which progressed from an initial sandbox to a full industrialized big data platform. Key steps included implementing a new Hadoop infrastructure in 2015, industrializing the platform in 2016 to host production projects and POCs, and designing for scalability, isolation, simplified operations, and data protection. The document also discusses deploying quality projects to the data lake, ingestion scenarios, interactive SQL analytics, security measures including tokenization, and the next steps of federation and dynamic data change management.
The columnar roadmap: Apache Parquet and Apache Arrow (Julien Le Dem)
This document discusses Apache Parquet and Apache Arrow, open source projects for columnar data formats. Parquet is an on-disk columnar format that optimizes I/O performance through compression and projection pushdown. Arrow is an in-memory columnar format that maximizes CPU efficiency through vectorized processing and SIMD. It aims to serve as a standard in-memory format between systems. The document outlines how Arrow builds on Parquet's success and provides benefits like reduced serialization overhead and ability to share functionality through its ecosystem. It also describes how Parquet and Arrow representations are integrated through techniques like vectorized reading and predicate pushdown.
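As a small, hedged illustration of the projection and predicate pushdown described above, the sketch below reads Parquet through Spark's vectorized reader; the path and column names are placeholders, and both configuration keys are standard Spark SQL options.

```scala
import org.apache.spark.sql.SparkSession

object ParquetColumnarRead {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-columnar")
      // Decode Parquet column chunks into in-memory columnar batches rather than row by row.
      .config("spark.sql.parquet.enableVectorizedReader", "true")
      // Push filters down so row groups can be skipped using the column statistics.
      .config("spark.sql.parquet.filterPushdown", "true")
      .getOrCreate()

    val events = spark.read.parquet("hdfs:///warehouse/events_parquet")
    // Only the two projected columns are read, and the filter prunes row groups on disk.
    events.select("event_date", "status")
          .filter("status >= 500")
          .show(20)

    spark.stop()
  }
}
```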
Protect your private data with ORC column encryption (Owen O'Malley)
Fine-grained data protection at a column level in data lake environments has become a mandatory requirement to demonstrate compliance with multiple local and international regulations across many industries today. ORC is a self-describing type-aware columnar file format designed for Hadoop workloads that provides optimized streaming reads but with integrated support for finding required rows quickly.
Owen O’Malley dives into the progress the Apache community made for adding fine-grained column-level encryption natively into ORC format, which also provides capabilities to mask or redact data on write while protecting sensitive column metadata such as statistics to avoid information leakage. The column encryption capabilities will be fully compatible with Hadoop Key Management Server (KMS) and use the KMS to manage master keys, providing the additional flexibility to use and manage keys per column centrally.
GDPR/CCPA Compliance and Data Governance in Hadoop (Eyad Garelnabi)
This document provides an overview of how Hortonworks Data Platform (HDP) can help with data governance, security, and compliance requirements. It discusses key challenges around data mapping, consent management, and the right to be forgotten. The document then demonstrates how various HDP components like Apache Atlas, Apache Ranger, Ranger KMS, and Data Steward Studio address these challenges. Specifically, it shows how Atlas can help identify and classify personal data, Ranger provides centralized authorization and auditing, and Ranger KMS supports data encryption. Finally, the document describes a demo scenario and setup to illustrate dynamic row filtering, column masking, and tag-based security policies in HDP.
Don't Let the Spark Burn Your House: Perspectives on Securing Spark (DataWorks Summit)
Apache Spark is emerging as a key enabler for various enterprise use cases including customer intelligence applications, data warehousing, real-time or streaming analytics, recommendation engines, and log processing. Even the most common use case for Spark, business intelligence (BI) or customer intelligence applications via data science, encompasses the complete data worker lifecycle, from file processing, workflows, cleansing, enrichment, model building, and deployment to dashboarding and reporting. However, many aspects of security and governance with Spark are still emerging and pose challenges to enterprise adoption, including authorization, authentication, and comprehensive auditing, as well as metadata harvesting and governance. We will demonstrate some examples of the current state of the art in terms of different open source approaches to Spark security and governance. For example, we will show how Spark technologies can be integrated with enterprise identity providers, how we can enable fine-grained access control for processes, and how to harvest process metadata while providing detailed audits. We will also provide best practices and common usage patterns to secure your Spark clusters and how best to support enterprise compliance and governance needs when using Spark.
Keeping your Enterprise’s Big Data Secure by Owen O’Malley at Big Data Spain ... (Big Data Spain)
Security is a tradeoff between usability and safety and should be driven by the perceived threats.
https://www.bigdataspain.org/2017/talk/keeping-enterprises-big-data-secure
Big Data Spain 2017
November 16th - 17th Kinépolis Madrid
Dynamic Column Masking and Row-Level Filtering in HDP (Hortonworks)
As enterprises around the world bring more of their sensitive data into Hadoop data lakes, balancing the need for democratization of access to data without sacrificing strong security principles becomes paramount. In this webinar, Srikanth Venkat, director of product management for security & governance will demonstrate two new data protection capabilities in Apache Ranger – dynamic column masking and row level filtering of data stored in Apache Hive. These features have been introduced as part of HDP 2.5 platform release.
Big data security challenges are a bit different from those of traditional client-server applications: big data systems are distributed in nature, which introduces unique security vulnerabilities. The Cloud Security Alliance (CSA) has categorized the different security and privacy challenges into four aspects of the big data ecosystem: infrastructure security, data privacy, data management, and integrity and reactive security. Each of these aspects is further divided into the following security challenges:
1. Infrastructure security
a. Secure distributed processing of data
b. Security best practices for non-relational data stores
2. Data privacy
a. Privacy-preserving analytics
b. Cryptographic technologies for big data
c. Granular access control
3. Data management
a. Secure data storage and transaction logs
b. Granular audits
c. Data provenance
4. Integrity and reactive security
a. Endpoint input validation/filtering
b. Real-time security/compliance monitoring
In this talk, we are going to refer to the above classification and identify existing security controls, best practices, and guidelines. We will also paint a big picture of how the collective use of the security controls discussed (Kerberos, TDE, LDAP, SSO, SSL/TLS, Apache Knox, Apache Ranger, Apache Atlas, Ambari Infra, etc.) can address the fundamental security and privacy challenges that span the entire Hadoop ecosystem. We will also briefly discuss recent security incidents involving Hadoop systems.
Speakers
Krishna Pandey, Staff Software Engineer, Hortonworks
Kunal Rajguru, Premier Support Engineer, Hortonworks
Ozone is an object store for Hadoop. Ozone solves the small-file problem of HDFS, allowing users to store trillions of files in Ozone and access them as if they were on HDFS. Ozone plugs into existing Hadoop deployments seamlessly, and programs like Hive, LLAP, and Spark work without any modifications. This talk looks at the architecture, reliability, and performance of Ozone.
In this talk, we will also explore the Hadoop distributed storage layer, a block storage layer that makes this scaling possible, and how we plan to use it to scale HDFS.
We will demonstrate how to install an Ozone cluster, how to create volumes, buckets, and keys, how to run Hive and Spark against HDFS and Ozone file systems using federation, so that users don’t have to worry about where the data is stored. In other words, a full user primer on Ozone will be part of this talk.
Speakers
Anu Engineer, Software Engineer, Hortonworks
Xiaoyu Yao, Software Engineer, Hortonworks
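The volume/bucket/key flow mentioned above looks roughly like the following from the Ozone Java client. This is a sketch only: package locations and method signatures have shifted between early Ozone releases, and the volume, bucket, and key names are placeholders.

```scala
import org.apache.hadoop.hdds.conf.OzoneConfiguration
import org.apache.hadoop.ozone.client.OzoneClientFactory

object OzoneQuickstartSketch {
  def main(args: Array[String]): Unit = {
    // OzoneConfiguration picks up ozone-site.xml from the classpath.
    val conf = new OzoneConfiguration()
    val client = OzoneClientFactory.getRpcClient(conf)
    val store = client.getObjectStore

    // Volumes contain buckets, and buckets contain keys (the objects themselves).
    store.createVolume("vol1")
    val volume = store.getVolume("vol1")
    volume.createBucket("bucket1")
    val bucket = volume.getBucket("bucket1")

    val data = "hello ozone".getBytes("UTF-8")
    val out = bucket.createKey("hello.txt", data.length)
    out.write(data)
    out.close()

    client.close()
  }
}
```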
Treat your enterprise data lake indigestion: Enterprise ready security and go... (DataWorks Summit)
Most enterprises with large data lakes today are flying blind when it comes to understanding how the data in their data lakes is organized, accessed, and utilized to create real business value. Couple this with the need to democratize data, and enterprises often realize they have created a data swamp loaded with all kinds of data assets, without any curation and without appropriate security controls, hoping that developers and analysts can responsibly collaborate to generate insights. In this talk we will provide a broad overview of how organizations can use open source frameworks such as Apache Ranger and Apache Knox to secure their data lakes and Apache Atlas to provide open metadata and governance services for the Hadoop ecosystem. We will give an overview of the new features recently added to each of these Apache projects and how enterprises can leverage them to build a robust security and governance model for their data lakes.
Speaker
Owen O'Malley, Co-Founder & Technical Fellow, Hortonworks
Security and Governance on Hadoop with Apache Atlas and Apache Ranger by Srik... (Artem Ervits)
This document discusses security features of the Hortonworks Data Platform including Apache Ranger, which provides centralized security policies across Hadoop components. It also describes Apache Atlas for metadata tagging and lineage tracking. The demo scenario outlines setting up security policies in Ranger for a bank to control access to customer data for different user groups based on location and data sensitivity. Data masking and row filtering policies are also configured.
Solving the Really Big Tech Problems with IoT (Eric Kavanagh)
The Briefing Room with Dr. Robin Bloor and HPE Security
The Internet of Things brings new technological problems: sensor communications are bi-directional, the scale of data generation points has no precedent and, in this new world, security, privacy and data protection need to go out to the edge. Likely, most of that data lands in Hadoop and Big Data platforms. With the need for rapid analytics never greater, companies try to seize opportunities in tighter time windows. Yet, cyber-threats are at an all-time high, targeting the most valuable of assets—the data.
Register for this episode of The Briefing Room to hear Analyst Dr. Robin Bloor explain the implications of today's divergent data forces. He’ll be briefed by Reiner Kappenberger of HPE, who will discuss how a recent innovation -- NiFi -- is revolutionizing the big data ecosystem. He’ll explain how this technology dramatically simplifies data flow design, enabling a new era of business-driven analysis, while also protecting sensitive data.
Using Apache Hadoop and related technologies as a data warehouse has been an area of interest since the early days of Hadoop. In recent years Hive has made great strides towards enabling data warehousing by expanding its SQL coverage, adding transactions, and enabling sub-second queries with LLAP. But data warehousing requires more than a full powered SQL engine. Security, governance, data movement, workload management, monitoring, and user tools are required as well. These functions are being addressed by other Apache projects such as Ranger, Atlas, Falcon, Ambari, and Zeppelin. This talk will examine how these projects can be assembled to build a data warehousing solution. It will also discuss features and performance work going on in Hive and the other projects that will enable more data warehousing use cases. These include use cases like data ingestion using merge, support for OLAP cubing queries via Hive’s integration with Druid, expanded SQL coverage, replication of data between data warehouses, advanced access control options, data discovery, and user tools to manage, monitor, and query the warehouse.
Running Enterprise Workloads with an Open Source Hybrid Cloud Data Architecture (DataWorks Summit)
Cloud is turbocharging the Enterprise IT landscape with agility and flexibility. And now, discussions of cloud architecture dominate Enterprise IT. Cloud is enabling many ephemeral on-demand use cases which is a game-changing opportunity for analytic workloads. But all of this comes with the challenges of running enterprise workloads in the cloud securely and with ease.
With the convergence of cloud, IoT, and big data technologies, enterprises increasingly have their data spread across multiple data lakes on-prem and in cloud data lake stores in many geographies and across multiple public cloud vendor platforms, for example, due to regulatory and compliance mandates that limit cross-border data transfer. With the proliferation of data types and sources in this complex landscape, the process of discovery, provisioning, and running relevant workloads on this data to gather insights has become more complex. Additionally, gaining global visibility into the business context, usage, and trustworthiness of data requires a centralized view of all data and metadata, security controls, data access, and monitoring.
All of these challenges create a significant chasm between initial data capture and subsequent data insights generation to drive value creation. Therefore, enterprises now require a “global insight fabric” that can find a happy medium between adequate rules and policies of data governance while providing a trusted environment for users to collaborate and share data responsibly in order to create value.
In this talk, we will outline how Hortonworks DataPlane Service (DPS) can help customers build a global insight fabric that can span storing and analyzing data within data centers to implementing an open source hybrid architecture that takes advantage of the cloud's elasticity and new use cases. We will get a personal view of the challenges faced in safely moving data from on-premises data centers into multiple public clouds, safeguarding it through replication, and then applying consistent security and governance policies across diverse environments to deliver trusted data and insights to the business. We will highlight how DataPlane Service can help enterprises with this hybrid architectural journey, and how open source architectures are enabling this transformation across enterprises.
Speaker: Alan Gates, Co-Founder, Hortonworks
You got your cluster installed and configured. You celebrate, until the party is ruined by your company's security officer stamping a big "Deny" on your Hadoop cluster. And oops! You cannot place any data onto the cluster until you can demonstrate it is secure. In this session you will learn the tips and tricks to fully secure your cluster for data at rest, data in motion, and all the apps including Spark. Your security officer can then join your Hadoop revelry (unless you don't authorize him to, with your newly acquired admin rights).
This document discusses securing Hadoop and Spark clusters. It begins with an overview of Hadoop security in four steps: authentication, authorization, data protection, and audit. It then discusses specific Hadoop security components like Kerberos, Apache Ranger, HDFS encryption, Knox gateway, and data encryption in motion and at rest. For Spark security, it covers authentication using Kerberos, authorization with Ranger, and encrypting data channels. The document provides demos of HDFS encryption and discusses common gotchas with Spark security.
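HDFS transparent encryption, one of the at-rest protections mentioned above, is usually set up from the CLI, but the same operation is available programmatically. The sketch below assumes the key "warehouse-key" already exists in the KMS (for example, created with the hadoop key create command) and that the target directory is empty; the NameNode URI is a placeholder.

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hdfs.client.HdfsAdmin

object CreateEncryptionZone {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val uri = new URI("hdfs://namenode:8020")

    val fs = FileSystem.get(uri, conf)
    val zone = new Path("/secure/warehouse")
    fs.mkdirs(zone)

    // Every file written under the zone is transparently encrypted with a per-file key
    // that is wrapped by the "warehouse-key" master key held in the KMS.
    val admin = new HdfsAdmin(uri, conf)
    admin.createEncryptionZone(zone, "warehouse-key")
  }
}
```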
Understanding Your Crown Jewels: Finding, Organizing, and Profiling Sensitive... (DataWorks Summit)
Emerging regulations such as GDPR and the increasing incidence of data breaches such as the one at Equifax are bringing a firm's handling and processing of sensitive data, such as the personal data of its customers and employees, into focus. Enterprises now need to be able to discover and manage sensitive data usage to answer compliance and regulatory reporting requirements and to prevent reputational damage in the event of a data breach. In this talk, we will outline how, using the foundation of open source technologies such as Apache Ranger and Apache Atlas together with the recently announced Hortonworks DataPlane Service platform components, data stewards, analysts, and data engineers can better understand their sensitive data assets across multiple data lakes at scale. We will demonstrate how enterprises can get a comprehensive 360-degree view of their sensitive data, including where such data is located, who is accessing what data and how frequently, when such data was accessed, deleted, or moved, how the data is protected, and where it came from. In addition, we will show how such data can be discovered and profiled to understand its characteristics. We will also demonstrate organization and classification use cases for such sensitive data to facilitate its curation into collections for various business purposes, and how such collections can be aggregated and summarized to provide a single view of the sensitive data footprint in an enterprise from risk management and audit/compliance/forensics perspectives.
Speakers
Srikanth Venkat, Senior Director, Product Management, Hortonworks
Ashwin Rajeeva, Founder, Vidyash OU
Big data is an increasingly powerful enterprise asset, and this talk will explore the relationship between big data and cyber security: how we preserve privacy whilst exploiting the advantages of data collection and processing. Big data technologies give both governments and corporations powerful tools to offer more efficient and personalized services. The rapid adoption of these technologies has created tremendous social benefits. Unfortunately, an unwanted side effect is the rich pickings potentially available to those with malicious intentions. Increasingly, the sophisticated cyber attacker is able to exploit the rich array of public data to build detailed profiles on their adversaries in support of their malicious aims.
Similar to Fine Grain Access Control for Big Data: ORC Column Encryption (20)
Running An Apache Project: 10 Traps and How to Avoid Them (Owen O'Malley)
This document discusses 10 common mistakes made when running an Apache project and how to avoid them. These include starting development on GitHub without an IP agreement, keeping the project secret instead of promoting it, not fostering diversity among contributors, holding exclusive in-person meetings, including binary artifacts, having a high barrier for committer status, ignoring trademark issues, not rewarding open source work, engaging in stealth development, and licensing problems. The key recommendations are to build an open community, do work transparently, train employees on open source best practices, and ensure proper licensing.
This document summarizes several big data systems and their approaches to providing ACID transactions. It finds that supporting object stores like S3 is becoming critical as streaming data ingest grows. Hive ACID is adding support for Presto and Impala queries. Iceberg is improving by adding delta files and Hive support. Overall, this is an active area of development that will continue changing in the next six months to better handle tasks like GDPR data removal and updating large datasets.
This document provides an overview of the ORC file format. It describes the key requirements and design decisions, including file structure, stripe structure, encoding columns, run length encoding, compression, indexing, and versioning. It also discusses optimizations, debugging, and using ORC from SQL, Java, C++, and the command line. The document is intended to help users and developers better understand how ORC works.
Hive tables are an integral part of the big data ecosystem, but the simple directory-based design that made them ubiquitous is increasingly problematic. Netflix uses tables backed by S3 that, like other object stores, don’t fit this directory-based model: listings are much slower, renames are not atomic, and results are eventually consistent. Even tables in HDFS are problematic at scale, and reliable query behavior requires readers to acquire locks and wait.
Owen O’Malley and Ryan Blue offer an overview of Iceberg, a new open source project that defines a new table layout that addresses the challenges of current Hive tables, with properties specifically designed for cloud object stores such as S3. Iceberg is an Apache-licensed open source project. It specifies the portable table format and standardizes many important features (a minimal usage sketch follows the list below), including:
* All reads use snapshot isolation without locking.
* No directory listings are required for query planning.
* Files can be added, removed, or replaced atomically.
* Full schema evolution supports changes in the table over time.
* Partitioning evolution enables changes to the physical layout without breaking existing queries.
* Data files are stored as Avro, ORC, or Parquet.
* Support for Spark, Hive, and Presto.
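Here is the minimal usage sketch referenced above: creating an Iceberg table with a Hadoop catalog. Package names follow the later org.apache.iceberg coordinates (earlier releases lived under com.netflix.iceberg), and the schema, partition spec, and location are invented for illustration.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.iceberg.{PartitionSpec, Schema}
import org.apache.iceberg.hadoop.HadoopTables
import org.apache.iceberg.types.Types

object CreateIcebergTableSketch {
  def main(args: Array[String]): Unit = {
    // The schema and partition spec live in the table metadata, so query planning
    // never needs a directory listing and partitioning can evolve later.
    val schema = new Schema(
      Types.NestedField.required(1, "id", Types.LongType.get()),
      Types.NestedField.optional(2, "ts", Types.TimestampType.withZone()),
      Types.NestedField.optional(3, "payload", Types.StringType.get())
    )
    val spec = PartitionSpec.builderFor(schema).day("ts").build()

    val tables = new HadoopTables(new Configuration())
    val table = tables.create(schema, spec, "s3a://my-bucket/warehouse/events_iceberg")

    // Each commit produces a new snapshot; readers get snapshot isolation without locks.
    println(table.schema())
  }
}
```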
Fast Spark Access To Your Complex Data - Avro, JSON, ORC, and Parquet (Owen O'Malley)
The landscape for storing your big data is quite complex, with several competing formats and different implementations of each format. Understanding your use of the data is critical for picking the format. Depending on your use case, the different formats perform very differently. Although you can use a hammer to drive a screw, it isn’t fast or easy to do so.
The use cases that we’ve examined are:
reading all of the columns
reading a few of the columns
filtering using a filter predicate
While previous work has compared the size and speed from Hive, this presentation will present benchmarks from Spark including the new work that radically improves the performance of Spark on ORC. This presentation will also include tips and suggestions to optimize the performance of your application while reading and writing the data.
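A hedged sketch of the "read a few columns with a filter predicate" case using Spark's native ORC reader follows: the dataset path and column names are invented, and both configuration keys are standard Spark SQL options (the native reader is part of the newer Spark-on-ORC work mentioned above).

```scala
import org.apache.spark.sql.SparkSession

object OrcProjectionAndFilter {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orc-read-use-case")
      // Use the vectorized native ORC reader rather than the older Hive-based one.
      .config("spark.sql.orc.impl", "native")
      // Push the predicate into ORC so row groups are skipped via min/max statistics.
      .config("spark.sql.orc.filterPushdown", "true")
      .getOrCreate()

    val trips = spark.read.orc("hdfs:///benchmarks/taxi_orc")
    // Only the projected column streams are read; stripes and row groups whose
    // statistics rule out the predicate are never decompressed.
    trips.select("pickup_datetime", "fare_amount")
         .filter("fare_amount > 100.0")
         .count()

    spark.stop()
  }
}
```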
File Format Benchmarks - Avro, JSON, ORC, & Parquet (Owen O'Malley)
Hadoop Summit June 2016
The landscape for storing your big data is quite complex, with several competing formats and different implementations of each format. Understanding your use of the data is critical for picking the format. Depending on your use case, the different formats perform very differently. Although you can use a hammer to drive a screw, it isn’t fast or easy to do so. The use cases that we’ve examined are:
* reading all of the columns
* reading a few of the columns
* filtering using a filter predicate
* writing the data
Furthermore, it is important to benchmark on real data rather than synthetic data. We used the GitHub logs data available freely from http://githubarchive.org. We will make all of the benchmark code open source so that our experiments can be replicated.
Protecting Enterprise Data in Apache Hadoop (Owen O'Malley)
From Hadoop Summit 2015, San Jose
From Apache BigData 2016, Vancouver
Hadoop has long had strong authentication via integration with Kerberos, authorization via User/Group/Other HDFS permissions, and auditing via the audit log. Recent developments in Hadoop have added HDFS file access control lists, pluggable encryption key provider APIs, HDFS snapshots, and HDFS encryption zones. These features combine to give important new data protection features that every company should be using to protect their data. This talk will cover what the new features are and when and how to use them in enterprise production environments. Upcoming features including columnar encryption in the ORC columnar format will also be covered.
Structor - Automated Building of Virtual Hadoop Clusters (Owen O'Malley)
This document describes Structor, a tool that automates the creation of virtual Hadoop clusters using Vagrant and Puppet. It allows users to quickly set up development, testing, and demo environments for Hadoop without manual configuration. Structor addresses the difficulties of manually setting up Hadoop clusters, particularly around configuration, security testing, and experimentation. It provides pre-defined profiles that stand up clusters of different sizes on various operating systems with or without security enabled. Puppet modules configure and provision the Hadoop services while Vagrant manages the underlying virtual machines.
The document discusses adding ACID properties like delete, update, and insert capabilities to the Hive data warehouse software. It describes the design which involves stitching together buckets of data and compacting files to support modifications. The document also covers limitations, development phases, and reasons for not using HBase for the task.
ORC File and Vectorization - Hadoop Summit 2013 (Owen O'Malley)
Eric Hanson and I gave this presentation at Hadoop Summit 2013:
Hive’s RCFile has been the standard format for storing Hive data for the last 3 years. However, RCFile has limitations because it treats each column as a binary blob without semantics. Hive 0.11 added a new file format named Optimized Row Columnar (ORC) file that uses and retains the type information from the table definition. ORC uses type specific readers and writers that provide light weight compression techniques such as dictionary encoding, bit packing, delta encoding, and run length encoding — resulting in dramatically smaller files. Additionally, ORC can apply generic compression using zlib, LZO, or Snappy on top of the lightweight compression for even smaller files. However, storage savings are only part of the gain. ORC supports projection, which selects subsets of the columns for reading, so that queries reading only one column read only the required bytes. Furthermore, ORC files include light weight indexes that include the minimum and maximum values for each column in each set of 10,000 rows and the entire file. Using pushdown filters from Hive, the file reader can skip entire sets of rows that aren’t important for this query.
Columnar storage formats like ORC reduce I/O and storage use, but it’s just as important to reduce CPU usage. A technical breakthrough called vectorized query execution works nicely with column store formats to do this. Vectorized query execution has proven to give dramatic performance speedups, on the order of 10X to 100X, for structured data processing. We describe how we’re adding vectorized query execution to Hive, coupling it with ORC with a vectorized iterator.
The document describes the ORC file format. It discusses the file structure, stripe structure, file layout, integer and string column serialization, compression techniques, projection and predicate filtering capabilities, example file sizes, and compares ORC files to RCFiles and Trevni. The document was authored by Owen O'Malley of Hortonworks for December 2012.
This document summarizes techniques for optimizing Hive queries, including recommendations around data layout, format, joins, and debugging. It discusses partitioning, bucketing, sort order, normalization, text format, sequence files, RCFiles, ORC format, compression, shuffle joins, map joins, sort merge bucket joins, count distinct queries, using explain plans, and dealing with skew.
Andrew Ryan describes how Facebook operates Hadoop to provide access as a shared resource between groups.
More information and video at:
http://developer.yahoo.com/blogs/hadoop/posts/2011/02/hug-feb-2011-recap/
The next generation of Hadoop MapReduce
Arun C. Murthy presented the plans for the next generation of Apache Hadoop MapReduce. The MapReduce framework has hit a scalability limit around 4,000 machines. We are developing the next generation of MapReduce that factors the framework into a generic resource scheduler and a per-job, user-defined component that manages the application execution. Since downtime is more expensive at scale, high availability is built in from the beginning, as are security and multi-tenancy to support many users on the larger clusters. The new architecture will also increase innovation, agility, and hardware utilization.
More information and video available at:
http://developer.yahoo.com/blogs/hadoop/posts/2011/02/hug-feb-2011-recap/
Bay Area HUG Feb 2011 introduction and Yahoo refocusing on Apache Hadoop releases.
More information at:
http://developer.yahoo.com/blogs/hadoop/posts/2011/02/hug-feb-2011-recap/
Fine Grain Access Control for Big Data: ORC Column Encryption
1. Fine Grained Access Control for Big Data: ORC Column Encryption
Owen O’Malley
owen@cloudera.com
@owen_omalley
March 2019
Srikanth Venkat
svenkat@cloudera.com
@srikvenk