Mark Rittman gave a presentation on the future of analytics on Oracle Big Data Appliance. He discussed how Hadoop has enabled highly scalable and affordable cluster computing using technologies like MapReduce, Hive, Impala, and Parquet. Rittman also talked about how these technologies have improved query performance and made Hadoop suitable for both batch and interactive/ad-hoc querying of large datasets.
Data Integration and Data Warehousing for Cloud, Big Data and IoT: What’s Ne...
Mark Rittman presented at Big Data World in London in March 2017 on data integration and data warehousing for cloud, big data, and IoT. He discussed the history of data warehousing and how it has evolved from traditional RDBMS implementations to embrace big data technologies like Hadoop. He described how cloud data warehouse offerings from Google BigQuery and Amazon Redshift combine the scalability of big data with the structure of data warehousing. Rittman also covered new approaches to ETL using data pipelines, schema discovery using machine learning, emerging open-source BI tools, and his current work in these areas.
The document discusses a company's migration from their in-house computation engine to Apache Spark. It describes five key issues encountered during the migration process: 1) difficulty adapting to Spark's low-level RDD API, 2) limitations of DataSource predicates, 3) incomplete Spark SQL functionality, 4) performance issues with round trips between Spark and other systems, and 5) OutOfMemory errors due to large result sizes. Lessons learned include being aware of new Spark features and data formats, and designing architectures and data structures to minimize data movement between systems.
A Walk Through the Kimball ETL Subsystems with Oracle Data Integration - Coll...
Big data integration is an excellent feature of the Oracle Data Integration product suite (Oracle Data Integrator, GoldenGate, and Enterprise Data Quality). But not all analytics require big data technologies; labor cost, revenue, and expense reporting, for example, are well served without them. Ralph Kimball, one of the original architects of the dimensional model in data warehousing, spent much of his career building an enterprise data warehouse methodology that can meet these reporting needs. His book, "The Data Warehouse ETL Toolkit", is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories: Extracting, Cleaning & Conforming, Delivering, and Managing, describing how the Oracle Data Integration products are perfectly suited to the Kimball approach.
Presented at Collaborate16 in Las Vegas.
Build a simple data lake on AWS using a combination of services, including AWS Glue Data Catalog, AWS Glue Crawlers, AWS Glue Jobs, AWS Glue Studio, Amazon Athena, Amazon Relational Database Service (Amazon RDS), and Amazon S3.
Link to the blog post and video: https://garystafford.medium.com/building-a-simple-data-lake-on-aws-df21ca092e32
The document discusses building data lakes on AWS. It describes how data lakes extend the traditional data warehouse approach by allowing storage of both structured and unstructured data at massive scales. Amazon S3 provides durable, available, scalable, and easy-to-use storage for the data lake. AWS Glue crawls data to create a data catalog and can automate ETL processes. Amazon Athena and Amazon EMR enable interactive analysis and big data processing through SQL and Spark. The data lake architecture on AWS supports a variety of analytical use cases.
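The Glue-and-Athena pattern described above can be sketched as the SQL that Athena runs over files in S3. The bucket, database, and column names below are hypothetical, and the `submit` helper merely stands in for the real boto3 `start_query_execution` call; treat this as a sketch of the pattern, not a working AWS deployment.

```python
# Sketch of the Athena-over-S3 data lake pattern. All names here
# (sales_lake, orders, the bucket) are illustrative assumptions.
DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS sales_lake.orders (
    order_id   string,
    amount     double,
    order_date date
)
STORED AS PARQUET
LOCATION 's3://example-data-lake/orders/'
"""

QUERY = """
SELECT order_date, SUM(amount) AS daily_revenue
FROM sales_lake.orders
GROUP BY order_date
ORDER BY order_date
"""

def submit(sql: str) -> dict:
    """Stand-in for boto3's athena start_query_execution call; a real
    setup would pass QueryString=sql plus an S3 output location."""
    return {"QueryString": sql.strip()}

if __name__ == "__main__":
    print(submit(QUERY)["QueryString"].splitlines()[0])
```

In a real pipeline a Glue crawler would infer the table schema instead of the hand-written DDL, registering it in the Glue Data Catalog for Athena and EMR to share.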
Doug Bateman, a principal data engineering instructor at Databricks, presented on how to build a Lakehouse architecture. He began by introducing himself and his background. He then discussed the goals of describing key Lakehouse features, explaining how Delta Lake enables it, and developing a sample Lakehouse using Databricks. The key aspects of a Lakehouse are that it supports diverse data types and workloads while enabling using BI tools directly on source data. Delta Lake provides reliability, consistency, and performance through its ACID transactions, automatic file consolidation, and integration with Spark. Bateman concluded with a demo of creating a Lakehouse.
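The ACID guarantees mentioned above come from Delta Lake's ordered transaction log. As a conceptual toy (not Delta's actual on-disk format), an append-only log that readers replay into a snapshot captures the idea, including time travel to an earlier version:

```python
import json

class ToyTableLog:
    """Toy append-only transaction log illustrating the snapshot idea
    behind Delta Lake's _delta_log directory (not the real format)."""

    def __init__(self):
        self.entries = []  # committed JSON actions, in commit order

    def commit(self, added_files):
        # A commit becomes visible atomically: one appended log entry.
        self.entries.append(json.dumps({"add": added_files}))

    def snapshot(self, version=None):
        # Readers replay the log up to a version -> a consistent view.
        upto = self.entries if version is None else self.entries[:version]
        files = []
        for entry in upto:
            files.extend(json.loads(entry)["add"])
        return files

log = ToyTableLog()
log.commit(["part-0001.parquet"])
log.commit(["part-0002.parquet"])
print(log.snapshot())   # latest snapshot
print(log.snapshot(1))  # "time travel" back to version 1
```

Because each reader derives its file list from a fixed log prefix, concurrent writers never leave a reader with a half-applied change, which is the essence of the consistency Delta Lake adds over plain Parquet files.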
knowIT is a collaborative semantic wiki used by Johnson & Johnson to map their IT systems, applications, servers and stakeholders. It aims to capture knowledge about these informatics systems, their relationships and components to answer questions, facilitate knowledge sharing and enable self-service. The wiki uses Semantic MediaWiki and has grown to include systems portfolio management, configuration management and other features to increase IT systems knowledge across the organization.
Data Con LA 2020
Description
In this session, I introduce the Amazon Redshift lake house architecture which enables you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights. With a lake house architecture, you can store data in open file formats in your Amazon S3 data lake.
Speaker
Antje Barth, Amazon Web Services, Sr. Developer Advocate, AI and Machine Learning
The document discusses machine learning and artificial intelligence applications inside and outside of Snowflake's cloud data warehouse. It provides an overview of Snowflake and its architecture. It then discusses how machine learning can be implemented directly in the database using SQL, user-defined functions, and stored procedures. However, it notes that pure coding is not suitable for all users and that automated machine learning outside the database may be preferable to enable more business analysts and power users. It provides an example of using Amazon Forecast for time series forecasting and integrating it with Snowflake.
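As a toy illustration of the in-database approach, here is the kind of small, self-contained forecasting function that could be wrapped as a SQL user-defined function. The linear-trend method and the sample series are illustrative assumptions, not from the document, and stand in for the heavier lifting a managed service like Amazon Forecast performs.

```python
def linear_trend_forecast(series, horizon):
    """Least-squares line fit plus extrapolation: the kind of small,
    pure function that could be registered as a SQL UDF (illustrative)."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    # Ordinary least squares for slope and intercept.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    # Extrapolate the fitted line `horizon` steps past the series.
    return [intercept + slope * (n + h) for h in range(horizon)]

print(linear_trend_forecast([10, 12, 14, 16], 2))  # -> [18.0, 20.0]
```

A function like this pushes the computation to where the data already lives, which is the trade-off the document weighs against calling an external, automated service.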
INFORMATICA ONLINE TRAINING BY QUONTRA SOLUTIONS WITH PLACEMENT ASSISTANCE
We offer online IT training with placement and project assistance across a range of platforms, delivered by real-time industry consultants, providing quality training for IT professionals, corporate clients, and students. Quontra Solutions offers extensive training in Informatica along with placement support, including help with resume preparation and mock interviews.
Emphasis is given to the important topics that are essential and most used in real-time projects. Quontra Solutions is an online training leader when it comes to high-end, effective, and efficient IT training. We have always focused, and still focus, on providing the most effective and competent training to both students and professionals who are eager to enrich their technical skills.
Training Features at Quontra Solutions:
We believe that online training should be measured by three major aspects: quality, content, and the relationship between trainer and student. Beyond the classes themselves, the material we provide is in tune with the latest IT training standards, so students need not worry about whether the training they receive is outdated.
Course content:
• Basics of data warehousing concepts
• Power center components
• Informatica concepts and overview
• Sources
• Targets
• Transformations
• Advanced Informatica concepts
Please Visit us for the Demo Classes, we have regular batches and weekend batches.
QUONTRASOLUTIONS
204-226 Imperial Drive, Rayners Lane, Harrow HA2 7HH
Phone : +44 (0)20 3734 1498 / 99
Email: info@quontrasolutions.co.uk
How To Leverage OBIEE Within A Big Data Architecture
If you've invested in OBIEE and want to start exploring the use of Big Data technology, this presentation talks about how and why you might want to use OBIEE as the common visualization layer across both.
Enterprise Data World 2018 - Building Cloud Self-Service Analytical Solution
This session will cover building a modern data warehouse by migrating from a traditional DW platform into the cloud, using Amazon Redshift and the cloud ETL tool Matillion to provide self-service BI for the business audience. It will cover the technical migration path from a DW with PL/SQL ETL to Amazon Redshift via Matillion ETL, with a detailed comparison of modern ETL tools. Moreover, the talk will focus on working backward through the process, i.e. starting from the business audience and the needs that drive changes in the old DW. Finally, it will cover the idea of self-service BI, and the author will share a step-by-step plan for building an efficient self-service environment using the modern BI platform Tableau.
The document discusses Talend's big data solutions and sandbox. It introduces Rajan Kanitkar as a senior solutions engineer at Talend with 15 years of experience in data integration. It then summarizes Talend's big data platform and ecosystem including Hadoop, MapReduce, HDFS, Hive and more. The rest of the document describes Talend's sandbox, which provides a pre-configured virtual image with Hadoop distributions, Talend software, and data scenarios to demonstrate ingesting, transforming and delivering big data.
As a follow-on to the presentation "Building an Effective Data Warehouse Architecture", this presentation will explain exactly what big data is and its benefits, including use cases. We will discuss how Hadoop, the cloud, and massively parallel processing (MPP) are changing the way data warehouses are built. We will talk about hybrid architectures that combine on-premises data with data in the cloud, as well as relational data with non-relational (unstructured) data. We will look at the benefits of MPP over SMP and at how to integrate data from Internet of Things (IoT) devices. You will learn what a modern data warehouse should look like and how a data lake and Hadoop fit in. In the end you will have guidance on the best solution for your data warehouse going forward.
The document discusses Oracle's new approach to business analytics and visualization. It notes that traditional corporate BI systems are viewed as inflexible and analytics are only for a privileged few. However, it argues there is still hope as analytics can provide a 10x ROI. The new approach involves visual analytics embedded in every Oracle solution across mobile, cloud, on-premises and big data to provide a single, integrated platform that allows business users to easily access, blend and scale insights from various data sources.
The Yorkshire Terrier, commonly known as the Yorkie, is descended from the Clydesdale Terrier and Paisley Terrier, which are now extinct. It also has ancestry from the Skye Terrier and Airedale Terrier. Yorkies are small dogs with black fur and a brown underside. They are intelligent but can be snappy. This document provides a list of supplies needed to care for a Yorkie and background on the breed. It encourages adopting from rescues and shares details about a Yorkie named Magellan who is available for adoption.
The document is about Julian Beever, an artist who has been creating optical illusions on sidewalks in Europe, the United States, and Australia for the past 10 years. His creations look so realistic that they appear three-dimensional even though they are actually drawings on the pavement, surprising those who see them.
The document discusses important conditions for starting new businesses and defines economic terms. The three most important conditions are low taxes, skilled staff, and low interest rates. It also provides definitions for various economic terms like interest rate, exchange rate, inflation rate, labor force, tax incentives, and balance of trade.
This infographic document provides information on what infographics are and why they are effective communication tools. It discusses that infographics tell visual stories using images and graphics to engage audiences better than plain text. Effective infographics are simple, visually pleasing, and help explain complex topics. They improve comprehension and retention of information by leveraging human visual processing abilities.
Ashutosh Rubber Pvt Ltd is an Indian manufacturer and exporter of rubber products established in 2007. It produces over 9000 rubber products for industries like automotive, generators, and oil. The company exports to countries including Australia, Singapore, and Denmark. It has 26-50 employees and certifications for quality standards.
This document discusses building a city as a hands-on project to develop fine motor, visual motor, problem solving, imagination, and creativity skills in children. The project uses construction materials to build a cityscape that allows children to use their hands to physically construct buildings and structures while also using their visual skills to envision what they are creating, their problem solving to determine how pieces fit together, and their imagination to design the city layout.
The document discusses genetic engineering, a genetic modification technique for improving the quality of crops and animals. The technology has been widely used in agriculture, animal husbandry, and medicine to produce better products that benefit society. Examples of its application include crops that are resistant to disease and extreme weather, and livestock
This document discusses natural resources, human resources, and the classification of natural resources. Natural resources are divided into renewable and non-renewable resources, while human resources are measured by the number of people working in the agricultural sector.
The text contains practice questions on semantics and proverbs. It gives examples of words and expressions that carry multiple or figurative meanings, asks the reader to identify the actual meaning or the appropriate expression, and also provides answers to the questions.
CFMS provides facility management services including housekeeping, maintenance, and production support staff out of its headquarters in Pune and offices in Ahmedabad. It has a highly trained team and uses quality equipment and chemicals. CFMS is committed to understanding client needs and creating customized service programs. It has an in-house training center to deploy trained staff from day one and implements consistent management systems. Some of CFMS' clients include restaurants, hotels, and automation companies in Pune.
This document is about an extraordinary person who helped Lorenzo overcome his fears and lift his head out of his saucepan. This person showed him his strengths and helped him express his fears. Now Lorenzo can play with others, and others see his qualities, although he remains the same.
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
Mark Rittman, CTO of Rittman Mead, gave a keynote presentation on big data for Oracle developers and DBAs with a focus on Apache Spark, real-time analytics, and predictive analytics. He discussed how Hadoop can provide flexible, cheap storage for logs, feeds, and social data. He also explained several Hadoop processing frameworks like Apache Spark, Apache Tez, Cloudera Impala, and Apache Drill that provide faster alternatives to traditional MapReduce processing.
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
Hadoop and NoSQL platforms initially focused on Java developers and slow but massively scalable MapReduce jobs as an alternative to high-end but limited-scale analytic RDBMS engines. Apache Hive opened up Hadoop to non-programmers by adding a SQL query engine and a relational-style metadata layer over raw HDFS storage, and since then open-source initiatives such as Hive Stinger, Cloudera Impala and Apache Drill, along with proprietary solutions from closed-source vendors, have extended SQL-on-Hadoop's capabilities into areas such as low-latency ad-hoc queries, ACID-compliant transactions and schema-less data discovery, at massive scale and with compelling economics.
In this session we'll focus on the technical foundations of SQL-on-Hadoop, first reviewing the basic platform Apache Hive provides and then looking in more detail at how ad-hoc querying, ACID-compliant transactions and data discovery engines work, along with the more specialised underlying storage each now works best with. We'll also look to the future to see how SQL querying, data integration and analytics are likely to come together over the next five years to make Hadoop the default platform for running mixed old-world/new-world analytics workloads.
New World Hadoop Architectures (& What Problems They Really Solve) for Oracle...
Most DBAs are aware something interesting is going on with big data and the Hadoop product ecosystem that underpins it, but aren't so clear about what each component in the stack does, what problem each part solves, and why those problems couldn't be solved using the old approach. We'll look at where it's all going with the advent of Spark and machine learning, what's happening with ETL, metadata and analytics on this platform, and why IaaS and data-warehousing-as-a-service will have such a big impact, sooner than you think.
Microsoft's Big Play for Big Data- Visual Studio Live! NY 2012
This document discusses Microsoft's efforts to make big data technologies like Hadoop more accessible through its products. It describes Hadoop, MapReduce, HDFS, and other big data concepts. It then outlines Microsoft's project to create a Hadoop distribution that runs on Windows Server and Windows Azure, including building an ODBC driver to allow tools like Excel to query Hadoop. This will help bring big data to more business users and integrate it with Microsoft's existing BI technologies.
Big Data Developers Moscow Meetup 1 - sql on hadoop
This document summarizes a meetup about Big Data and SQL on Hadoop. The meetup included discussions on what Hadoop is, why SQL on Hadoop is useful, what Hive is, and introduced IBM's BigInsights software for running SQL on Hadoop with improved performance over other solutions. Key topics included HDFS file storage, MapReduce processing, Hive tables and metadata storage, and how BigInsights provides a massively parallel SQL engine instead of relying on MapReduce.
This document discusses big data analytics platforms and techniques. It describes various open-source projects like Hadoop, Spark, and Mahout that can perform analytics on large datasets. It also discusses commercial analytics platforms from vendors like SAS, Alpine, and Revolution Analytics. Spark is highlighted as gaining rapid adoption for its speed and expanding machine learning capabilities. Key questions are raised about which open-source projects and commercial offerings will emerge as leaders in their categories.
The document discusses cloud computing systems and MapReduce. It provides background on MapReduce, describing how it works and how it was inspired by functional programming concepts like map and reduce. It also discusses some limitations of MapReduce, noting that it is not designed for general-purpose parallel processing and can be inefficient for certain types of workloads. Alternative approaches like MRlite and DCell are proposed to provide more flexible and efficient distributed processing frameworks.
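The functional-programming lineage mentioned above is easy to see in a pure-Python word count that mimics the three MapReduce phases; the documents are made-up examples:

```python
from collections import defaultdict
from functools import reduce

def map_phase(doc):
    # Mapper: emit (word, 1) pairs, as in the classic word-count example.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key across all mappers.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: fold each key's values, mirroring functional reduce().
    return {k: reduce(lambda a, b: a + b, vs) for k, vs in groups.items()}

docs = ["big data big ideas", "big clusters"]
pairs = [p for d in docs for p in map_phase(d)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 3, 'data': 1, 'ideas': 1, 'clusters': 1}
```

A real MapReduce framework distributes the mappers and reducers across machines and handles the shuffle over the network, but the dataflow is exactly this three-step pipeline.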
This document provides an overview of big data processing techniques including batch processing using MapReduce and Hive, iterative batch processing using Spark, stream processing using Apache Storm, and OLAP over big data using Dremel and Druid. It discusses techniques such as MapReduce, Hive, Spark RDDs, and Storm tuples for processing large datasets and compares small versus big data approaches. Example usages and technologies for different processing types are also outlined.
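As a minimal sketch of the stream-processing style contrasted with batch above, the following pure-Python tumbling-window count mirrors the kind of per-window aggregate a Storm or Spark Streaming job computes; the event data and window size are illustrative assumptions:

```python
from collections import Counter

def tumbling_window_counts(events, window):
    """Group (timestamp, key) events into fixed-size windows and count
    occurrences per key; a toy version of a streaming aggregation."""
    windows = {}
    for ts, key in events:
        bucket = ts // window * window  # window start this event falls in
        windows.setdefault(bucket, Counter())[key] += 1
    return windows

events = [(1, "click"), (3, "view"), (6, "click"), (7, "click")]
print(tumbling_window_counts(events, 5))
```

Here the whole event list is in memory; a real stream processor computes the same counts incrementally as events arrive and emits each window's result when it closes.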
This document provides an agenda and overview for a conference session on Big Data and NoSQL for database and BI professionals held from April 10-12 in Chicago, IL. The session will include an overview of big data and NoSQL technologies, then deeper dives into Hadoop, NoSQL databases like HBase, and tools like Hive, Pig, and Sqoop. There will also be demos of technologies like HDInsight, Elastic MapReduce, Impala, and running MapReduce jobs.
Using Oracle Big Data Discovery as a Data Scientist's Toolkit - Mark Rittman
As delivered at Trivadis Tech Event 2016 - how Big Data Discovery along with Python and pySpark was used to build predictive analytics models against wearables and smart home data
Unlock the value in your big data reservoir using oracle big data discovery a... - Mark Rittman
The document discusses Oracle Big Data Discovery and how it can be used to analyze and gain insights from data stored in a Hadoop data reservoir. It provides an example scenario where Big Data Discovery is used to analyze website logs, tweets, and website posts and comments to understand popular content and influencers for a company. The data is ingested into the Big Data Discovery tool, which automatically enriches the data. Users can then explore the data, apply additional transformations, and visualize relationships to gain insights.
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use... - Mark Rittman
OBIEE12c comes with an updated version of Essbase that, in this release, focuses entirely on the query-acceleration use case. This presentation looks at the new release and explains how the new BI Accelerator Wizard manages the creation of Essbase cubes to accelerate OBIEE query performance.
Data Integration and Data Warehousing for Cloud, Big Data and IoT: What’s Ne...Rittman Analytics
Mark Rittman presented at Big Data World in London in March 2017 on data integration and data warehousing for cloud, big data, and IoT. He discussed the history of data warehousing and how it has evolved from traditional RDBMS implementations to embrace big data technologies like Hadoop. He described how cloud data warehouse offerings from Google BigQuery and Amazon Redshift combine the scalability of big data with the structure of data warehousing. Rittman also covered new approaches to ETL using data pipelines, schema discovery using machine learning, emerging open-source BI tools, and his current work in these areas.
The document discusses a company's migration from their in-house computation engine to Apache Spark. It describes five key issues encountered during the migration process: 1) difficulty adapting to Spark's low-level RDD API, 2) limitations of DataSource predicates, 3) incomplete Spark SQL functionality, 4) performance issues with round trips between Spark and other systems, and 5) OutOfMemory errors due to large result sizes. Lessons learned include being aware of new Spark features and data formats, and designing architectures and data structures to minimize data movement between systems.
A Walk Through the Kimball ETL Subsystems with Oracle Data Integration - Coll...Michael Rainey
Big Data integration is an excellent feature in the Oracle Data Integration product suite (Oracle Data Integrator, GoldenGate, & Enterprise Data Quality). But not all analytics require big data technologies, such as labor cost, revenue, or expense reporting. Ralph Kimball, an original architect of the dimensional model in data warehousing, spent much of his career working to build an enterprise data warehouse methodology that can meet these reporting needs. His book, "The Data Warehouse ETL Toolkit", is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories; Extracting, Cleaning & Conforming, Delivering, and Managing, describing how the Oracle Data Integration products are perfectly suited for the Kimball approach.
Presented at Collaborate16 in Las Vegas.
Build a simple data lake on AWS using a combination of services, including AWS Glue Data Catalog, AWS Glue Crawlers, AWS Glue Jobs, AWS Glue Studio, Amazon Athena, Amazon Relational Database Service (Amazon RDS), and Amazon S3.
Link to the blog post and video: https://garystafford.medium.com/building-a-simple-data-lake-on-aws-df21ca092e32
The document discusses building data lakes on AWS. It describes how data lakes extend the traditional data warehouse approach by allowing storage of both structured and unstructured data at massive scales. Amazon S3 provides durable, available, scalable, and easy-to-use storage for the data lake. AWS Glue crawls data to create a data catalog and can automate ETL processes. Amazon Athena and Amazon EMR enable interactive analysis and big data processing through SQL and Spark. The data lake architecture on AWS supports a variety of analytical use cases.
Doug Bateman, a principal data engineering instructor at Databricks, presented on how to build a Lakehouse architecture. He began by introducing himself and his background. He then discussed the goals of describing key Lakehouse features, explaining how Delta Lake enables it, and developing a sample Lakehouse using Databricks. The key aspects of a Lakehouse are that it supports diverse data types and workloads while enabling using BI tools directly on source data. Delta Lake provides reliability, consistency, and performance through its ACID transactions, automatic file consolidation, and integration with Spark. Bateman concluded with a demo of creating a Lakehouse.
knowIT is a collaborative semantic wiki used by Johnson & Johnson to map their IT systems, applications, servers and stakeholders. It aims to capture knowledge about these informatics systems, their relationships and components to answer questions, facilitate knowledge sharing and enable self-service. The wiki uses Semantic MediaWiki and has grown to include systems portfolio management, configuration management and other features to increase IT systems knowledge across the organization.
Data Con LA 2020
Description
In this session, I introduce the Amazon Redshift lake house architecture which enables you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights. With a lake house architecture, you can store data in open file formats in your Amazon S3 data lake.
Speaker
Antje Barth, Amazon Web Services, Sr. Developer Advocate, AI and Machine Learning
The document discusses machine learning and artificial intelligence applications inside and outside of Snowflake's cloud data warehouse. It provides an overview of Snowflake and its architecture. It then discusses how machine learning can be implemented directly in the database using SQL, user-defined functions, and stored procedures. However, it notes that pure coding is not suitable for all users and that automated machine learning outside the database may be preferable to enable more business analysts and power users. It provides an example of using Amazon Forecast for time series forecasting and integrating it with Snowflake.
INFORMATICA ONLINE TRAINING BY QUONTRA SOLUTIONS WITH PLACEMENT ASSISTANCE
We offer online IT training with placements, project assistance in different platforms with real time industry consultants to provide quality training for all it professionals, corporate clients and students etc. Special features by Quontra Solutions are Extensive Training will be in both Informatica Online Training and Placement. We help you in resume preparation and conducting Mock Interviews.
Emphasis is given on important topics which are essential and mostly used in real time projects. Quontra Solutions is an Online Training Leader when it comes to high-end effective and efficient I.T Training. We have always been and still are focusing on the key aspects which are providing utmost effective and competent training to both students and professionals who are eager to enrich their technical skills.
Training Features at Quontra Solutions:
We believe that online training has to be measured by three major aspects viz., Quality, Content and Relationship with the Trainer and Student. Not only our online training classes are important but apart from that the material which we provide are in tune with the latest IT training standards, so a student has not to worry at all whether the training imparted is outdated or latest.
Course content:
• Basics of data warehousing concepts
• Power center components
• Informatica concepts and overview
• Sources
• Targets
• Transformations
• Advanced Informatica concepts
Please Visit us for the Demo Classes, we have regular batches and weekend batches.
QUONTRASOLUTIONS
204-226 Imperial Drive,Rayners Lane, Harrow-HA2 7HH
Phone : +44 (0)20 3734 1498 / 99
Email: info@quontrasolutions.co.uk
How To Leverage OBIEE Within A Big Data ArchitectureKevin McGinley
If you've invested in OBIEE and want to start exploring the use of Big Data technology, this presentation talks about how and why you might want to use OBIEE as the common visualization layer across both.
Enterprise Data World 2018 - Building Cloud Self-Service Analytical SolutionDmitry Anoshin
This session will cover building the modern Data Warehouse by migration from the traditional DW platform into the cloud, using Amazon Redshift and Cloud ETL Matillion in order to provide Self-Service BI for the business audience. This topic will cover the technical migration path of DW with PL/SQL ETL to the Amazon Redshift via Matillion ETL, with a detailed comparison of modern ETL tools. Moreover, this talk will be focusing on working backward through the process, i.e. starting from the business audience and their needs that drive changes in the old DW. Finally, this talk will cover the idea of self-service BI, and the author will share a step-by-step plan for building an efficient self-service environment using modern BI platform Tableau.
The document discusses Talend's big data solutions and sandbox. It introduces Rajan Kanitkar as a senior solutions engineer at Talend with 15 years of experience in data integration. It then summarizes Talend's big data platform and ecosystem including Hadoop, MapReduce, HDFS, Hive and more. The rest of the document describes Talend's sandbox, which provides a pre-configured virtual image with Hadoop distributions, Talend software, and data scenarios to demonstrate ingesting, transforming and delivering big data.
As a follow-on to the presentation "Building an Effective Data Warehouse Architecture", this presentation will explain exactly what Big Data is and its benefits, including use cases. We will discuss how Hadoop, the cloud and massively parallel processing (MPP) is changing the way data warehouses are being built. We will talk about hybrid architectures that combine on-premise data with data in the cloud as well as relational data and non-relational (unstructured) data. We will look at the benefits of MPP over SMP and how to integrate data from Internet of Things (IoT) devices. You will learn what a modern data warehouse should look like and how the role of a Data Lake and Hadoop fit in. In the end you will have guidance on the best solution for your data warehouse going forward.
The document discusses Oracle's new approach to business analytics and visualization. It notes that traditional corporate BI systems are viewed as inflexible and analytics are only for a privileged few. However, it argues there is still hope as analytics can provide a 10x ROI. The new approach involves visual analytics embedded in every Oracle solution across mobile, cloud, on-premises and big data to provide a single, integrated platform that allows business users to easily access, blend and scale insights from various data sources.
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...Mark Rittman
Mark Rittman, CTO of Rittman Mead, gave a keynote presentation on big data for Oracle developers and DBAs with a focus on Apache Spark, real-time analytics, and predictive analytics. He discussed how Hadoop can provide flexible, cheap storage for logs, feeds, and social data. He also explained several Hadoop processing frameworks like Apache Spark, Apache Tez, Cloudera Impala, and Apache Drill that provide faster alternatives to traditional MapReduce processing.
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...Mark Rittman
Hadoop and NoSQL platforms initially focused on Java developers and slow but massively-scalable MapReduce jobs as an alternative to high-end but limited-scale analytics RDBMS engines. Apache Hive opened-up Hadoop to non-programmers by adding a SQL query engine and relational-style metadata layered over raw HDFS storage, and since then open-source initiatives such as Hive Stinger, Cloudera Impala and Apache Drill along with proprietary solutions from closed-source vendors have extended SQL-on-Hadoop’s capabilities into areas such as low-latency ad-hoc queries, ACID-compliant transactions and schema-less data discovery – at massive scale and with compelling economics.
In this session we’ll focus on technical foundations around SQL-on-Hadoop, first reviewing the basic platform Apache Hive provides and then looking in more detail at how ad-hoc querying, ACID-compliant transactions and data discovery engines work along with more specialised underlying storage that each now work best with – and we’ll take a look to the future to see how SQL querying, data integration and analytics are likely to come together in the next five years to make Hadoop the default platform running mixed old-world/new-world analytics workloads.
New World Hadoop Architectures (& What Problems They Really Solve) for Oracle...Rittman Analytics
Most DBAs are aware something interesting is going on with big data and the Hadoop product ecosystem that underpins it, but aren't so clear about what each component in the stack does, what problem each part solves, and why those problems couldn't be solved using the old approach. We'll look at where it's all going with the advent of Spark and machine learning, what's happening with ETL, metadata and analytics on this platform, and why IaaS and data-warehousing-as-a-service will have such a big impact, sooner than you think.
Microsoft's Big Play for Big Data- Visual Studio Live! NY 2012Andrew Brust
This document discusses Microsoft's efforts to make big data technologies like Hadoop more accessible through its products. It describes Hadoop, MapReduce, HDFS, and other big data concepts. It then outlines Microsoft's project to create a Hadoop distribution that runs on Windows Server and Windows Azure, including building an ODBC driver to allow tools like Excel to query Hadoop. This will help bring big data to more business users and integrate it with Microsoft's existing BI technologies.
Big Data Developers Moscow Meetup 1 - sql on hadoopbddmoscow
This document summarizes a meetup about Big Data and SQL on Hadoop. The meetup included discussions on what Hadoop is, why SQL on Hadoop is useful, what Hive is, and introduced IBM's BigInsights software for running SQL on Hadoop with improved performance over other solutions. Key topics included HDFS file storage, MapReduce processing, Hive tables and metadata storage, and how BigInsights provides a massively parallel SQL engine instead of relying on MapReduce.
This document discusses big data analytics platforms and techniques. It describes various open-source projects like Hadoop, Spark, and Mahout that can perform analytics on large datasets. It also discusses commercial analytics platforms from vendors like SAS, Alpine, and Revolution Analytics. Spark is highlighted as gaining rapid adoption for its speed and expanding machine learning capabilities. Key questions are raised about which open-source projects and commercial offerings will emerge as leaders in their categories.
The document discusses cloud computing systems and MapReduce. It provides background on MapReduce, describing how it works and how it was inspired by functional programming concepts like map and reduce. It also discusses some limitations of MapReduce, noting that it is not designed for general-purpose parallel processing and can be inefficient for certain types of workloads. Alternative approaches like MRlite and DCell are proposed to provide more flexible and efficient distributed processing frameworks.
This document provides an overview of big data processing techniques including batch processing using MapReduce and Hive, iterative batch processing using Spark, stream processing using Apache Storm, and OLAP over big data using Dremel and Druid. It discusses techniques such as MapReduce, Hive, Spark RDDs, and Storm tuples for processing large datasets and compares small versus big data approaches. Example usages and technologies for different processing types are also outlined.
Big Data and NoSQL for Database and BI ProsAndrew Brust
This document provides an agenda and overview for a conference session on Big Data and NoSQL for database and BI professionals held from April 10-12 in Chicago, IL. The session will include an overview of big data and NoSQL technologies, then deeper dives into Hadoop, NoSQL databases like HBase, and tools like Hive, Pig, and Sqoop. There will also be demos of technologies like HDInsight, Elastic MapReduce, Impala, and running MapReduce jobs.
This document provides an overview of the Apache Spark framework. It covers Spark fundamentals including the Spark execution model using Resilient Distributed Datasets (RDDs), basic Spark programming, and common Spark libraries and use cases. Key topics include how Spark improves on MapReduce by operating in-memory and supporting general graphs through its directed acyclic graph execution model. The document also reviews Spark installation and provides examples of basic Spark programs in Scala.
Technologies for Data Analytics PlatformN Masahiro
This document discusses building a data analytics platform and summarizes various technologies that can be used. It begins by outlining reasons for analyzing data like reporting, monitoring, and exploratory analysis. It then discusses using relational databases, parallel databases, Hadoop, and columnar storage to store and process large volumes of data. Streaming technologies like Storm, Kafka, and services like Redshift, BigQuery, and Treasure Data are also summarized as options for a complete analytics platform.
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...Mark Rittman
Mark Rittman from Rittman Mead presented on Oracle Big Data Discovery. He discussed how many organizations are running big data initiatives involving loading large amounts of raw data into data lakes for analysis. Oracle Big Data Discovery provides a visual interface for exploring, analyzing, and transforming this raw data. It allows users to understand relationships in the data, perform enrichments, and prepare the data for use in tools like Oracle Business Intelligence.
The Future of Analytics, Data Integration and BI on Big Data PlatformsMark Rittman
The document discusses the future of analytics, data integration, and business intelligence (BI) on big data platforms like Hadoop. It covers how BI has evolved from old-school data warehousing to enterprise BI tools to utilizing big data platforms. New technologies like Impala, Kudu, and dataflow pipelines have made Hadoop fast and suitable for analytics. Machine learning can be used for automatic schema discovery. Emerging open-source BI tools and platforms, along with notebooks, bring new approaches to BI. Hadoop has become the default platform and future for analytics.
Philly Code Camp 2013 Mark Kromer Big Data with SQL ServerMark Kromer
These are my slides from May 2013 Philly Code Camp at Penn State Abington. I will post the samples, code and scripts on my blog here following the event this Saturday: http://www.kromerbigdata.com
This document provides an overview of architecting a first big data implementation. It defines key concepts like Hadoop, NoSQL databases, and real-time processing. It recommends asking questions about data, technology stack, and skills before starting a project. Distributed file systems, batch tools, and streaming systems like Kafka are important technologies for big data architectures. The document emphasizes moving from batch to real-time processing as a major opportunity.
Big Data Strategy for the Relational World Andrew Brust
1) Andrew Brust is the CEO of Blue Badge Insights and a big data expert who writes for ZDNet and GigaOM Research.
2) The document discusses trends in databases including the growth of NoSQL databases like MongoDB and Cassandra and Hadoop technologies.
3) It also covers topics like SQL convergence with Hadoop, in-memory databases, and recommends that organizations look at how widely database products are deployed before adopting them to avoid being locked into niche products.
The document provides an overview of big data concepts and frameworks. It discusses the dimensions of big data including volume, velocity, variety, veracity, value and variability. It then describes the traditional approach to data processing and its limitations in dealing with large, complex data. Hadoop and its core components HDFS and YARN are introduced as the solution. Spark is presented as a faster alternative to Hadoop for processing large datasets in memory. Other frameworks like Hive, Pig and Presto are also briefly mentioned.
Big Data visualization with Apache Spark and Zeppelinprajods
This presentation gives an overview of Apache Spark and explains the features of Apache Zeppelin(incubator). Zeppelin is the open source tool for data discovery, exploration and visualization. It supports REPLs for shell, SparkSQL, Spark(scala), python and angular. This presentation was made on the Big Data Day, at the Great Indian Developer Summit, Bangalore, April 2015
Similar to Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop : (20)
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...Mark Rittman
This document summarizes a presentation about adding a Hadoop-based data reservoir to an Oracle data warehouse. The presentation discusses using a data reservoir to store large amounts of raw customer data from various sources to enable 360-degree customer analysis. It describes loading and integrating the data reservoir with the data warehouse using Oracle tools and how organizations can use it for more personalized customer marketing through advanced analytics and machine learning.
What is Big Data Discovery, and how it complements traditional business anal...Mark Rittman
Data Discovery is an analysis technique that complements traditional business analytics, and enables users to combine, explore and analyse disparate datasets to spot opportunities and patterns that lie hidden within your data. Oracle Big Data discovery takes this idea and applies it to your unstructured and big data datasets, giving users a way to catalogue, join and then analyse all types of data across your organization.
In this session we'll look at Oracle Big Data Discovery and how it provides a "visual face" to your big data initiatives, and how it complements and extends the work that you currently do using business analytics tools.
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Mark Rittman
- Mark Rittman presented on deploying full OBIEE systems to Oracle Cloud. This involves migrating the data warehouse to Oracle Database Cloud Service, updating the RPD to connect to the cloud database, and uploading the RPD to Oracle BI Cloud Service. Using the wider Oracle PaaS ecosystem allows hosting a full BI platform in the cloud.
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Ar...Mark Rittman
Presentation from the Rittman Mead BI Forum 2015 masterclass, pt.2 of a two-part session that also covered creating the Discovery Lab. Goes through setting up Flume log + twitter feeds into CDH5 Hadoop using ODI12c Advanced Big Data Option, then looks at the use of OBIEE11g with Hive, Impala and Big Data SQL before finally using Oracle Big Data Discovery for faceted search and data mashup on-top of Hadoop
End to-end hadoop development using OBIEE, ODI, Oracle Big Data SQL and Oracl...Mark Rittman
This document discusses an end-to-end example of using Hadoop, OBIEE, ODI and Oracle Big Data Discovery to analyze big data from various sources. It describes ingesting website log data and Twitter data into a Hadoop cluster, processing and transforming the data using tools like Hive and Spark, and using the results for reporting in OBIEE and data discovery in Oracle Big Data Discovery. ODI is used to automate the data integration process.
OBIEE11g Seminar by Mark Rittman for OU Expert Summit, Dubai 2015Mark Rittman
Slides from a two-day OBIEE11g seminar in Dubai, February 2015, at the Oracle University Expert Summit. Covers the following topics:
1. OBIEE 11g Overview & New Features
2. Adding Exalytics and In-Memory Analytics to OBIEE 11g
3. Source Control and Concurrent Development for OBIEE
4. No Silver Bullets - OBIEE 11g Performance in the Real World
5. Oracle BI Cloud Service Overview, Tips and Techniques
6. Moving to Oracle BI Applications 11g + ODI
7. Oracle Essbase and Oracle BI EE 11g Integration Tips and Techniques
8. OBIEE 11g and Predictive Analytics, Hadoop & Big Data
BIWA2015 - Bringing Oracle Big Data SQL to OBIEE and ODIMark Rittman
The document discusses Oracle's Big Data SQL, which brings Oracle SQL capabilities to Hadoop data stored in Hive tables. It allows querying Hive data using standard SQL from Oracle Database and viewing Hive metadata in Oracle data dictionary tables. Big Data SQL leverages the Hive metastore and uses direct reads and SmartScan to optimize queries against HDFS and Hive data. This provides a unified SQL interface and optimized query processing for both Oracle and Hadoop data.
UKOUG Tech'14 Super Sunday : Deep-Dive into Big Data ETL with ODI12cMark Rittman
This document discusses using Hadoop and Hive for ETL work. It provides an overview of using Hadoop for distributed processing and storage of large datasets. It describes how Hive provides a SQL interface for querying data stored in Hadoop and how various Apache tools can be used to load, transform and store data in Hadoop. Examples of using Hive to view table metadata and run queries are also presented.
Part 1 - Introduction to Hadoop and Big Data Technologies for Oracle BI & DW ...Mark Rittman
Delivered as a one-day seminar at the SIOUG and HROUG Oracle User Group Conferences, October 2014
In this presentation we cover some key Hadoop concepts including HDFS, MapReduce, Hive and NoSQL/HBase, with the focus on Oracle Big Data Appliance and Cloudera Distribution including Hadoop. We explain how data is stored on a Hadoop system and the high-level ways it is accessed and analysed, and outline Oracle’s products in this area including the Big Data Connectors, Oracle Big Data SQL, and Oracle Business Intelligence (OBI) and Oracle Data Integrator (ODI).
Part 4 - Hadoop Data Output and Reporting using OBIEE11gMark Rittman
Delivered as a one-day seminar at the SIOUG and HROUG Oracle User Group Conferences, October 2014.
Once insights and analysis have been produced within your Hadoop cluster by analysts and technical staff, it’s usually the case that you want to share the output with a wider audience in the organisation. Oracle Business Intelligence has connectivity to Hadoop through Apache Hive compatibility, and other Oracle tools such as Oracle Big Data Discovery and Big Data SQL can be used to visualise and publish Hadoop data. In this final session we’ll look at what’s involved in connecting these tools to your Hadoop environment, and also consider where data is optimally located when large amounts of Hadoop data need to be analysed alongside more traditional data warehouse datasets
Part 2 - Hadoop Data Loading using Hadoop Tools and ODI12cMark Rittman
Delivered as a one-day seminar at the SIOUG and HROUG Oracle User Group Conferences, October 2014.
There are many ways to ingest (load) data into a Hadoop cluster, from file copying using the Hadoop Filesystem (FS) shell through to real-time streaming using technologies such as Flume and Hadoop streaming. In this session we’ll take a high-level look at the data ingestion options for Hadoop, and then show how Oracle Data Integrator and Oracle GoldenGate leverage these technologies to load and process data within your Hadoop cluster. We’ll also consider the updated Oracle Information Management Reference Architecture and look at the best places to land and process your enterprise data, using Hadoop’s schema-on-read approach to hold low-value, low-density raw data, and then use the concept of a “data factory” to load and process your data into more traditional Oracle relational storage, where we hold high-density, high-value data.
2. info@rittmanmead.com www.rittmanmead.com @rittmanmead 2
•Mark Rittman, Co-Founder of Rittman Mead
‣Oracle ACE Director, specialising in Oracle BI&DW
‣14 Years Experience with Oracle Technology
‣Regular columnist for Oracle Magazine
•Author of two Oracle Press Oracle BI books
‣Oracle Business Intelligence Developers Guide
‣Oracle Exalytics Revealed
‣Writer for Rittman Mead Blog :
http://www.rittmanmead.com/blog
•Email : mark.rittman@rittmanmead.com
•Twitter : @markrittman
About the Speaker
•Started back in 1997 on a bank Oracle DW project
•Our tools were Oracle 7.3.4, SQL*Plus, PL/SQL
and shell scripts
•Went on to use Oracle Developer/2000 and Designer/2000
•Our initial users queried the DW using SQL*Plus
•And later on, we rolled-out Discoverer/2000 to everyone else
•And life was fun…
15+ Years in Oracle BI and Data Warehousing
•Over time, this data warehouse architecture developed
•Added Oracle Warehouse Builder to
automate and model the DW build
•Oracle 9i Application Server (yay!)
to deliver reports and web portals
•Data Mining and OLAP in the database
•Oracle 9i for in-database ETL (and RAC)
•Data was typically loaded from
Oracle RDBMS and EBS
•It was turtles Oracle all the way down…
The Oracle-Centric DW Architecture
•Many customers and organisations are now running initiatives around “big data”
•Some are IT-led and are looking for cost-savings around data warehouse storage + ETL
•Others are “skunkworks” projects in the marketing department that are now scaling-up
•Projects now emerging from pilot exercises
•And design patterns starting to emerge
Many Organisations are Running Big Data Initiatives
Highly Scalable (and Affordable) Cluster Computing
•Enterprise High-End RDBMSs such as Oracle can scale into the petabytes, using clustering
‣Sharded databases (e.g. Netezza) can scale further but with complexity / single workload trade-offs
•Hadoop was designed from the ground up for massive horizontal scalability - using cheap hardware
•Anticipates hardware failure and makes multiple copies of data as protection
•The more nodes you add, the more stable it becomes
•And at a fraction of the cost of traditional
RDBMS platforms
•Store and analyze huge volumes of structured and unstructured data
•In the past, we had to throw away the detail
•No need to define a data model during ingest
•Supports multiple, flexible schemas
•Separation of storage from compute engine
•Allows multiple query engines and frameworks
to work on the same raw datasets
Store Everything Forever - And Process in Many Ways
[Diagram: Hadoop data lake — raw data sources (webserver log files (txt), social media logs (JSON), DB archives (CSV), sensor data (XML), spatial & graph (XML, txt), IoT logs (JSON, txt), chat transcripts (txt), DB transactions (CSV, XML), blogs and articles (txt, HTML)) land as raw data, then are processed into a NoSQL key-value store DB, tabular data (Hive tables), aggregates (Impala tables) and a NoSQL document store DB]
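The "no need to define a data model during ingest" point above is usually called schema-on-read: raw records are stored untouched, and a schema is applied only when a query runs. A minimal Python sketch of the idea, using made-up event records (not data from this deck):

```python
import json

# Raw events are stored exactly as they arrived at ingest time;
# no schema was enforced, so fields vary from record to record.
raw_lines = [
    '{"user": "alice", "page": "/home", "ts": 1}',
    '{"user": "bob", "ts": 2}',                       # missing "page" field
    '{"user": "carol", "page": "/blog", "extra": 9}', # extra field ignored
]

def project(line, columns):
    """Apply a schema at query time: keep only the requested columns,
    defaulting to None when a record lacks a field."""
    record = json.loads(line)
    return {col: record.get(col) for col in columns}

rows = [project(line, ["user", "page"]) for line in raw_lines]
print(rows)
```

The same raw files could later be re-read with a different column list, which is what lets multiple engines and schemas share one copy of the data.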
•Typical implementation of Hadoop and big data in an analytic context is the “data lake”
•Additional data storage platform with cheap storage, flexible schema support + compute
•Data lands in the data lake or reservoir in raw form, then minimally processed
•Data then accessed directly by “data scientists”, or processed further into DW
In the Context of BI & Analytics : The Data Reservoir
[Diagram: data reservoir architecture — operational data (transactions, customer master data) and unstructured data (voice + chat transcripts) arrive via file-based, stream-based and ETL-based integration into a data reservoir on the Hadoop platform, holding raw customer data (stored in the original format, usually files, such as SS7, ASN.1, JSON) and mapped customer data (data sets produced by mapping and transforming the raw data); a data factory moves data through to business intelligence tools and marketing/sales applications (machine learning models, segments), alongside a safe and secure discovery & development labs environment holding data sets, samples, models and programs]
•Oracle Engineered system for big data processing and analysis
•Start with Oracle Big Data Appliance Starter Rack - expand up to 18 nodes per rack
•Cluster racks together for horizontal scale-out using enterprise-quality infrastructure
Oracle Big Data Appliance
Starter Rack + Expansion
• Cloudera CDH + Oracle software
• 18 high-spec Hadoop nodes with
InfiniBand switches for internal Hadoop
traffic, optimised for network throughput
• 1 Cisco management switch
• Single place for support for H/W + S/W
[Diagram: enriched customer profile, modeling and scoring workloads deployed on Oracle Big Data Appliance, connected over InfiniBand]
•Programming model for processing large data sets in parallel on a cluster
•Not specific to a particular language, but usually written in Java
•Inspired by the map and reduce functions commonly used in functional programming
‣Map() performs filtering and sorting
‣Reduce() aggregates the output of mappers
‣and a Shuffle() step to redistribute output by keys
•Resolved several complications of distributed computing:
‣Allows unlimited computations on unlimited data
‣Map and reduce functions can be easily distributed
‣Originated at Google; Hadoop began as Yahoo's open-source
implementation of MapReduce, and the two are closely identified
MapReduce - The Original Big Data Query Framework
[Diagram: three Mapper tasks (filter, project) feed two Reducer tasks (aggregate); output is written as one HDFS file per reducer, in a directory]
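The map/shuffle/reduce flow described above can be sketched as a toy in-memory word count — this models the programming style only, not Hadoop's distributed execution:

```python
from collections import defaultdict

def map_fn(line):
    # Map(): filter/project each input record into (key, value) pairs
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle(): redistribute mapper output so all values for a given
    # key end up at the same reducer
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_fn(key, values):
    # Reduce(): aggregate the values collected for each key
    return (key, sum(values))

lines = ["big data is big", "data is data"]
mapped = [pair for line in lines for pair in map_fn(line)]
result = dict(reduce_fn(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 3, 'is': 2}
```

Because map and reduce are pure functions over independent key groups, the framework can run any number of mappers and reducers in parallel — which is the scalability property the slide describes.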
•Originally developed at Facebook, now foundational within the Hadoop ecosystem
•Allows users to query Hadoop data using SQL-like language
•Tabular metadata layer that overlays files, can interpret semi-structured data (e.g. JSON)
•Generates MapReduce code to return required data
•Extensible through SerDes and Storage Handlers
•JDBC and ODBC drivers for most platforms/tools
•Perfect for set-based access + batch ETL work
Apache Hive : SQL Metadata + Engine over Hadoop
•Data integration tools such as Oracle Data Integrator can load and process Hadoop data
•BI tools such as Oracle Business Intelligence 12c can report on Hadoop data
•Generally use MapReduce and Hive to access data
‣ODBC and JDBC access to Hive tabular data
‣Allows Hadoop unstructured/semi-structured
data on HDFS to be accessed like RDBMS
Hive Provides a SQL Interface for BI + ETL Tools
Access Hive directly, or extract using ODI12c
for structured OBIEE dashboard analysis
What pages are people visiting?
Who is referring to us on Twitter?
What content has the most reach?
•MapReduce’s great innovation was to break processing down into distributed jobs
•Jobs that have no functional dependency on each other, only upstream tasks
•Provides a framework that is infinitely scalable and very fault tolerant
•Hadoop handled job scheduling and resource management
‣All MapReduce code had to do was provide the “map” and “reduce” functions
‣Automatic distributed processing
‣Slow but extremely powerful
Hadoop 1.0 and MapReduce
•A typical Hive or Pig script compiles down into multiple MapReduce jobs
•Each job stages its intermediate results to disk
•Safe, but slow - write to disk, spin-up separate JVMs for each job
MapReduce - Scales By Writing Intermediate Results to Disk
SELECT
LOWER(hashtags.text),
COUNT(*) AS total_count
FROM (
SELECT * FROM tweets WHERE regexp_extract(created_at,"(2015)*",1) = "2015"
) tweets
LATERAL VIEW EXPLODE(entities.hashtags) t1 AS hashtags
GROUP BY LOWER(hashtags.text)
ORDER BY total_count DESC
LIMIT 15
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 5.34 sec HDFS Read: 10952994 HDFS Write: 5239 SUCCESS
Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 2.1 sec HDFS Read: 9983 HDFS Write: 164 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 440 msec
OK
1
2
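The Hive query above — year filter, LATERAL VIEW EXPLODE over the nested hashtag array, lowercasing, counting and a top-N sort — can be mirrored in plain Python to show what those two MapReduce stages compute. The tweet records here are invented for illustration:

```python
from collections import Counter

# Hypothetical tweet records shaped like the Hive "tweets" table
tweets = [
    {"created_at": "2015-06-01",
     "entities": {"hashtags": [{"text": "BigData"}, {"text": "Hadoop"}]}},
    {"created_at": "2015-06-02",
     "entities": {"hashtags": [{"text": "hadoop"}]}},
    {"created_at": "2014-12-31",
     "entities": {"hashtags": [{"text": "OldNews"}]}},
]

counts = Counter(
    tag["text"].lower()                        # LOWER(hashtags.text)
    for tweet in tweets
    if tweet["created_at"].startswith("2015")  # the regexp year filter
    for tag in tweet["entities"]["hashtags"]   # LATERAL VIEW EXPLODE
)
top_15 = counts.most_common(15)                # ORDER BY total_count DESC LIMIT 15
print(top_15)  # [('hadoop', 2), ('bigdata', 1)]
```

In Hive the grouped count and the final ordering become the two separate MapReduce jobs shown in the log, with the intermediate result written to HDFS between them.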
•MapReduce 2 (MR2) splits the functionality of the JobTracker
by separating resource management and job scheduling/monitoring
•Introduces YARN (Yet Another Resource Negotiator)
•Permits processing frameworks other than MapReduce to run on the cluster
‣For example, Apache Spark
•Maintains backwards compatibility with MR1
•Introduced with CDH5+
MapReduce 2 and YARN
[Diagram: client applications submit work to a central YARN Resource Manager, which allocates it across Node Managers running on each cluster node]
•Runs on top of YARN, provides a faster execution engine than MapReduce for Hive, Pig etc
•Models processing as an entire data flow graph (DAG), rather than separate job steps
‣DAG (Directed Acyclic Graph) is a new programming style for distributed systems
‣Dataflow steps pass data between them as streams, rather than writing/reading from disk
•Supports in-memory computation, enables Hive on Tez (Stinger) and Pig on Tez
•The in-memory / Hive v2 route favoured by Hortonworks
Apache Tez
[Diagram: a Tez DAG takes two input datasets through chained Map() and Reduce() vertices to a single output, streaming data between steps rather than staging each job's results to disk]
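The contrast between staged MapReduce jobs and a pipelined DAG can be illustrated with Python generators — a conceptual sketch only, with arithmetic steps standing in for real dataflow vertices:

```python
# Staged style: each step materialises its full output before the next
# step starts, like MapReduce writing intermediate results to HDFS
# between jobs.
def staged(records):
    step1 = [r * 2 for r in records]   # whole list built ("written to disk")
    step2 = [r + 1 for r in step1]     # next job re-reads everything
    return sum(step2)

# Pipelined DAG style: steps are chained lazily and stream one record
# at a time, like Tez passing data between vertices without staging.
def pipelined(records):
    step1 = (r * 2 for r in records)   # generator: nothing materialised
    step2 = (r + 1 for r in step1)
    return sum(step2)

print(staged(range(5)), pipelined(range(5)))  # 25 25
```

Both produce the same answer; the pipelined version simply avoids holding (or persisting) each intermediate result in full, which is where Tez gets its speed-up over job-at-a-time MapReduce.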
•Cloudera’s answer to Hive query response time issues
•MPP SQL query engine running on Hadoop, bypasses MapReduce for
direct data access
•Mostly in-memory, but spills to disk if required
•Uses Hive metastore to access Hive table metadata
•Similar SQL dialect to Hive - not as rich though and no support for Hive
SerDes, storage handlers etc
Cloudera Impala - Fast, MPP-style Access to Hadoop Data
•A replacement for Hive, but uses Hive concepts and
data dictionary (metastore)
•MPP (Massively Parallel Processing) query engine
that runs within Hadoop
‣Uses same file formats, security,
resource management as Hadoop
•Processes queries in-memory
•Accesses standard HDFS file data
•Option to use Apache AVRO, RCFile,
LZO or Parquet (column-store)
•Designed for interactive, real-time
SQL-like access to Hadoop
How Impala Works
[Diagram: BI Server / Presentation Server connecting via the Cloudera Impala ODBC Driver to Impala daemons running alongside HDFS on every node of a multi-node Hadoop cluster]
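The MPP pattern the slide describes is scatter/gather: a query fragment runs on every node against its local data, and a coordinator merges the partial results. A minimal sketch, with hypothetical partitions standing in for each daemon's local HDFS data (sized so the toy COUNT(*) matches the 343-row table shown later):

```python
# Illustrative MPP scatter/gather for COUNT(*) (not Impala's actual internals).
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data partitions, one per node's local storage.
partitions = [list(range(100)), list(range(143)), list(range(100))]

def scan_partition(rows):
    # Each node scans only its local partition, in memory.
    return sum(1 for _ in rows)

# Scatter: run the scan fragment on all nodes in parallel.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(scan_partition, partitions))

# Gather: the coordinator merges partial counts into the final answer.
total = sum(partials)
print(total)  # 343
```

Bypassing MapReduce job startup and shuffle machinery in favour of long-running daemons doing this scatter/gather directly is what gives Impala its interactive response times.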
35.
•Log into Impala Shell, run INVALIDATE METADATA command to refresh Impala table list
•Run SHOW TABLES Impala SQL command to view tables available
•Run COUNT(*) on main ACCESS_PER_POST table to see typical response time
Enabling Hive Tables for Impala
[oracle@bigdatalite ~]$ impala-shell
Starting Impala Shell without Kerberos authentication
[bigdatalite.localdomain:21000] > invalidate metadata;
Query: invalidate metadata
Fetched 0 row(s) in 2.18s
[bigdatalite.localdomain:21000] > show tables;
Query: show tables
+-----------------------------------+
| name |
+-----------------------------------+
| access_per_post |
| access_per_post_cat_author |
| … |
| posts |
+-----------------------------------+
Fetched 45 row(s) in 0.15s
[bigdatalite.localdomain:21000] > select count(*)
from access_per_post;
Query: select count(*) from access_per_post
+----------+
| count(*) |
+----------+
| 343 |
+----------+
Fetched 1 row(s) in 2.76s
36.
•Significant improvement over Hive response time
•Now makes Hadoop suitable for ad-hoc querying
Significantly-Improved Ad-Hoc Query Response Time vs Hive
Simple Two-Table Join against Hive Data Only:
Logical Query Summary Stats: Elapsed time 50, Response time 49, Compilation time 0 (seconds)
vs
Simple Two-Table Join against Impala Data Only:
Logical Query Summary Stats: Elapsed time 2, Response time 1, Compilation time 0 (seconds)
37.
•Beginners usually store data in HDFS using text file formats (CSV) but these have limitations
•Apache AVRO often used for general-purpose processing
‣Splitability, schema evolution, in-built metadata, support for block compression
•Parquet now commonly used with Impala due to column-orientated storage
‣Mirrors work in RDBMS world around column-store
‣Only return (project) the columns you require across a wide table
Apache Parquet - Column-Orientated Storage for Analytics
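The projection benefit described above can be shown with plain Python structures. This is only a sketch of the storage idea, not Parquet's file format: in a column store, each column is held contiguously, so projecting one column out of a wide table never touches the other fields.

```python
# Row-store vs column-store projection (illustrative data).
rows = [
    {"id": 1, "name": "a", "payload": "x" * 50},
    {"id": 2, "name": "b", "payload": "y" * 50},
]

# Row store: every whole row is visited even though we only want one column.
names_row_store = [r["name"] for r in rows]

# Column store: pivot to one contiguous list per column up front...
columns = {k: [r[k] for r in rows] for k in rows[0]}
# ...so a projection reads just the one column it needs.
names_col_store = columns["name"]

assert names_row_store == names_col_store == ["a", "b"]
```

On disk the saving is I/O: a query over two columns of a 200-column Parquet table reads roughly 1% of the data a CSV scan would.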
38.
•But Parquet (and HDFS) have significant limitation for real-time analytics applications
‣Append-only orientation, focus on column-store
makes streaming ingestion harder
•Cloudera Kudu aims to combine best of HDFS + HBase
‣Real-time analytics-optimised
‣Supports updates to data
‣Fast ingestion of data
‣Accessed using SQL-style tables
and get/put/update/delete API
Cloudera Kudu - Combining Best of HBase and Column-Store
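The get/put/update/delete API style mentioned above can be sketched as a keyed, mutable table. This is a hypothetical illustration of the access pattern, not Kudu's client API: the point is that rows can be updated and deleted in place, which an append-only HDFS file cannot do.

```python
# Illustrative mutable keyed table in the Kudu access-pattern style.
class MutableTable:
    def __init__(self):
        self._rows = {}

    def put(self, key, row):
        self._rows[key] = dict(row)

    def get(self, key):
        return self._rows.get(key)

    def update(self, key, **changes):
        # In-place update: no file rewrite, unlike append-only HDFS storage.
        self._rows[key].update(changes)

    def delete(self, key):
        del self._rows[key]

t = MutableTable()
t.put(1, {"metric": "temp", "value": 21.5})
t.update(1, value=22.0)
print(t.get(1)["value"])  # 22.0
t.delete(1)
```

Kudu pairs this mutation API with column-orientated storage, which is how it claims both fast streaming ingestion and analytics-friendly scans.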
39.
•Part of Oracle Big Data 4.0 (BDA-only)
‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends Oracle Data Dictionary to cover Hive
•Extends Oracle SQL and SmartScan to Hadoop
•Extends Oracle Security Model over Hadoop
‣Fine-grained access control
‣Data redaction, data masking
‣Uses fast c-based readers where possible
(vs. Hive MapReduce generation)
‣Map Hadoop parallelism to Oracle PQ
‣Big Data SQL engine works on top of YARN
‣Like Spark, Tez, MR2
Oracle Big Data SQL
[Diagram: SQL queries spanning an Exadata Database Server and a Hadoop cluster, with Oracle Big Data SQL applying SmartScan on both the Exadata Storage Servers and the Hadoop nodes]
44.
•Apache Drill is another SQL-on-Hadoop project, one that focuses on schema-free data discovery
•Inspired by Google Dremel, innovation is querying raw data with schema optional
•Automatically infers and detects schema from semi-structured datasets and NoSQL DBs
•Join across different silos of data e.g. JSON records, Hive tables and HBase database
•Aimed at different use-cases than Hive -
low-latency queries, discovery
(think Endeca vs OBIEE)
Introducing Apache Drill - “We Don’t Need No Roads”
45.
•Most modern datasource formats embed their schema in the data (“schema-on-read”)
•Apache Drill makes these as easy to join to traditional datasets as “point me at the data”
•Cuts out unnecessary work in defining Hive schemas for data that’s self-describing
•Supports joining across files,
databases, NoSQL etc
Self-Describing Data - Parquet, AVRO, JSON etc
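Schema-on-read can be sketched in a few lines: instead of declaring a Hive schema up front, infer one by scanning the records themselves, tolerating fields that appear only in some records. The records and field names below are invented for illustration.

```python
# Illustrative schema inference over self-describing JSON records.
import json

records = [
    '{"name": "hello", "num1": 1}',
    '{"name": "world", "num1": 2, "num2": 3}',  # extra optional field
]

# Union the fields seen across all records, keeping the first-seen type.
schema = {}
for line in records:
    for field, value in json.loads(line).items():
        schema.setdefault(field, type(value).__name__)

print(schema)  # {'name': 'str', 'num1': 'int', 'num2': 'int'}
```

This is the "point me at the data" step Drill automates: the inferred schema is then immediately joinable against Hive tables, files and NoSQL sources.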
46.
•Files can exist either on the local filesystem, or on HDFS
•Connection to directory or file defined in storage configuration
•Can work with CSV, TXT, TSV etc
•First row of file can provide schema (column names)
Apache Drill and Text Files
SELECT * FROM dfs.`/tmp/csv_with_header.csv2`;
+-------+------+------+------+
| name | num1 | num2 | num3 |
+-------+------+------+------+
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
+-------+------+------+------+
7 rows selected (0.12 seconds)
SELECT * FROM dfs.`/tmp/csv_no_header.csv`;
+------------------------+
| columns |
+------------------------+
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
+------------------------+
7 rows selected (0.112 seconds)
47.
•JSON (JavaScript Object Notation) documents are often used for data interchange
•Exports from Twitter and other consumer services
•Web service responses and other B2B interfaces
•A more lightweight form of XML that is “self-describing”
•Handles evolving schemas, and optional attributes
•Drill treats each document as a row, and has features to
•Flatten nested data (extract elements from arrays)
•Generate key/value pairs for loosely structured data
Apache Drill and JSON Documents
use dfs.iot;
show files;
select in_reply_to_user_id, text from `all_tweets.json`
limit 5;
+---------------------+------+
| in_reply_to_user_id | text |
+---------------------+------+
| null | BI Forum 2013 in Brighton has now sold-out |
| null | "Football has become a numbers game |
| null | Just bought Lyndsay Wise’s Book |
| null | An Oracle BI "Blast from the Past" |
| 14716125 | Dilbert on Agile Programming |
+---------------------+------+
5 rows selected (0.229 seconds)
select name, flatten(fillings) as f
from dfs.users.`/donuts.json`
where f.cal < 300;
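The FLATTEN query above can be mimicked in plain Python to show what it does: each element of a nested array becomes its own output row, paired with the parent record's fields. The donut data here is a made-up stand-in for the `donuts.json` file referenced in the query.

```python
# Illustrative re-implementation of FLATTEN + WHERE over nested JSON records.
donuts = [
    {"name": "Cake", "fillings": [{"name": "sprinkles", "cal": 100},
                                  {"name": "frosting", "cal": 400}]},
]

# FLATTEN(fillings): one output row per array element, keeping parent fields.
flattened = [
    {"name": d["name"], "f": f}
    for d in donuts
    for f in d["fillings"]
]

# WHERE f.cal < 300 filters on the flattened element.
low_cal = [r for r in flattened if r["f"]["cal"] < 300]
print([r["f"]["name"] for r in low_cal])  # ['sprinkles']
```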
48.
•Drill can connect to Hive to make use of metastore (incl. multiple Hive metastores)
•NoSQL databases (HBase etc)
•Parquet files (native storage format - columnar + self describing)
Apache Drill and Hive, HBase, Parquet Sources etc
USE hbase;
SELECT * FROM students;
+-------------+-----------------------+-----------------------------------------------------+
| row_key | account | address |
+-------------+-----------------------+------------------------------------------------------+
| [B@e6d9eb7 | {"name":"QWxpY2U="} | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY="} |
| [B@2823a2b4 | {"name":"Qm9i"} | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w"} |
| [B@3b8eec02 | {"name":"RnJhbms="} | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA=="} |
| [B@242895da | {"name":"TWFyeQ=="} | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ=="} |
+-------------+-----------------------+------------------------------------------------------+
SELECT firstname,lastname FROM
hiveremote.`customers` limit 10;
+------------+------------+
| firstname | lastname |
+------------+------------+
| Essie | Vaill |
| Cruz | Roudabush |
| Billie | Tinnes |
| Zackary | Mockus |
| Rosemarie | Fifield |
| Bernard | Laboy |
| Marianne | Earman |
+------------+------------+
SELECT * FROM dfs.`iot_demo/geodata/region.parquet`;
+--------------+--------------+-----------------------+
| R_REGIONKEY | R_NAME | R_COMMENT |
+--------------+--------------+-----------------------+
| 0 | AFRICA | lar deposits. blithe |
| 1 | AMERICA | hs use ironic, even |
| 2 | ASIA | ges. thinly even pin |
| 3 | EUROPE | ly final courts cajo |
| 4 | MIDDLE EAST | uickly special accou |
+--------------+--------------+-----------------------+
49.
•Drill developed for real-time, ad-hoc data exploration with schema discovery on-the-fly
•Individual analysts exploring new datasets, leveraging corporate metadata/data to help
•Hive is more about large-scale, centrally curated set-based big data access
•Drill models data conceptually as JSON, vs. Hive’s tabular approach
•Drill introspects schema from whatever it connects to, vs. formal modeling in Hive
Apache Drill vs. Apache Hive
Interactive Queries (Data Discovery, Tableau/VA): 100ms - 3 mins - Apache Drill
Reporting Queries (Canned Reports, OBIEE): 3 mins - 20 mins - Apache Hive
ETL & Batch Queries (ODI, Scripting, Informatica): 20 mins - hours - Apache Hive
53.
•Another DAG execution engine running on YARN
•More mature than TEZ, with richer API and more vendor support
•Uses concept of an RDD (Resilient Distributed Dataset)
‣RDDs like tables or Pig relations, but can be cached in-memory
‣Great for in-memory transformations, or iterative/cyclic processes
•Spark jobs comprise a DAG of tasks operating on RDDs
•Access through Scala, Python or Java APIs
•Related projects include
‣Spark SQL
‣Spark Streaming
Apache Spark
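The RDD idea above can be sketched without Spark: a dataset object records transformations lazily and only evaluates them when an action such as count() is called, optionally caching the computed result for reuse. This is a hypothetical mini-implementation for illustration, not Spark's API; the log lines mirror the access-log example on the next slide.

```python
# Illustrative lazy, cacheable dataset in the RDD style.
class MiniRDD:
    def __init__(self, data, ops=()):
        self._data, self._ops, self._cached = data, ops, None

    def filter(self, pred):
        # Transformation: just record the operation, do no work yet.
        return MiniRDD(self._data, self._ops + (("filter", pred),))

    def cache(self):
        return self  # real Spark marks the RDD for in-memory reuse

    def _compute(self):
        rows = self._data
        for op, fn in self._ops:
            if op == "filter":
                rows = [r for r in rows if fn(r)]
        return rows

    def count(self):
        # Action: triggers computation; result is kept for repeat calls.
        if self._cached is None:
            self._cached = self._compute()
        return len(self._cached)

logfile = MiniRDD(["GET /biapps11g/", "GET /other/", "GET /biapps11g/x"])
biapps = logfile.filter(lambda line: "/biapps11g/" in line)
print(biapps.count())  # 2
```

Deferring work like this is what lets Spark plan a whole DAG of transformations at once and keep intermediate results in memory across iterations.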
54.
•Native support for multiple languages
with identical APIs
‣Python - prototyping, data wrangling
‣Scala - functional programming features
‣Java - lower-level, application integration
•Use of closures, iterations, and other
common language constructs to minimize code
•Integrated support for distributed +
functional programming
•Unified API for batch and streaming
Rich Developer Support + Wide Developer Ecosystem
scala> val logfile = sc.textFile("logs/access_log")
14/05/12 21:18:59 INFO MemoryStore: ensureFreeSpace(77353)
called with curMem=234759, maxMem=309225062
14/05/12 21:18:59 INFO MemoryStore: Block broadcast_2
stored as values to memory (estimated size 75.5 KB, free 294.6 MB)
logfile: org.apache.spark.rdd.RDD[String] =
MappedRDD[31] at textFile at <console>:15
scala> logfile.count()
14/05/12 21:19:06 INFO FileInputFormat: Total input paths to process : 1
14/05/12 21:19:06 INFO SparkContext: Starting job: count at <console>:1
...
14/05/12 21:19:06 INFO SparkContext: Job finished:
count at <console>:18, took 0.192536694 s
res7: Long = 154563
scala> val logfile = sc.textFile("logs/access_log").cache
scala> val biapps11g = logfile.filter(line => line.contains("/biapps11g/"))
biapps11g: org.apache.spark.rdd.RDD[String] = FilteredRDD[34] at filter at <console>:17
scala> biapps11g.count()
...
14/05/12 21:28:28 INFO SparkContext: Job finished: count at <console>:20, took 0.387960876 s
res9: Long = 403
55.
•Spark SQL, and Data Frames, allow RDDs in Spark to be processed using SQL queries
•Bring in and federate additional data from JDBC sources
•Load, read and save data in Hive, Parquet and other structured tabular formats
Spark SQL - Adding SQL Processing to Apache Spark
val accessLogsFilteredDF = accessLogs
.filter( r => ! r.agent.matches(".*(spider|robot|bot|slurp).*"))
.filter( r => ! r.endpoint.matches(".*(wp-content|wp-admin).*")).toDF()
.registerTempTable("accessLogsFiltered")
val topTenPostsLast24Hour = sqlContext.sql("SELECT p.POST_TITLE, p.POST_AUTHOR, COUNT(*)
as total
FROM accessLogsFiltered a
JOIN posts p ON a.endpoint = p.POST_SLUG
GROUP BY p.POST_TITLE, p.POST_AUTHOR
ORDER BY total DESC LIMIT 10 ")
// Persist top ten table for this window to HDFS as parquet file
topTenPostsLast24Hour.save("/user/oracle/rm_logs_batch_output/topTenPostsLast24Hour.parquet"
, "parquet", SaveMode.Overwrite)