Joe Caserta gave the presentation "The Data Lake - Balancing Data Governance and Innovation" at DAMA NY's one day mini-conference on May 19th. Speakers covered emerging trends in Data Governance, especially around Big Data.
For more information on Caserta Concepts, visit our website at http://casertaconcepts.com/.
How do you balance the need for structured, rule-based governance to assure enterprise data quality with the imperative to innovate in order to stay relevant and competitive in today's business marketplace?
At the recent CDO Summit in NYC, a range of C-Level Executives across a variety of industries came to hear Joe Caserta, president of Caserta Concepts, put it all in perspective.
Joe talked about the challenges of "data sprawl" and the paradigm shift underway in the evolving big data and data-driven world.
For more information or to contact us, visit http://casertaconcepts.com/
The Big Data Journey – How Companies Adopt Hadoop - StampedeCon 2016
Hadoop adoption is a journey. Depending on the business, the process can take weeks, months, or even years. Hadoop is a transformative technology, so the challenges have less to do with the technology itself and more to do with how a company adapts to a new way of thinking about data. Companies that have run application-driven businesses for the last two decades face real challenges in suddenly becoming data driven. They need to think less in terms of single, siloed servers and more about “the cluster”.
The cluster becomes the center of data gravity, drawing all the applications to it. Companies, especially their IT organizations, embark on a process of understanding how to maintain and operationalize this environment and provide the data lake as a service to the business. They must empower the business by providing the resources for the use cases that drive both renovation and innovation. IT needs to adopt new technologies and new methodologies that enable these solutions. This is not technology for technology's sake: Hadoop is a data platform servicing and enabling all facets of an organization. Building out and expanding this platform is the ongoing journey, as word gets out to the business that they can have any data they want, any time. Success is what drives the journey.
The length of the journey varies from company to company. Sometimes the challenges stem from the size of the company, but more often they come from the difficulty of unseating established IT processes adopted without much forethought over the past two decades. Companies must also sift through the noise to find the solutions that bring real value, and that takes time. As the platform matures and becomes mainstream, more and more companies are finding it easier to adopt Hadoop. Hundreds of companies have already taken many steps; hundreds more have taken the first. As the wave of successful Hadoop adoption continues, more companies will see the value in starting the journey and paving the way for others.
Enterprise Search: Addressing the First Problem of Big Data & Analytics - Sta... (StampedeCon)
Enterprise search aims to identify and enable content from multiple enterprise sources to be indexed, searched, and displayed. It faces challenges like unifying diverse data sources, identifying relevant information in real-time, and providing action-oriented insights. Machine learning techniques can help by automatically classifying and clustering data, extracting entities and sentiments, and personalizing search results. Case studies demonstrate how enterprise search has helped organizations in healthcare, telecommunications, finance, and sports improve productivity, customer service, and data-driven insights.
This document discusses balancing data governance and innovation. It describes how traditional data analytics methods can inhibit innovation by requiring lengthy processes to analyze new data. The document advocates adopting a data lake approach using tools like Hadoop and Spark to allow for faster ingestion and analysis of diverse data types. It also discusses challenges around simultaneously enabling innovation through a data lake while still maintaining proper data governance, security, and quality. Achieving this balance is key for organizations to leverage data for competitive advantage.
Data Lakes are early in the Gartner hype cycle, but companies are getting value from their cloud-based data lake deployments. Break through the confusion between data lakes and data warehouses and seek out the most appropriate use cases for your big data lakes.
Joe Caserta, President at Caserta Concepts presented at the 3rd Annual Enterprise DATAVERSITY conference. The emphasis of this year's agenda is on the key strategies and architecture necessary to create a successful, modern data analytics organization.
Joe Caserta presented What Data Do You Have and Where is it?
For more information on the services offered by Caserta Concepts, visit our website at http://casertaconcepts.com/.
Caserta Concepts, Datameer and Microsoft shared their combined knowledge and a use case on big data, the cloud and deep analytics. Attendees learned how a global leader in the test, measurement and control systems market reduced their big data implementations from 18 months to just a few.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focus on how to extend and optimize Hadoop based analytics, highlighting the advantages and practical applications of deploying on the cloud for enhanced performance, scalability and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft, Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
Moving Past Infrastructure Limitations Presented by MediaMath
This presentation was given at a Big Data Warehousing Meetup with Caserta Concepts, MediaMath and Qubole. You can learn more about the event here: http://www.meetup.com/Big-Data-Warehousing/events/228372516/
Event description:
At Caserta Concepts, we are firm believers in big data thriving on the cloud. The instant-on, nearly unlimited storage and computing capabilities of AWS have made it the de facto solution for a full spectrum of organizations needing to process large amounts of data.
What's more, an ecosystem of value-added platforms has emerged to further ease and democratize the implementation of cloud based solutions. Qubole has developed a great platform for easily deploying and managing ephemeral and long-lived Hadoop and Spark clusters on AWS.
Moving Past Infrastructure Limitations: Data Warehousing at MediaMath
Over the past year and a half, MediaMath has undertaken a “data liberation” effort to leave their big-box, monolithic data warehouse behind. In this talk, Rory Sawyer, Software Engineer at MediaMath, will describe how this effort transformed MediaMath’s legacy architecture and legacy mindset, which imposed harsh inefficiencies on data sharing and utilization. The current mindset removes these inefficiencies and allows them to say “yes” to more projects and ideas.
Rory will also demo how MediaMath uses Amazon Web Services and Qubole so that infrastructure is no longer a limiting factor on what and how users query. This combination allows them to scale their resources up and down as needed while bridging different data sources and execution engines. Using and extending MediaMath’s data warehousing is no longer a privileged activity but an ability that every employee and client has.
The document provides an introduction and agenda for a presentation on data science and big data. It discusses Joe Caserta's background and experience in data warehousing, business intelligence, and data science. It outlines Caserta Concepts' focus on big data solutions, data warehousing, and industries like ecommerce, financial services, and healthcare. The agenda covers topics like governing big data for data science, introducing the data pyramid, what data scientists do, and standards for data science projects.
The 20th annual Enterprise Data World (EDW) Conference took place last month, April 17-21, in San Diego. It is recognized as the most comprehensive educational conference on data management in the world.
Joe Caserta was a featured presenter. His session, “Evolving from the Data Warehouse to Big Data Analytics - the Emerging Role of the Data Lake,” highlighted the challenges of and steps needed to become a data-driven organization.
Joe also participated in two panel discussions during the show:
• "Data Lake or Data Warehouse?"
• "Big Data Investments Have Been Made, But What's Next?"
For more information on Caserta Concepts, visit our website at http://casertaconcepts.com/.
The Business Data Lake is a new approach to information management, analytics and reporting that better matches the culture of business and better enables organizations to truly leverage the value of their information.
This document discusses best practices for using Hadoop as an enterprise data hub. It provides an overview of how big data is driving new analytical workloads and the need for deeper customer insights. It discusses challenges with analyzing new sources of structured, unstructured and multi-structured data. It introduces the concept of a Hadoop enterprise data hub and data refinery to simplify access to new insights from big data. Key components of the data hub include a data reservoir to capture raw data from various sources, a data refinery to cleanse and transform the data, and publishing high value insights to data warehouses and other systems.
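The reservoir-refinery-publish flow described above can be sketched in miniature. This is a toy illustration only; all function and field names (land_in_reservoir, refine, publish_insights, customer_id) are assumptions for the example, not part of any vendor's product.

```python
# Minimal sketch of a reservoir -> refinery -> publish flow.
# All names here are illustrative, not any vendor's API.

def land_in_reservoir(records):
    """Capture raw records as-is; the reservoir keeps full fidelity."""
    return list(records)

def refine(raw_records):
    """Cleanse and transform: drop malformed rows, normalize types."""
    refined = []
    for rec in raw_records:
        if "customer_id" not in rec:
            continue  # malformed; a real refinery would quarantine it
        refined.append({"customer_id": rec["customer_id"],
                        "spend": float(rec.get("spend", 0))})
    return refined

def publish_insights(refined):
    """Aggregate high-value metrics for the downstream warehouse."""
    return {"customers": len(refined),
            "total_spend": sum(r["spend"] for r in refined)}

raw = [{"customer_id": 1, "spend": "20.5"},
       {"bad": "row"},
       {"customer_id": 2, "spend": "4.5"}]
insights = publish_insights(refine(land_in_reservoir(raw)))
print(insights)  # {'customers': 2, 'total_spend': 25.0}
```

The point of the separation is that raw data stays untouched in the reservoir, so the refinery can be rerun or corrected without re-ingesting from source systems.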
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016 (Caserta)
Caserta Concepts Founder and President, Joe Caserta, gave this presentation at Strata + Hadoop World 2016 in New York, NY. His session covers path-to-purchase analytics using a data lake and Spark.
For more information, visit http://casertaconcepts.com/
1. Enterprise Data Management (EDM) is the ability of an organization to precisely define, easily integrate and effectively retrieve data for both internal applications and external communication. It involves managing various types of data across the enterprise.
2. EDM includes areas like master data management, reference data management, metadata management, data governance, data quality, data analytics, data privacy, data integration, and data architecture.
3. The document discusses definitions and concepts for each of these areas, including roles, processes, and technologies involved. It provides overviews of fundamental concepts, principles, dimensions and processes for data quality, data governance, data privacy and other areas.
How to Become an Analytics Ready Insurer - with Informatica and Hortonworks
Whether you are an insurer, reinsurer, broker or insurance service provider, everything you do is based on analytics. From underwriting to claims to agency and marketing, the smartest and most streamlined business operations at insurance companies are driven by advanced and intelligent analytics. But is your data ready? Are you an “Analytics Ready” insurer? Great analytics starts with great data management. Join us as industry experts from Informatica and Hortonworks share industry trends and best practices to show you how to become an “Analytics Ready” insurer.
Hadoop 2.0: YARN to Further Optimize Data Processing (Hortonworks)
Data is exponentially increasing in both types and volumes, creating opportunities for businesses. Watch this video and learn from three Big Data experts: John Kreisa, VP Strategic Marketing at Hortonworks, Imad Birouty, Director of Technical Product Marketing at Teradata and John Haddad, Senior Director of Product Marketing at Informatica.
Multiple systems are needed to exploit the variety and volume of data sources, including a flexible data repository. Learn more about:
- Apache Hadoop 2 and YARN
- Data Lakes
- Intelligent data management layers needed to manage metadata and usage patterns as well as track consumption across these data platforms.
A modern, flexible approach to Hadoop implementation incorporating innovations from HP Haven (DataWorks Summit)
Jeff Veis, Vice President, HP Software Big Data
Gilles Noisette, Master Solution Architect, HP EMEA Big Data CoE
The Future of Data Management: The Enterprise Data Hub (Cloudera, Inc.)
The document discusses the enterprise data hub (EDH) as a new approach for data management. The EDH allows organizations to bring applications to data rather than copying data to applications. It provides a full-fidelity active compliance archive, accelerates time to insights through scale, unlocks agility and innovation, consolidates data silos for a 360-degree view, and enables converged analytics. The EDH is implemented using open source, scalable, and cost-effective tools from Cloudera including Hadoop, Impala, and Cloudera Manager.
Data Lake, Virtual Database, or Data Hub - How to Choose? (DATAVERSITY)
Data integration is just plain hard and there is no magic bullet. That said, three new data integration techniques do ameliorate the misery, making silo-busting possible, if not trivial. The three approaches – data lakes, virtual databases (aka federated databases), and data hubs – are a boon to organizations big enough to have separate systems, separate lines of business, and redundant acquired or COTS data stores. Each approach has its place, but how do you make the right decision about which data silo integration approach to choose and when?
This webinar describes how you can use the key concepts of data Movement, Harmonization, and Indexing to determine what you are giving up or investing in, and make the best decision for your project.
Big Data 2.0: YARN Enablement for Distributed ETL & SQL with Hadoop (Caserta)
In our most recent Big Data Warehousing Meetup, we learned about transitioning from Big Data 1.0 (Hadoop 1.x and its nascent technologies) to Big Data 2.0 (Hadoop 2.x with YARN), which enables distributed ETL, SQL and analytics solutions. Caserta Concepts Chief Architect Elliott Cordo and an Actian engineer covered the complete data value chain of an enterprise-ready platform, including data connectivity, collection, preparation, optimization and analytics with end-user access.
For more information on our services or upcoming events, please visit our website at http://www.casertaconcepts.com/.
Reference architecture for Internet of Things (Sujee Maniyam)
What kind of data infrastructure is needed to support the Internet of Things?
This talk presents a reference architecture.
We are building this architecture as an open source project. See: bit.ly/iotxyz
Best Practices For Building and Operating A Managed Data Lake - StampedeCon 2016
The document discusses using a data lake approach with EMC Isilon storage to address various business use cases. It describes how the solution provides shared storage for multiple workloads through multi-protocol support, enables data protection and isolation of client data, and allows testing applications across Hadoop distributions through a common platform. Examples are given of how this approach supports an enterprise data hub, data warehouse offloading, data integration, and enrichment services.
The document discusses creating a modern data architecture using a data lake approach. It describes the key components of a data lake including different zones for landing raw data, refining it into trusted datasets, and using sandboxes. It also summarizes challenges of data lakes and how an integrated data lake management platform can help with ingestion, governance, security, and enabling self-service analytics. Finally, it briefly discusses considerations for implementing cloud-based and hybrid data lakes.
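The zone layout described above (landing raw data, refining it into trusted datasets, and sandboxes) is often expressed as a path convention in storage. The sketch below is an assumption for illustration, not a standard layout:

```python
# Illustrative path convention for data lake zones; the directory
# layout below is an assumption for the example, not a standard.

ZONES = ("landing", "refined", "sandbox")

def zone_path(zone, source, dataset, day):
    """Return the storage path for a dataset in a given zone."""
    if zone not in ZONES:
        raise ValueError(f"unknown zone: {zone}")
    return f"/datalake/{zone}/{source}/{dataset}/dt={day}"

print(zone_path("landing", "crm", "contacts", "2016-05-19"))
# /datalake/landing/crm/contacts/dt=2016-05-19
```

A fixed convention like this is what lets an ingestion and governance platform register, secure, and audit datasets mechanically instead of per-project.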
Using Hadoop for Cognitive Analytics discusses using Hadoop and external data sources for cognitive analytics. The document outlines solution architectures that integrate external and customer-specific metrics to improve decision making. Microservices are used for data ingestion and curation from various sources into Hadoop for storage and analytics. This allows combining business metrics with hyperlocal data at precise locations to provide insights.
Roadmap to data driven advice - Michael Goedhart (BigDataExpo)
1. RBI aims to provide data-driven advisory services by building on its existing solid foundation of innovative technology, optimized processes, and independent teams.
2. The first steps involve a proof of concept for an initial data product in 2017, followed by an evolving roadmap and expanding team capabilities in analytics.
3. For 2018, the roadmap prioritizes adding customer value through enhancing existing products with analytics capabilities and proving the value of advisory notifications.
Loggly - Case Study - Stanley Black & Decker Transforms Work with Support fro... (SolarWinds Loggly)
With Loggly, Stanley Black & Decker:
• Provides team with troubleshooting capabilities for mobile and IoT applications running on traditional and serverless architectures
• Supports performance monitoring, security, and PCI compliance needs
• Enables quick scalability as new innovations are launched
The document provides an overview of IBM's BigInsights product. It discusses how BigInsights can help businesses gain insights from large, complex datasets through features like built-in text analytics, SQL support, spreadsheet-style analysis, and accelerators for domain-specific analytics like social media. The document also summarizes capabilities of BigInsights like Big SQL, Big Sheets, Big R, and its text analytics engine that allow businesses to explore, analyze, and model large datasets.
Tuning Solr and its Pipeline for Logs: Presented by Rafał Kuć & Radu Gheorghe... (Lucidworks)
The document summarizes key points from a presentation on optimizing Solr and log pipelines for time-series data. The presentation covered using time-based Solr collections that rotate based on size, tiering hot and cold clusters, tuning OS and Solr settings, parsing logs, buffering pipelines, and shipping logs using protocols like UDP, TCP, and Kafka. The overall conclusions were that tuning segments per tier and max merged segment size improved indexing throughput, and that simple, reliable pipelines like Filebeat to Kafka or rsyslog over UNIX sockets generally work best.
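The idea of time-based collections that rotate on size can be modeled abstractly. The sketch below is plain Python illustrating the rotation logic, not the Solr API; the names and threshold are assumptions:

```python
# Plain-Python model of size-based rotation for time-series collections:
# index into the newest collection and start a fresh one once it passes
# a size threshold. Names and the threshold are illustrative.

MAX_DOCS = 3  # tiny threshold, for illustration only

collections = [{"name": "logs_0001", "docs": 0}]

def index_doc(doc):
    current = collections[-1]
    if current["docs"] >= MAX_DOCS:  # rotate: the full collection is sealed
        current = {"name": f"logs_{len(collections) + 1:04d}", "docs": 0}
        collections.append(current)
    current["docs"] += 1

for n in range(7):
    index_doc({"msg": f"log line {n}"})

print([(c["name"], c["docs"]) for c in collections])
# [('logs_0001', 3), ('logs_0002', 3), ('logs_0003', 1)]
```

Rotating on size rather than purely on time keeps every collection's index roughly the same size, which is what makes retention (drop the oldest collection) and hot/cold tiering cheap.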
Kazuhiro Kosaka is an engineer at CyberAgent, Inc., where he has worked since 2009. He introduces Quartz Composer, a node-based visual programming language on macOS for processing and rendering graphical data. Quartz Composer can be used for data visualization, though its community is shrinking and its limited network features require writing custom parsers. Alternatives mentioned include Unity and Amazon Lumberyard.
Bork is the Dutch distributor for the award-winning DINO software that is sold all over Europe. Bork strives to be a technology leader in the Netherlands.
The following slides summarize and curate most of the knowledge and patterns gathered to date on Node error handling.
Without a clear understanding and strategy, Node error handling can become the Achilles heel of your app – its unique single-threaded execution model and loose typing raise challenges that don’t exist in other frameworks. Node by itself doesn’t provide patterns for critical paths like where to put error-handling code; even worse, it suggests patterns that were rejected by the community, like passing errors in callbacks.
It covers topics like promises, generators, callbacks, unhandled exceptions, APM products, testing errors, operational errors vs. development errors, and much more.
Presentation of strategy and use cases around IoT (Internet of Things) solutions at the Oracle Cloud Café on April 12, 2016, with Eric de Smedt, presales director, and Jean-Marc Hui Bon Hoa, IoT expert, with the participation of Accenture and the start-ups GreenMe and Wicross.
The document discusses recent changes in JavaScript development trends since the mid-2010s, including functional programming principles like immutable variables, no side effects, higher-order functions, and monads. It also covers modern front-end development patterns like MVVM using Knockout.js for declarative bindings and templating. CommonJS modules and asynchronous I/O are discussed in the context of server-side JavaScript.
How to Collect and Process Data Under GDPR? (Piwik PRO)
Learn the key differences between Data Controllers, Data Processors, and Data Subjects. Find out how to safely collect and analyze data while respecting Data Subject rights and adhering to the General Data Protection Regulation (GDPR).
Created by experts from Piwik PRO
Big data analytics can provide businesses with new insights from large volumes of structured and unstructured data. It allows analyzing customer sentiment, detecting medical conditions, predicting weather patterns, assessing risk, and identifying threats. To leverage big data, businesses need to capture data from various sources, analyze it in real-time, and turn it into insights to predict customer, competitive, and market behavior. Deploying big data analytics competencies consistently across an enterprise correlates with higher financial performance and competitive advantage long-term.
The document discusses the importance of open data and how it can drive the reinvention of government. It explores how open data and data analytics can simplify government structure, guide evidence-based public policies and processes, and fuel transformation through innovation and new services.
Workshop with Joe Caserta, President of Caserta Concepts, at Data Summit 2015 in NYC.
Data science, the ability to sift through massive amounts of data to discover hidden patterns and predict future trends and actions, may be considered the "sexiest" job of the 21st century, but it requires an understanding of many elements of data analytics. This workshop introduced basic concepts, such as SQL and NoSQL, MapReduce, Hadoop, data mining, machine learning, and data visualization.
For notes and exercises from this workshop, click here: https://github.com/Caserta-Concepts/ds-workshop.
For more information, visit our website at www.casertaconcepts.com
Defining and Applying Data Governance in Today’s Business Environment (Caserta)
This document summarizes a presentation by Joe Caserta on defining and applying data governance in today's business environment. It discusses the importance of data governance for big data, the challenges of governing big data due to its volume, variety, velocity and veracity. It also provides recommendations on establishing a big data governance framework and addressing specific aspects of big data governance like metadata, information lifecycle management, master data management, data quality monitoring and security.
Architecting Data For The Modern Enterprise - Data Summit 2017, Closing Keynote (Caserta)
The “Big Data era” has ushered in an avalanche of new technologies and approaches for delivering information and insights to business users. What is the role of the cloud in your analytical environment? How can you make your migration as seamless as possible? This closing keynote, delivered by Joe Caserta, a prominent consultant who has helped many global enterprises adopt Big Data, provided the audience with the inside scoop needed to supplement data warehousing environments with data intelligence—the amalgamation of Big Data and business intelligence.
This presentation was given as the closing keynote at DBTA's annual Data Summit in NYC.
Data governance course - part 1.
Data Governance is the orchestration of people, process and technology to enable an organization to leverage data as an enterprise asset.
The core objectives of a governance program are:
Guide information management decision-making
Ensure information is consistently defined and well understood
Increase the use and trust of data as an enterprise asset
Objectives of this presentation :
Introduction to data governance
• Why discuss data governance today: the enterprise challenges
Building a New Platform for Customer Analytics (Caserta)
Caserta Concepts and Databricks partner up to bring you this insightful webinar on how a business can choose among the emerging big data technologies to figure out which one best fits its needs.
Architecting for Big Data: Trends, Tips, and Deployment Options (Caserta)
Joe Caserta, President at Caserta Concepts addressed the challenges of Business Intelligence in the Big Data world at the Third Annual Great Lakes BI Summit in Detroit, MI on Thursday, March 26. His talk "Architecting for Big Data: Trends, Tips and Deployment Options," focused on how to supplement your data warehousing and business intelligence environments with big data technologies.
For more information on this presentation or the services offered by Caserta Concepts, visit our website: http://casertaconcepts.com/.
Five Attributes to a Successful Big Data Strategy (Perficient, Inc.)
The veracity, variety and sheer volume of data are increasing exponentially. With Hadoop and NoSQL solutions becoming commonplace, there are many technical options for managing and extracting value from this data. Many companies create labs to experiment with Big Data solutions, only to see them later become IT playgrounds or unstructured dumping grounds.
To help avoid these pitfalls, companies with successful Big Data projects approach challenges by formulating a strategy that assures real business value is derived from their Big Data investments. In a Perficient poll, 73% of companies stated they are in the early-evaluation stage of finding solutions to their Big Data problems and are only beginning to create their strategy.
Join us for a webinar featuring thought-provoking best practices used by successful companies to quickly realize business value from their Big Data investments. You'll learn:
The top five steps to increased business value
What the top companies are doing in Big Data that you need to know
Next steps to lay the ground work for a successful Big Data strategy
Agile BI: How to Deliver More Value in Less Time (Perficient, Inc.)
Learn how to:
Construct a BI and analytical environment that provides the critical functionality that enables your customers to provide timely answers, supporting modern agile business
Leverage agile delivery concepts to deliver value in days rather than in months
Build a support organization that enables your users to create increased value from your company’s information assets
This document discusses what makes an effective data team. It begins with introductions from Alex Dean, CEO of Snowplow Analytics. It then discusses how Snowplow helps companies collect and analyze customer event data. The document outlines a hierarchy of needs for a data team, beginning with ensuring data is available and ending with data scientists doing industry-leading work. It provides advice on each level of the hierarchy to help data teams become more effective.
MT101 Dell OCIO: Delivering data and analytics in real time (Dell EMC World)
Today’s business operations increasingly rely on sophisticated integration of data streaming across the enterprise. This requires an analytics ecosystem that is highly current and highly available. This session explores the infrastructure and methods Dell IT used for keeping the complex flows, integration processes, BI, and analytics operating 24x7.
There is an overwhelming list of expectations – and challenges – in this new, emerging and evolving role. In this presentation, given at the 2016 CDO Summit, Joe Caserta focuses on:
- Defining the CDO title
- Outlining the skills that enhance chances for success
- Listing all the many things the company thinks you are responsible for
- Providing an overview of the core technologies you need to be familiar with and will serve to ultimately support your success
- Presenting a concise list of the most pressing challenges
- Sharing insights and arguments for how best to meet the challenges and succeed in your new role
When the business needs intelligence (15Oct2014) (Dipti Patil)
When an organization needs to make important decisions, business intelligence can help by analyzing internal and external data to generate knowledge. Business intelligence enables fact-based decisions by aggregating, enriching, and presenting data from sources like ERP systems and databases. The goals of a business intelligence implementation are to capture data from across the business to create a unified view, produce an integrated data warehouse to improve decision making, and enable ongoing analysis of data rather than just collecting it.
Getting Data Quality Right
High quality data is important for organizational success, but achieving good data quality requires a programmatic approach. Data quality challenges are often the root cause of IT and business failures. To improve, organizations need to take a systems thinking approach, understand data issues over time, and not underestimate the role of culture. Developing repeatable data quality capabilities and expertise can help organizations identify problems, determine causes, and prevent future issues. Effective data quality engineering provides a framework for utilizing data to support business strategy and goals.
Integrating the CDO Role Into Your Organization; Managing the Disruption (MIT...) (Caserta)
The role of the Chief Data Officer (CDO) has become integral to the evolution needed to turn a wisdom-driven company into an analytics-driven company. With Data Governance at the core of your responsibility, moving the innovation meter is a global challenge among CDOs. Specifically the CDO must:
• Provide a single point of accountability for data initiatives and issues
• Innovate ways to use existing data and evangelize a data vision for the organization
• Support & enforce data governance policies via outreach, training & tools
• Work with IT to develop/maintain an enterprise data repository
• Set standards for analytical reporting and generate data insights through data science
In this session, Joe Caserta addresses real-world CDO challenges and shares techniques to overcome them, manage corporate disruption, and achieve success.
This document provides an overview of data quality and the fundamentals of ensuring data quality in an organization. It discusses the importance of data quality and outlines the key steps in the data quality pipeline including extract, clean, conform, and deliver. It also covers determining the system of record, cleaning data from multiple sources, prioritizing data quality goals, different types of data quality enforcement, and tracking and monitoring data quality failures. The document emphasizes that achieving high quality data requires planning, well-defined processes, and continuous monitoring.
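The extract-clean-conform-deliver pipeline with failure tracking described above can be sketched as a toy example. The field names and the single validation rule are illustrative assumptions, not the document's actual rules:

```python
# Toy extract -> clean -> conform -> deliver pipeline with a failure
# log for monitoring. Field names and the validation rule are
# illustrative assumptions.

failures = []  # tracked so quality problems can be monitored over time

def clean(rows):
    """Screen out rows that fail a basic quality rule."""
    ok = []
    for i, row in enumerate(rows):
        if row.get("email") and "@" in row["email"]:
            ok.append(row)
        else:
            failures.append((i, "invalid email"))
    return ok

def conform(rows):
    """Force one standard shape regardless of source system."""
    return [{"email": r["email"].strip().lower()} for r in rows]

extracted = [{"email": "A@Example.com "}, {"email": "not-an-email"}]
delivered = conform(clean(extracted))
print(delivered)   # [{'email': 'a@example.com'}]
print(failures)    # [(1, 'invalid email')]
```

Recording failures rather than silently dropping rows is what makes continuous monitoring possible: the failure log is itself a dataset to trend and alert on.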
Ashley Ohmann - Data Governance Final 011315
This presentation discusses enterprise data governance with Tableau. It defines data governance as the processes that formally manage important data assets. The goals of data governance include establishing standards, processes, compliance, security, and metrics. Good data governance benefits an organization by improving accuracy and enabling better decisions with less waste. The presentation provides examples of how one organization improved data governance through stakeholder involvement, establishing metrics, building a data warehouse, and implementing Tableau for analytics. Key goals discussed are building trust, communicating validity, enabling access, managing metadata, provisioning rights, and maintaining compliance.
Data Lake Architecture – Modern Strategies & Approaches (DATAVERSITY)
Data Lake or Data Swamp? By now, we’ve likely all heard the comparison. Data Lake architectures have the opportunity to provide the ability to integrate vast amounts of disparate data across the organization for strategic business analytic value. But without a proper architecture and metadata management strategy in place, a Data Lake can quickly devolve into a swamp of information that is difficult to understand. This webinar will offer practical strategies to architect and manage your Data Lake in a way that optimizes its success.
The Data Lake and Getting Businesses the Big Data Insights They Need (Dunn Solutions Group)
Do terms like "Data Lake" confuse you? You’re not alone. With all of the technology buzzwords flying around today, it can become a task to keep up with and clearly understand each of them. However, a data lake is definitely something to dedicate the time to understand. Leveraging data lake technology, companies are finally able to keep all of their disparate information and streams of data in one secure location ready for consumption at any time – this includes structured, unstructured, and semi-structured data. For more information on our Big Data Consulting Services, don’t hesitate to visit us online at: http://bit.ly/2fvV5rR
This document discusses the opportunities and challenges of big data. It defines big data as huge volumes of structured and unstructured data from various sources that require new tools to analyze and extract business insights. Big data provides both statistical and predictive views to help businesses make smarter decisions. While big data allows companies to integrate diverse data sources and gain real-time insights, challenges include processing large and complex data volumes and ensuring data quality, privacy and management. The document outlines the big data lifecycle and how analytics can be used descriptively, predictively and prescriptively.
All Together Now: A Recipe for Successful Data Governance (Inside Analysis)
The Briefing Room with David Loshin and Phasic Systems
Slides from the Live Webcast on July 10, 2012
Getting disparate groups of professionals to agree on business terminology can take forever, especially when big dollars or major issues are at stake. Many data governance programs languish indefinitely because of simple hang-ups. But a new approach has recently achieved monumental results for the United States Navy. The detailed process has since been codified and combined with a NoSQL technology that enables even the most complex data models and definitions to be distilled into simple, functional data flows.
Check out this episode of The Briefing Room to hear Analyst David Loshin of Knowledge Integrity explain why effective Data Governance requires cooperation. Loshin will be briefed by Geoffrey Malafsky of Phasic Systems who will tout his company's proprietary protocol for extracting, defining and managing critical information assets and processes. He'll explain how their approach allows everyone to be "correct" in their definitions, without causing data quality or performance issues in associated information systems. And he'll explain how their Corporate NoSQL engine enables real-time harmonization of definitions and dimensions.
Visit us at: http://www.insideanalysis.com
Similar to The Data Lake - Balancing Data Governance and Innovation (20)
Using Machine Learning & Spark to Power Data-Driven Marketing (Caserta)
Joe Caserta provides a statistically-driven model to understanding the customer path to purchase, which combines online, offline and third-party data sources. He shows how customer data is fed to machine learning, which assigns weighted credit to customer interactions in order to give insight to what marketing activities truly matter. This presentation is from Caserta's February 2018 Big Data Warehousing Meetup co-hosted with Databricks.
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C... (Caserta)
Joe Caserta explores the world of analytics, tech, and AI to paint a picture of where business is headed. This presentation is from the CDAO Exchange in Miami 2018.
Creating a DevOps Practice for Analytics – Strata Data, September 28, 2017 (Caserta)
Over the past eight or nine years, applying DevOps practices to various areas of technology within business has grown in popularity and produced demonstrable results. These principles are particularly fruitful when applied to a data analytics environment. Bob Eilbacher explains how to implement a strong DevOps practice for data analysis, starting with the necessary cultural changes that must be made at the executive level and ending with an overview of potential DevOps toolchains. Bob also outlines why DevOps and disruption management go hand in hand.
Topics include:
- The benefits of a DevOps approach, with an emphasis on improving quality and efficiency of data analytics
- Why the push for a DevOps practice needs to come from the C-suite and how it can be integrated into all levels of business
- An overview of the best tools for developers, data analysts, and everyone in between, based on the business’s existing data ecosystem
- The challenges that come with transforming into an analytics-driven company and how to overcome them
- Practical use cases from Caserta clients
This presentation was originally given by Bob at the 2017 Strata Data Conference in New York City.
General Data Protection Regulation – BDW Meetup, October 11th, 2017 (Caserta)
Caserta Presentation:
General Data Protection Regulation (GDPR) is a business and technical challenge for companies worldwide - and the deadlines are coming fast! American institutions that do business in the EU or have customers from the EU will have their data practices affected. With this in mind, Caserta – joined by Waterline Data, Salt Recruiting, and Squire Patton Boggs – hosted a BDW Meetup on the GDPR, which is perhaps the most controversial data legislation that has been passed to date.
Joe Caserta, Founding President, Caserta, spoke on the basics of the GDPR, how it will impact data privacy around the world, and some techniques geared towards compliance.
Introduction to Data Science (Data Summit, 2017) (Caserta)
This document summarizes an introduction to data science presentation by Joe Caserta and Bill Walrond of Caserta Concepts. Caserta Concepts is an internationally recognized data innovation and engineering consulting firm. The agenda covers why data science is important, challenges of working with big data, governing big data, the data pyramid, what data scientists do, standards for data science, and a demonstration of data analysis. Popular machine learning algorithms like regression, decision trees, k-means clustering and collaborative filtering are also discussed.
Looker Data Modeling in the Age of Cloud – BDW Meetup May 2, 2017 (Caserta)
This document discusses the evolution of data analytics and modeling. It describes three waves: the first with slow hardware and manual entry; the second with faster PCs but tool explosions; and the third wave now with big data, cloud warehouses, and data-driven tools like Looker and BigQuery. It argues that in this current wave, having a flexible yet performant data model built on SQL in a warehouse, and using a language like LookML to define relationships and translate questions, allows gaining reliable answers with agility without worrying about low-level syntax or tools.
Caserta Concepts, Datameer and Microsoft shared their combined knowledge and a use case on big data, the cloud and deep analytics. Attendees learned how a global leader in the test, measurement and control systems market reduced their big data implementations from 18 months to just a few.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focus on how to extend and optimize Hadoop based analytics, highlighting the advantages and practical applications of deploying on the cloud for enhanced performance, scalability and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft - Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
This document discusses appropriate and inappropriate use cases for Apache Spark based on the type of data and workload. It provides examples of good uses, such as batch processing, ETL, and machine learning/data science. It also gives examples of bad uses, such as random access queries, frequent incremental updates, and low latency stream processing. The document recommends using a database instead of Spark for random access, updates, and serving live queries. It suggests using message queues instead of files for low latency stream processing. The goal is to help users understand how to properly leverage Spark for big data workloads.
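The guidance above reduces to a simple routing rule: Spark for batch, ETL, and machine learning; a database for random access, frequent incremental updates, and live queries; a message queue for low-latency streams. A small helper function, with workload labels invented purely for illustration, captures that rule:

```python
# Hypothetical helper encoding the guidance above: route a workload to
# Spark, a database, or a message queue based on its access pattern.
# The workload labels are illustrative, not a real API.

def recommend_engine(workload):
    if workload in {"batch", "etl", "machine_learning"}:
        return "spark"           # large scans, transformations, model training
    if workload in {"random_access", "incremental_update", "live_query"}:
        return "database"        # point lookups and frequent small updates
    if workload == "low_latency_stream":
        return "message_queue"   # sub-second event delivery beats file-based input
    return "unknown"

print(recommend_engine("etl"))            # spark
print(recommend_engine("random_access"))  # database
```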
During this Big Data Warehousing Meetup, Caserta Concepts and Databricks addressed the number one operational and analytic goal of nearly every organization today – to have complete view of every customer. Customer Data Integration (CDI) must be implemented to cleanse and match customer identities within and across various data systems. CDI has been a long-standing data engineering challenge, not just one of logic and complexity but also of performance and scalability.
The speakers brought together best practice techniques with Apache Spark to achieve complete CDI.
Speakers:
Joe Caserta, President, Caserta Concepts
Kevin Rasmussen, Big Data Engineer, Caserta Concepts
Vida Ha, Lead Solutions Engineer, Databricks
The sessions covered a series of problems that are adequately solved with Apache Spark, as well as those that require additional technologies to implement correctly. Topics included:
· Building an end-to-end CDI pipeline in Apache Spark
· What works, what doesn’t, and how our use of Spark evolves
· Innovation with Spark including methods for customer matching from statistical patterns, geolocation, and behavior
· Using PySpark and Python’s rich module ecosystem for data cleansing, standardization, and matching
· Using GraphX for matching and scalable clustering
· Analyzing large data files with Spark
· Using Spark for ETL on large datasets
· Applying Machine Learning & Data Science to large datasets
· Connecting BI/Visualization tools to Apache Spark to analyze large datasets internally
The speakers also touched on data governance, on-boarding new data rapidly, how to balance rapid agility and time to market with critical decision support and customer interaction. They also shared examples of problems that Apache Spark is not optimized for.
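The heart of the CDI problem described above is matching customer identities across systems after standardization. The sketch below shows the idea at toy scale with Python's standard library; in the pipeline the speakers describe, the same comparison would run as a distributed PySpark job (with statistical, geolocation, and behavioral signals rather than plain string similarity). Record shapes, names, and the 0.85 threshold are invented for illustration.

```python
# Toy customer-matching sketch: standardize records from two systems,
# then pair records whose names are similar enough. difflib's string
# similarity stands in for a real matching model here.
from difflib import SequenceMatcher

def standardize(record):
    # Uppercase, strip punctuation noise, collapse whitespace.
    name = " ".join(record["name"].upper().replace(".", " ").split())
    return {**record, "name": name}

def match(crm, billing, threshold=0.85):
    matches = []
    for a in map(standardize, crm):
        for b in map(standardize, billing):
            score = SequenceMatcher(None, a["name"], b["name"]).ratio()
            if score >= threshold:
                matches.append((a["id"], b["id"], round(score, 2)))
    return matches

crm = [{"id": "c1", "name": "Jane Q. Smith"}]
billing = [{"id": "b7", "name": "JANE Q SMITH"}, {"id": "b9", "name": "John Doe"}]
print(match(crm, billing))  # [('c1', 'b7', 1.0)]
```

Note the all-pairs loop is exactly what does not scale, which is why the meetup pairs this logic with Spark (and GraphX for clustering the resulting match graph).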
For more information on the services offered by Caserta Concepts, visit our website: http://casertaconcepts.com/
Introducing Kudu, Big Data Warehousing Meetup (Caserta)
Not just an SQL interface or file system, Kudu - the new, updating column store for Hadoop, is changing the storage landscape. It's easy to operate and makes new data immediately available for analytics or operations.
At the Caserta Concepts Big Data Warehousing Meetup, our guests from Cloudera outlined the functionality of Kudu and talked about why it will become an integral component in big data warehousing on Hadoop.
To learn more about what Caserta Concepts has to offer, visit http://casertaconcepts.com/
During a Big Data Warehousing Meetup in NYC, Elliott Cordo, Chief Architect at Caserta Concepts, discussed emerging trends in real-time data processing. The presentation included processing frameworks such as Spark and Storm, as well as datastore technologies ranging from NoSQL to Hadoop. He also discussed exciting new AWS services such as Lambda, Kinesis, and Kinesis Firehose.
Big Data Warehousing Meetup: Dimensional Modeling Still Matters!!! (Caserta)
Joe Caserta went over the details inside the big data ecosystem and the Caserta Concepts Data Pyramid, which includes Data Ingestion, Data Lake/Data Science Workbench and the Big Data Warehouse. He then dove into the foundation of dimensional data modeling, which is as important as ever in the top tier of the Data Pyramid. Topics covered:
- The 3 grains of Fact Tables
- Modeling the different types of Slowly Changing Dimensions
- Advanced Modeling techniques like Ragged Hierarchies, Bridge Tables, etc.
- ETL Architecture.
He also talked about ModelStorming, a technique used to quickly convert business requirements into an Event Matrix and Dimensional Data Model.
This was a jam-packed abbreviated version of 4 days of rigorous training of these techniques being taught in September by Joe Caserta (Co-Author, with Ralph Kimball, The Data Warehouse ETL Toolkit) and Lawrence Corr (Author, Agile Data Warehouse Design).
For more information, visit http://casertaconcepts.com/.
3. @joe_Caserta
Caserta Timeline
• 1986 – Began consulting: database programming and data modeling; 25+ years hands-on experience building database solutions
• 1996 – Data Analysis, Data Warehousing and Business Intelligence since 1996
• 2001 – Founded Caserta Concepts in NYC
• 2004 – Co-author, with Ralph Kimball, The Data Warehouse ETL Toolkit (Wiley)
• 2009 – Web log analytics solution published in Intelligent Enterprise
• 2012 – Launched Big Data practice; launched Big Data Warehousing (BDW) Meetup NYC: 2,000+ members
• 2013 – Launched Data Science, Data Interaction and Cloud practices; laser focus on extending Data Analytics with Big Data solutions
• 2014 – Dedicated to Data Governance techniques on Big Data (innovation); established best practices for big data ecosystem implementations
• 2016 – Awarded Top 20 Big Data Companies 2016; Top 20 Most Powerful Big Data consulting firms; awarded Fastest Growing Big Data Companies 2016
4. @joe_Caserta
About Caserta Concepts
• Consulting: Data Innovation and Modern Data Engineering
• Award-winning company
• Internationally recognized work force
• Strategy, Architecture, Implementation, Governance
• Innovation Partner
  • Strategic Consulting
  • Advanced Architecture
  • Build & Deploy
• Leader in Enterprise Data Solutions
  • Big Data Analytics
  • Data Warehousing
  • Business Intelligence
  • Data Science
  • Cloud Computing
  • Data Governance
6. @joe_Caserta
The Future of Data is Today
As a Mindful Cyborg, Chris Dancy utilizes up to 700 sensors, devices, applications, and services to track, analyze, and optimize as many areas of his existence. Data quantification enables him to see the connections of otherwise invisible data, resulting in dramatic upgrades to his health, productivity, and quality of life.
7. @joe_Caserta
The Evolution of Data Analytics
Four stages of increasing analytics sophistication and business value (source: Gartner):
• Descriptive Analytics – What happened? (Reports)
• Diagnostic Analytics – Why did it happen? (Correlations)
• Predictive Analytics – What will happen? (Predictions)
• Prescriptive Analytics – How can we make it happen? (Recommendations)
Cognitive Computing / Cognitive Data Analytics
8. @joe_Caserta
Traditional Data Analytics Methods
• Design – Top Down, Bottom Up
• Customer Interviews and requirements gathering
• Data Profiling
• Create Data Models
• Facts and Dimensions
• Extract Transform Load (ETL)
• Copy data from sources to data warehouse
• Data Governance
• Stewardship, business rules, data quality
• Put a BI Tool on Top
• Design semantic layer
• Develop reports
9. @joe_Caserta
A Day in the Life
• Onboarding new data is difficult!
• Rigid Structures and Data Governance
• Disconnected/removed from business requirements:
“Hey – I need to analyze some new data”
 IT conforms and profiles the data
 Loads it into dimensional models
 Builds a semantic layer nobody is going to use
 Creates a dashboard we hope someone will notice
…and then you can access your data 3-6 months later to see if it has value!
10. @joe_Caserta
Houston, we have a Problem: Data Sprawl
• There is one application for every 5-10 employees generating copies of the same files, leading to massive amounts of duplicate idle data strewn all across the enterprise. – Michael Vizard, ITBusinessEdge.com
• Employees spend 35% of their work time searching for information... finding what they seek 50% of the time or less. – “The High Cost of Not Finding Information,” IDC
13. @joe_Caserta
The Paradigm Shift
OLD WAY:
• Structure → Ingest → Analyze
• Fixed Capacity
• Monolithic
NEW WAY:
• Ingest → Analyze → Structure
• Dynamic Capacity
• Ecosystem
RECIPE:
• Data Lake
• Cloud
• Polyglot Data Landscape
Big Data is not the problem, it’s the Change Agent.
14. @joe_Caserta
The Evolution of Data Analytics
[Architecture diagram: source systems (Enrollments, Claims, Finance, Others…) feed, via ETL, both a traditional EDW – serving ad-hoc/canned reporting and traditional BI – and a Data Lake: a horizontally scalable environment optimized for analytics, built on the Hadoop Distributed File System (HDFS) across nodes N1-N5, with Spark, Python, Hive, and NoSQL databases supporting canned reporting, ad-hoc query, big data analytics, and data science.]
15. @joe_Caserta
Innovation is the only sustainable competitive advantage a company can have
Innovations may fail, but companies that don’t innovate will fail
17. @joe_Caserta
What’s Old is New Again
Before Data Warehousing Governance:
• Users trying to produce reports from raw source data
• No Data Conformance
• No Master Data Management
• No Data Quality processes
• No Trust: two analysts were almost guaranteed to come up with two different sets of numbers!
Before Data Lake Governance:
• We can put “anything” in Hadoop
• We can analyze anything
• We’re scientists, we don’t need IT, we make the rules
Rule #1: Dumping data into Hadoop with no repeatable process, procedure, or data governance will create a mess.
Rule #2: Information harvested from an ungoverned system will take us back to the old days: No Trust = Not Actionable.
18. @joe_Caserta
Technology:
• Scalable distributed storage: Hadoop, S3
• Pluggable, fit-for-purpose processing: Spark, EMR
Functional Capabilities:
• Remove barriers from data ingestion and analysis
• Storage and processing for all data
• Tunable Governance
20. @joe_Caserta
Components of Data Governance
• Organization – the ‘people’ part: establishing an Enterprise Data Council, Data Stewards, etc.
• Metadata – definitions, lineage (where does this data come from), business definitions, technical metadata
• Privacy/Security – identify and control sensitive data; regulatory compliance
• Data Quality and Monitoring – data must be complete and correct: measure, improve, certify
• Business Process Integration – policies around data frequency, source availability, etc.
• Master Data Management – ensure consistent business-critical data, i.e. Members, Providers, Agents, etc.
• Information Lifecycle Management (ILM) – data retention, purge schedule, storage/archiving
21. @joe_Caserta
Components of Data Governance – For Big Data
The same components (Organization, Metadata, Privacy/Security, Data Quality and Monitoring, Business Process Integration, Master Data Management, ILM) apply, with big-data-specific additions:
• Add Big Data to the overall framework and assign responsibility
• Add data scientists to the Stewardship program
• Assign stewards to new data sets (Twitter, call center logs, etc.)
• Graph databases are more flexible than relational
• Lower-latency services required
• Distributed data quality and matching algorithms
• Data Quality and Monitoring (probably home grown – Drools?)
• Quality checks not only SQL: machine learning, Pig, and MapReduce
• Acting on large-dataset quality checks may require distribution
• Larger scale
• New datatypes
• Integrate with Hive Metastore, HCatalog, home-grown tables
• Secure and mask multiple data types (not just tabular)
• Deletes are more uncommon (unless there is a regulatory requirement)
• Take advantage of compression and archiving (like AWS Glacier)
• Data detection and masking on unstructured data upon ingest
• Near-zero latency, DevOps, core component of business operations
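The "measure, improve, certify" idea behind the Data Quality and Monitoring component can be made concrete with a small rule-based profiler. Everything here is a hypothetical sketch: the columns, validity rules, and 95% pass threshold are invented, and a production version would distribute the checks (e.g. as Spark or MapReduce jobs, as the slide suggests) rather than loop in memory.

```python
# Sketch of rule-based data quality checks: measure completeness and
# validity per column, and report pass/fail against a threshold.
# Columns, rules, and the 95% threshold are hypothetical examples.

def profile(rows, rules, threshold=0.95):
    report = {}
    for column, is_valid in rules.items():
        values = [r.get(column) for r in rows]
        # A value counts as valid only if present AND passing its rule.
        valid = sum(1 for v in values if v is not None and is_valid(v))
        score = valid / len(values) if values else 0.0
        report[column] = {"score": round(score, 2), "passed": score >= threshold}
    return report

members = [
    {"member_id": "M1", "age": 34},
    {"member_id": "M2", "age": -5},   # invalid age
    {"member_id": None, "age": 51},   # incomplete record
]
rules = {
    "member_id": lambda v: bool(v),
    "age": lambda v: 0 <= v <= 120,
}
print(profile(members, rules))
```

Failed checks would feed the monitoring loop: quarantine or correct the offending records, then re-profile until the column can be certified.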
24. @joe_Caserta
Data Munging Versus Reporting
[Chart: data governance requirement (minimum → maximum) against data availability requirement (fast → slow)]
Does data munging in a data science lab need the same restrictive governance as enterprise reporting?
25. @joe_Caserta
The Big Data Pyramid
Usage patterns and data governance by layer, bottom to top:
• Landing Area – source data in “full fidelity”; ingest raw data (Metadata, ILM, Security)
• Data Lake – integrated sandbox; organize, define, complete (plus Data Quality and Monitoring)
• Data Science Workspace – munging, blending, machine learning
• Big Data Warehouse – fully governed (trusted); Data Catalog, Data Integration; arbitrary/ad-hoc queries and reporting
26. @joe_Caserta
The Data Refinery
(Pyramid layers: Landing Area – source data in “full fidelity”; Data Lake – integrated sandbox; Data Science Workspace; Big Data Warehouse.)
• The feedback loop between Data Science, Data Warehouse and Data Lake is critical
• Ephemeral Data Science Workbench
• Successful work products of science must graduate into the appropriate layers of the Data Lake
[Diagram: “cool new data” and “new insights” pass through the governance refinery between layers.]
27. @joe_Caserta
Define and Find Your Data
• Data Classification
  • Import/define business taxonomy
  • Capture/automate relationships between data sets
  • Integrate metadata with other systems
• Centralized Auditing
  • Security access information for every application with data
  • Operational information for execution
• Search & Lineage (Browse)
  • Predefined navigation paths to explore data
  • Text-based search for data elements across the data ecosystem
  • Browse visualization of data lineage
• Security & Policy Engine
  • Rationalize compliance policy at run-time
  • Prevent data derivation based on classification (re-classification)
Key Requirements: automatic data discovery, metadata tagging, classification.
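The policy-engine idea on this slide — blocking data derivation based on classification — could look roughly like the sketch below. The catalog, classification levels, and data set names are all invented for illustration; a real engine would enforce this at query or job-submission time.

```python
# Hypothetical sketch of classification-based derivation control:
# tag data sets with a classification, and refuse derivations that
# would mix a sensitive input into a less-restricted output.

LEVELS = {"public": 0, "internal": 1, "pii": 2}

catalog = {
    "web_logs": "internal",
    "member_master": "pii",
}

def derive(output_name, sources, output_classification):
    # A derived data set must be classified at least as high as its
    # most sensitive input; otherwise the derivation is refused.
    required = max(LEVELS[catalog[s]] for s in sources)
    if LEVELS[output_classification] < required:
        raise PermissionError(
            f"{output_name}: output must be classified at level >= {required}")
    catalog[output_name] = output_classification  # register the new data set
    return output_name

derive("session_stats", ["web_logs"], "internal")   # allowed: internal -> internal
try:
    derive("enriched_sessions", ["web_logs", "member_master"], "internal")
except PermissionError as e:
    print("blocked:", e)                            # pii input, internal output
```

Re-classification falls out naturally: to allow the blocked derivation, the output would have to be registered as "pii", carrying the restriction forward through the lineage.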
28. @joe_Caserta
Caution: Assembly Required
• Some of the most hopeful tools are brand new or in incubation!
• Enterprise big data implementations typically combine products with custom-built components
• People, processes, and business commitment are still critical!
Tool categories: Data Integration, Data Catalog & Governance, Emerging Solutions.
29. @joe_Caserta
Move to the Cloud?
Existing On-Premise Solution:
• Challenges with operations of data servers in the Data Center
• Increasing infrastructure complexity
• Keeping up with data growth
Cloud Advantages:
• Reduced upfront capital investment
• Faster speed to value
• Elasticity
“Those that go out and buy expensive infrastructure find that the problem scope and domain shift really quickly. By the time they get around to answering the original question, the business has moved on.” – Matt Wood, AWS
30. @joe_Caserta
Come out and Play
Experience the CIL – Caserta Innovations Lab
Big Data Warehousing Meetup
• Meet monthly to share data best practices and experiences
• 3,800+ Members
http://www.meetup.com/Big-Data-Warehousing/
http://www.slideshare.net/CasertaConcepts/
Examples of Previous Topics:
• Data Governance, Compliance & Security in Hadoop w/Cloudera
• Real Time Trade Data Monitoring with Storm & Cassandra
• Predictive Analytics
• Exploring Big Data Analytics Techniques w/Datameer
• Using a Graph DB for MDM & Relationship Mgmt
• Data Science w/Claudia Perlich & Revolution Analytics
• Processing 1.4 Trillion Events in Hadoop
• Building a Relevance Engine using Hadoop, Mahout & Pig
• Big Data 2.0 – YARN Distributed ETL & SQL w/Hadoop
• Intro to NoSQL w/10GEN
31. @joe_Caserta
Thank You / Q&A
Joe Caserta
President, Caserta Concepts
joe@casertaconcepts.com
@joe_Caserta
32. @joe_Caserta
The Data Scientist Winning Trifecta
• Modern Data Engineering / Data Preparation
• Domain Knowledge / Business Expertise
• Advanced Mathematics / Statistics
33. @joe_Caserta
Electronic Medical Records (EMR) Analytics
[Architecture diagram: an edge node receives ~100k files (variants 1..n); Forqlift packs them into sequence files for an HDFS put into the Hadoop Data Lake; a Pig EMR processor, with a UDF library and Python wrapper, produces Provider and Member tables plus 15 more entities (Parquet); Sqoop loads the Netezza DW, which holds the Provider and Member tables plus more dimensions and facts.]
• Receive Electronic Medical Records from various providers in various formats
• Address the Hadoop ‘small file’ problem
• No barrier for onboarding and analysis of new data
• Blend new data with the Data Lake and Big Data Warehouse
• Machine Learning
• Text Analytics
• Natural Language Processing
• Reporting
• Ad-hoc queries
• File ingestion
• Information Lifecycle Mgmt
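The ‘small file’ mitigation on this slide — packing ~100k tiny files into a few large sequence files before the HDFS put — can be illustrated with a generic packer. This is a simplified sketch: the pipeline shown used Forqlift to build real Hadoop SequenceFiles, whereas here files are just grouped as (name, payload) pairs up to a target container size.

```python
# Illustration of the Hadoop "small file" mitigation: bundle many tiny
# files into a few large containers before loading to HDFS, so the
# NameNode tracks 4 objects instead of 100,000. Simplified stand-in
# for SequenceFiles built with a tool like Forqlift.

def pack(files, max_bytes=128 * 1024 * 1024):
    # files: iterable of (name, payload-bytes). Yields lists of pairs
    # whose combined payload stays within max_bytes (one block-sized chunk).
    batch, size = [], 0
    for name, payload in files:
        if batch and size + len(payload) > max_bytes:
            yield batch
            batch, size = [], 0
        batch.append((name, payload))
        size += len(payload)
    if batch:
        yield batch

# 100k tiny 40-byte files, packed into ~1 MB containers.
small_files = [(f"record_{i}.xml", b"x" * 40) for i in range(100_000)]
containers = list(pack(small_files, max_bytes=1_000_000))
print(len(containers))  # 4
```

With 128 MB containers matching the HDFS block size, each container maps to one block, which is what restores efficient MapReduce/Pig processing over the ingested records.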