This document discusses how NetApp solutions can help businesses bridge their MongoDB databases across on-premises and cloud environments. It provides an introduction to NetApp and describes how their storage solutions and data fabric can enable hybrid cloud for MongoDB. Specific solutions and technologies discussed include NetApp ONTAP for storage management and provisioning, FlexClone for development/testing, and SolidFire for high performance MongoDB deployments. Customer examples and performance benefits are also summarized.
MongoDB Breakfast Milan - Mainframe Offloading Strategies (MongoDB)
The document summarizes a MongoDB event focused on modernizing mainframe applications. The event agenda includes presentations on moving from mainframes to operational data stores, a demo of a mainframe offloading solution from Quantyca, and stories of mainframe modernization. Benefits of using MongoDB for mainframe modernization include 5-10x developer productivity and an 80% reduction in mainframe costs.
MongoDB .local Bengaluru 2019: Lift & Shift MongoDB to Atlas (MongoDB)
Managing and scaling the infrastructure for critical business data can be a real pain. To handle this massive scale of data effectively, thousands of users of MongoDB from all around the world have migrated their large and small databases to MongoDB Atlas.
By the end of this talk, you'll have a better understanding of the “how” and “why” of migration, and you'll be able to leverage it in your organisation with greater confidence. I'll demo the migration of a real-time MongoDB application from existing cloud infrastructure to MongoDB Atlas.
If you're a developer, a DBA, or a business stakeholder, and your organisation is using or planning to use MongoDB on-premises or with any other cloud vendor, this talk will help you gain insights into the best way to run MongoDB.
The rise of microservices - containers and orchestration (Andrew Morgan)
Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver. In this session, the concepts behind containers and orchestration will be explained and how to use them with MongoDB.
MongoDB World 2016: Scaling Targeted Notifications in the Music Streaming Wor... (MongoDB)
This document summarizes key information about Saavn, India's largest music streaming service. Some key points:
- Saavn has 18 million global monthly active users, with 14 million in India. The majority (64%) use Android devices to access over 25 million tracks.
- Push notifications are a primary driver of mobile app growth for Saavn. They send over 30 million notifications per day and see 3x more engagement from targeted notifications.
- Saavn stores user notification messages and activity data in MongoDB. They upgraded to WiredTiger for its document locking and high performance. Maintaining over 500GB of user data required implementing sharding and migrating the data.
RedisConf18 - Redis in Dev, Test, and Prod with the OpenShift Service Catalog (Redis Labs)
This document discusses using Redis in development, test, and production environments with the OpenShift Service Catalog.
It demonstrates using Redis for iterative development with ephemeral instances in development. In testing, it shows production-like configurations with immutable infrastructure, recovery testing, and zero-downtime deployments. For production, it notes the Service Catalog can provide targeted Redis instances and make external services discoverable. It promotes the Open Service Broker API and OpenShift Service Catalog for expanding service options.
RedisConf18 - Redis on Google Cloud Platform (Redis Labs)
This document provides an overview of Redis on Google Cloud Platform and their new fully managed Redis service called Cloud Memorystore for Redis. The key points are:
- Google Cloud Platform offers a new managed Redis service called Cloud Memorystore for Redis that makes Redis fast, scalable, highly available, secure and fully managed.
- Cloud Memorystore for Redis offers different tiers (Basic and Standard) with different availability levels and SLAs. It allows scaling instances seamlessly to achieve high throughput and low latency.
- Using Cloud Memorystore for Redis provides increased reliability, security and ease of use compared to self-managed Redis, as Google handles the infrastructure management and maintenance.
MongoDB is a leading database technology that combines the foundations of RDBMS with the innovations of NoSQL, allowing organizations to simultaneously boost productivity and lower TCO.
MongoDB Enterprise Advanced is a finely-tuned package of advanced software, enterprise-grade support, and other services designed to accelerate your success with MongoDB in every stage of your app lifecycle, from early development to the scale-out of mission-critical production environments.
With the release of 3.2, MongoDB Enterprise Advanced now includes:
MongoDB Ops Manager 2.0
MongoDB Compass, the MongoDB GUI
MongoDB Connector for Business Intelligence
Encrypted Storage Engine
In-Memory Storage Engine (beta)
Attend this webinar to learn how MongoDB Enterprise Advanced can help you get to market faster and de-risk your mission critical deployments.
Powering Microservices with MongoDB, Docker, Kubernetes & Kafka – MongoDB Eur... (Andrew Morgan)
Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver.
Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all trace when you're done. Need an identical copy of your application stack in multiple environments? Build your own container image and then your entire development, test, operations, and support teams can launch an identical clone environment.
Containers are revolutionizing the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support. Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.
This presentation introduces you to technologies such as Docker, Kubernetes & Kafka which are driving the microservices revolution. Learn about containers and orchestration – and most importantly how to exploit them for stateful services such as MongoDB.
MongoDB Europe 2016 - Deploying MongoDB on NetApp storage (MongoDB)
The document discusses how NetApp storage solutions can maximize investments in MongoDB. It describes MongoDB's internal storage pain points around performance, scalability, management and data movement. NetApp solutions like AFF, EF-Series and SolidFire are presented as optimizing MongoDB deployments by accelerating performance, increasing availability, eliminating data sprawl and streamlining data lifecycle management. Customer use cases demonstrate how NetApp has helped companies improve analytics, scale workloads efficiently and simplify backup/restore processes for MongoDB.
Power Real Estate Property Analytics with MongoDB + Spark (MongoDB)
Speaker: Gheni Abla, Analytics Software Technical Architect, CoreLogic
Level: 200 (Intermediate)
Track: Data Analytics
CoreLogic is a leading global property information, analytics, and solutions provider. The company provides a range of analytic solutions for automated property valuation and appraisals. This presentation will cover a recent project at CoreLogic that utilized MongoDB to store property and ownership data for over 150 million properties. MongoDB provided powerful support for storing and searching location-based property data, the MongoDB-Spark connector facilitated seamless integration between data access and Spark-based distributed analytics processing, and MongoDB’s replication capability provided high availability across data centers. This session will cover CoreLogic’s software architecture and real-world development experiences with geospatial data and the MongoDB-Spark connector.
What You Will Learn:
- How CoreLogic manages and stores data for over 150 million real estate properties in MongoDB, and utilizes MongoDB's geospatial data support.
- How to distribute large-scale analytics process using Spark and improve data access efficiency using the MongoDB-Spark connector.
- How to utilize MongoDB replication for implementing high-availability between two geographically dispersed data centers.
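To make the geospatial idea above concrete, here is a minimal sketch of how property documents shaped for MongoDB's geospatial support might look: a GeoJSON Point in a "location" field (which would be covered by a 2dsphere index on the server). The field names and data are illustrative, and the `near` helper is a pure-Python stand-in for what a `$near`/`$maxDistance` query would compute server-side, so the sketch runs without a database.

```python
import math

# Hypothetical property documents: GeoJSON coordinates are [longitude, latitude].
properties = [
    {"parcel_id": "A-100", "address": "1 Main St",
     "location": {"type": "Point", "coordinates": [-117.16, 32.72]}},
    {"parcel_id": "B-200", "address": "9 Elm Ave",
     "location": {"type": "Point", "coordinates": [-117.15, 32.71]}},
    {"parcel_id": "C-300", "address": "5 Oak Rd",
     "location": {"type": "Point", "coordinates": [-118.40, 34.07]}},
]

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two (lon, lat) points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near(docs, lon, lat, max_metres):
    """Pure-Python stand-in for a $near query with $maxDistance."""
    return [d for d in docs
            if haversine_m(lon, lat, *d["location"]["coordinates"]) <= max_metres]

# Properties within 5 km of a point in San Diego: A-100 and B-200, but not C-300 (Los Angeles).
nearby = near(properties, -117.16, 32.72, 5_000)
```
In a real deployment the distance filtering is done by the database against the 2dsphere index; the sketch only illustrates the document shape and the query semantics.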
Practical Design Patterns for Building Applications Resilient to Infrastructu... (MongoDB)
Speaker: Feng Qu, Sr MTS, eBay
Level: 200 (Intermediate)
Track: Developer
Building applications resilient to infrastructure failure is essential for systems that run in distributed environments, including those backed by a MongoDB database. Failure can come from infrastructure components such as nodes, network switches, or an entire data center. On occasion, MongoDB nodes may also be marked down by Operations for administrative tasks such as a software upgrade or adding extra capacity.
In this session, we will discuss how to build resilient applications using appropriate design patterns suitable to enterprise class MongoDB applications.
What You Will Learn:
- How to manage updates within a resilient architecture.
- Design patterns for resilient applications.
- Practical advice for deploying resilient enterprise applications.
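One widely used resilience pattern of the kind this session covers is retry with exponential backoff, so that a transient failure (for example, a replica set primary stepping down during a rolling upgrade) does not surface as an application error. The sketch below is illustrative, not taken from the talk; `flaky_write` simulates an operation that fails twice before succeeding.

```python
import time

def with_retries(op, attempts=4, base_delay=0.01):
    """Retry an operation that may raise transient errors, backing off exponentially.
    In a MongoDB context, op would be a read/write that can fail during failover."""
    for i in range(attempts):
        try:
            return op()
        except ConnectionError:
            if i == attempts - 1:
                raise                      # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))  # 10ms, 20ms, 40ms, ...

calls = {"n": 0}

def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:                     # simulate two transient failures
        raise ConnectionError("not primary")
    return "ok"

result = with_retries(flaky_write)         # succeeds on the third attempt
```
Production drivers often implement a variant of this (retryable writes), but having the pattern in application code also covers operations the driver cannot safely retry on its own.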
HBaseConAsia2018 Track2-3: Bringing MySQL Compatibility to HBase using Databa... (Michael Stack)
This document discusses AntsDB, an open source project that brings MySQL compatibility to HBase in order to address the need for relational database capabilities in NoSQL systems. It describes AntsDB's architecture, which uses caching and other techniques to provide low-latency transactions and joins on HBase. Performance tests show AntsDB can achieve high throughput for writes and OLTP workloads. AntsDB aims to be complementary to HBase by virtualizing MySQL atop HBase while simulating MySQL behaviors and allowing applications built for MySQL to run unchanged on HBase.
What's new with enterprise Redis - Leena Joshi, Redis Labs (Redis Labs)
Redis Labs manages over 160,000 HA databases and 10,000 clustered databases without data loss, in spite of one node failure a day and one data center outage per month. Using Enterprise Redis (RLEC), Redis Labs delivers seamless zero-downtime scaling, true high availability with persistence, cross-rack/zone/datacenter replication, and instant automatic failover. Learn how: join this session for a deep dive into how enterprise Redis makes for no-hassle Redis deployments, and the roadmap for new Redis capabilities. Discover new cost savings with Redis on Flash for cost-effective, high-performance operations and analytics.
Data Streaming with Apache Kafka & MongoDB - EMEA (Andrew Morgan)
A new generation of technologies is needed to consume and exploit today's real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.
This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
Eric Lubow gave a presentation on how SimpleReach fixed problems with their MongoDB implementation. They implemented a sharded replica set architecture across availability zones for high availability and speed. They improved data accuracy by separating databases and enforcing consistent access patterns. SimpleReach also implemented a controlled data flow using NSQ to batch and route data between MongoDB, Cassandra, Vertica, and other tools for analytics and real-time usage. Their architecture provides redundancy, minimal downtime for changes, and monitors performance using tools like Nagios, Statsd and Cloudwatch.
RedisConf18 - Open Source Built for Scale: Redis in Amazon ElastiCache Service (Redis Labs)
This document summarizes Andi Gutmans' presentation about Redis and Amazon ElastiCache. The key points are:
1. Amazon ElastiCache provides a fully managed Redis compatible data store for building low latency and highly scalable applications on AWS.
2. AWS contributes to open source Redis projects through bug fixes, feature additions and code improvements. Recent contributions include improvements to replication, migration and scaling.
3. A new open source project was announced to add encryption-in-transit to Redis, allowing easy encryption of client-server and cluster communications without requiring application changes.
4. AWS is contributing encryption functionality to the core Redis codebase through an open pull request.
Using MongoDB to Build a Fast and Scalable Content Repository (Nuxeo)
Nuxeo provides native integration with the leading NoSQL database, MongoDB, as a supported content and data storage back end. Developers can quickly and easily deploy cloud-ready enterprise applications and we’re happy to share that success story during their new annual event in Europe.
Experience first-hand how the world’s fastest-growing database is powering today’s innovations and can help you gain a competitive advantage. Learn how large enterprises have delivered new applications to market at the speed of a lean startup, and how startups scale and execute on their giant ideas like an enterprise.
HBaseConAsia2018 Track3-5: HBase Practice at Lianjia (Michael Stack)
This document discusses different big data scenarios using HBase, including:
1. Architecture evolution over time, including OLAP and real-time ETL scenarios
2. OLAP scenario requirements, such as handling billions of records with sub-second queries, with examples using Kylin
3. The monitoring scenario, showing how different systems are monitored using technologies like Grafana
4. Brief mentions of data mining and HDI scenarios
Speaker: Raphael Londner, Developer Advocate, MongoDB
Speaker: Paul Sears, Partner Solutions Architect, Amazon Web Services
Level: 200 (Intermediate)
Track: Atlas
In this session, AWS Solutions Architect Paul Sears will provide an overview of AWS Lambda functions, including some key integration use cases with MongoDB Atlas. Developer Advocate Raphael Londner will walk you through how to code a Lambda function connected to MongoDB Atlas, with a specific focus on performance optimization. Raphael will then demonstrate how to orchestrate multiple Lambda functions inside a state machine built on top of AWS Step Functions.
What You Will Learn:
- Common use cases for which MongoDB Atlas + AWS Lambda help you boost developer productivity and minimize operational costs.
- How to write a performance-optimized Lambda function that re-uses MongoDB Atlas database connections across multiple calls in order to speed up queries.
- How AWS Step Functions can help you easily build application workflows to coordinate your Lambda functions.
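The connection-reuse point above rests on a Lambda behavior: module-scope state survives across "warm" invocations of the same execution environment, so a client created outside the handler is reconnected only on cold starts. The sketch below simulates this pattern without AWS or a database; `make_client` is an illustrative stand-in for constructing a `pymongo.MongoClient` from an Atlas URI, and the counter exists only to demonstrate that the expensive connect happens once.

```python
# Stand-in for an expensive connection setup (TLS handshake, auth, discovery).
CONNECTS = {"count": 0}

def make_client():
    CONNECTS["count"] += 1      # count how many times we actually "connect"
    return object()             # placeholder for a real MongoDB client

client = None                   # module scope: cached across warm invocations

def handler(event, context):
    """Lambda-style handler that lazily creates and then reuses the client."""
    global client
    if client is None:          # cold start: connect once
        client = make_client()
    # ... perform queries with `client` here ...
    return {"connections_opened": CONNECTS["count"]}

# Two "warm" invocations of the same execution environment share one client.
first = handler({}, None)
second = handler({}, None)
```
The anti-pattern is creating the client inside the handler body, which pays the full connection cost on every invocation and can exhaust the cluster's connection limit under load.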
NetApp IT Data Center Strategies to Enable Digital Transformation (NetApp)
During an Insight Las Vegas 2017 breakout presentation, NetApp IT Customer-1 Director Stan Cox and Senior Storage Architect Eduardo Rivera explained how NetApp IT enables digital transformation with data center strategies that incorporate ONTAP AFF systems in the data center to save power, cooling, and space, and NetApp Private Storage and ONTAP Cloud to leverage the public cloud while retaining control of their data. Using OnCommand Insight for data center management, and its integration with their configuration management database, the NetApp IT team knows what's in their data centers in terms of functionality, usage, and interconnections. NetApp IT believes knowing what's in your data centers is fundamental to maintaining total cost of ownership, adapting to new technologies, leveraging the cloud while owning your data, and enabling digital transformation.
The document provides an overview of Macroview Solution's data center virtualization offerings. It discusses their technology partners including VMware, Cisco, Citrix, Microsoft, and NetApp. It then summarizes their service catalog including virtualization, compute, storage, virtual desktop, enterprise mobility, disaster recovery, and multi-cloud capabilities. Specific storage solutions from NetApp are highlighted including all-flash arrays, snapshots, cloning, deduplication, encryption, quality of service, and data replication technologies.
DevOps@Scale - IBM Cloud and NetApp - Insight Berlin (Sreeni Pamidala)
DevOps@scale. IBM and NetApp recently joined forces to dramatically accelerate developer workspace creation and build times (by more than 30x!) for the developers leveraging IBM Cloud Container Service powered by Kubernetes. Agile and Lean methods drove the entire project lifecycle from early design thinking workshops to pair programming with weekly playbacks. By starting with a design thinking focus and practicing agile methods, the team repeatedly demonstrated the ability to pivot, accelerate development, and ultimately highlight the flexibility and high-value capabilities of both the NetApp Data Fabric and the IBM Cloud Platform. This session will take an in-depth look at the practices used by the joint development team, and the high-level results they were able to achieve in a very short period of time.
Recipe for Success: The Right Ingredients for Enterprise-Class Cloud Data Man... (Amazon Web Services)
When data is the lifeblood of your organisation, best-in-class data management and protection practices are a no-brainer. With NetApp and AWS, there is a wide menu of data services available to help with backup and disaster recovery, accelerating DevOps, data warehouses and analytics, and running high-performance business applications, to mention a few. With NetApp and AWS, how can you ensure that you don't compromise on cost, performance, security, or manageability? With the right cloud data management solution, why not have the best of both worlds and get ahead! In this session, learn the secret sauce for optimising the foundation of your cloud data management, and how enterprise customers like Monash University and REA Group have been leveraging the economics of cloud for their needs. Yours sincerely, NetApp, the Cloud Data Management Experts, DP (Hons), PhD (DataFabric)
Speakers:
Tiedan Yu, Senior Storage Engineer, Monash University
Jesse Pratt, Infrastructure Manager, REA Group
Matt Moore, Hybrid Cloud Architect, NetApp
Keiran McCartney, Alliances & Solutions Manager, NetApp
This document discusses NetApp's integration with OpenStack. It begins with an introduction to NetApp, describing it as a global Fortune 500 company and leader in data management solutions. It then covers basic OpenStack and NetApp terminology. The remainder summarizes NetApp's storage portfolio for OpenStack and how its different solutions provide capabilities like snapshots, cloning, and quality of service controls when used with OpenStack interfaces like Cinder and Manila. It concludes with a demonstration of provisioning OpenStack volumes using NetApp storage.
NetApp IT Efficiencies Gained with Flash, NetApp ONTAP, OnCommand Insight, Al... (NetApp)
During an Insight Las Vegas 2017 breakout presentation, Pridhvi Appineni, NetApp IT Senior Manager of Customer-1, talked about IT's business results from running a global enterprise on NetApp technology. From being cloud-ready to data-compliant to disaster-prepared, NetApp technology is at the heart of our stable, reliable IT data management environment.
The combination of StackPointCloud with NetApp creates NetApp Kubernetes Service, the industry’s first complete Kubernetes platform for multi-cloud deployments and a complete cloud-based stack for Azure, Google Cloud, AWS, and NetApp HCI. Further, Trident is a fully supported open source project maintained by NetApp, designed from the ground up to help meet the sophisticated persistence demands of containerized applications.
NetApp HCI provides an enterprise-scale hyperconverged infrastructure solution for end user computing. It combines all-flash storage from NetApp SolidFire with VMware ESXi hypervisors for compute and a centralized management platform.
Lessons learned processing 70 billion data points a day using the hybrid cloud (DataWorks Summit)
NetApp receives 70 billion data points of telemetry information each day from its customer’s storage systems. This telemetry data contains configuration information, performance counters, and logs. All of this data is processed using multiple Hadoop clusters, and feeds a machine learning pipeline and a data serving infrastructure that produces insights for customers via an application called Active IQ. We describe the evolution of our Hadoop infrastructure from a traditional on-premises architecture to the hybrid cloud, and lessons learned.
We’ll discuss the insights we are able to produce for our customers, and the techniques used. Finally, we describe the data management challenges with our multi-petabyte Hadoop data lake. We solved these problems by building a unified data lake on-premises and using the NetApp Data Fabric to seamlessly connect to public clouds for data science and machine learning compute resources.
Architecting a truly hybrid cloud implementation allowed NetApp to free up our data scientists to use any software on any cloud, kept the customer log data safe on NetApp Private Storage in Equinix, resulted in faster ability to innovate and release new code and provided flexibility to use any public cloud at the same time with data on NetApp in Equinix.
Speakers:
Pranoop Erasani, NetApp, Senior Technical Director, ONTAP
Shankar Pasupathy, NetApp, Technical Director, ACE Engineering
NetApp IT and how Data Fabric Simplifies Data Management across the Hybrid Cl... (NetApp)
During an Insight Pavilion Theater presentation, Kamal Vyas, IT Senior Storage Engineer, provided an explanation of Data Fabric and how it enables IT to manage data across a hybrid cloud environment—critical to IT’s next generation service delivery platform to leverage both private and public cloud. Data Fabric provides the framework to enable NetApp IT to use Cloud on its terms, by placing application workloads in the cloud that offers the right service level, right price and ability to dynamically migrate as service levels and price requirements change.
NetApp IT Uses NetApp Manageability SDK to do More Than Configuration Tasks (NetApp)
During an Insight Pavilion Theater presentation, Ezra Tingler, IT Senior Storage Engineer, provided an explanation of using ONTAP APIs, like the NetApp Manageability (NM) Software Development Kit (SDK), to do more than configuration tasks. NetApp IT automates storage administration by writing scripts that are kicked off by monitoring tools to resolve issues without bothering Level 1 and 2 support. Using NM SDK, NetApp IT can install and configure 24 nodes in two hours, enforce 100% consistent configurations across the ONTAP environment, and address aggregate nearly full alerts and volume auto-size issues within minutes.
NetApp HCI
Hyper Converged Infrastructure (HCI) continues to evolve rapidly to meet the expectations of the Enterprise. First-generation HCI platforms achieved an immediate return on investment and met a simple set of goals to achieve rapid adoption and success:
• The ability to collapse and consolidate large traditional infrastructures to reduce capital expenditures (CAPEX)
• Reduced operating expenses (OPEX) through simplified management tools, reduced complexity, and less dependency on specialized technical resources
How Hyperconvergence Improves the Economics of Your IT (NetApp)
The document describes instructions for connecting audio to an online webinar. It provides three options for connecting audio: calling using a computer, calling a phone number, or having the system call back a provided number. It also includes the webinar title and information about asking questions.
The document is a presentation about NetApp HCI (Hyper Converged Infrastructure). It discusses how NetApp HCI provides enterprise-scale HCI that offers guaranteed performance, automated infrastructure management, flexibility and scale. It also connects to NetApp's data fabric for additional data services and management capabilities. The presentation provides an overview of the NetApp HCI architecture, deployment process, management tools, performance features, scaling options, and integration with other NetApp products.
From Mainframe to Microservices: Vanguard’s Move to the Cloud - ENT331 - re:I... (Amazon Web Services)
The document discusses Vanguard's move from a mainframe-based architecture to microservices in the cloud. It describes Vanguard's initial complex IT environment with monolithic applications and a mainframe. Vanguard's approach was to replicate data from the mainframe to the cloud, refactor applications to make API calls to microservices, and migrate batch processes. This "strangulation strategy" allowed the monolith to be gradually replaced. The document outlines Vanguard's cloud data architecture and how it leveraged AWS services like RDS, DynamoDB, Lambda and Kinesis while addressing compliance and operational requirements. Lessons learned included preparing for regulatory needs and pushback to cloud migration.
AEM Asset Optimizations & Best Practices (Kanika Gera)
The document discusses best practices for optimizing Adobe Experience Manager (AEM) assets. It covers asset capabilities in AEM, reasons for performance bottlenecks, optimizations to improve performance like configuring Java settings and workflows, common asset architectures for different use cases, and guidance on sizing assets and repositories. The presentation aims to help users maximize the performance of their AEM assets.
PernixData Presentation @ VMUGIT UserCon 2015 (VMUG IT)
The document discusses PernixData's solutions for optimizing storage performance, management, and cost in data centers. PernixData's software provides VM-aware storage intelligence that accelerates I/O performance by placing storage close to applications using server flash and RAM. This allows organizations to turn any shared storage infrastructure into a high-performance all-flash array without disruption. PernixData's solutions have helped many customers optimize strategic operations, maximize efficiency, and save substantial costs on hardware and operations.
A global automation services company wanted to move their applications to the cloud to reduce costs and reach more customers. However, their data center architecture was too expensive and inflexible. NetApp Cloud Volumes provided cloud-native file services that delivered the high performance required for the company's database applications, allowing them to run their applications in the cloud and save on development and maintenance costs while meeting customer expectations.
Similar to Bridging Your Business Across the Enterprise and Cloud with MongoDB and NetApp
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas (MongoDB)
This presentation discusses migrating data from other data stores to MongoDB Atlas. It begins by explaining why MongoDB and Atlas are good choices for data management. Several preparation steps are covered, including sizing the target Atlas cluster, increasing the source oplog size, and testing connectivity. Live migration, mongomirror, and dump/restore options are presented for migrating replica sets or sharded clusters. Post-migration steps like monitoring and backups are also discussed. Finally, migration from other data stores like AWS DocumentDB, Azure CosmosDB, DynamoDB, and relational databases is briefly covered.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel... (MongoDB)
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB.
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB (MongoDB)
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T... (MongoDB)
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data (MongoDB)
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance.
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
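One well-known schema design for the workload this talk describes is the bucketing pattern: instead of one document per reading, readings are grouped into one document per sensor per time window, which reduces document counts and index entries (the memory and disk factors called out above). This is a minimal, illustrative sketch; the field names and the one-hour bucket size are assumptions, not taken from the talk.

```python
from datetime import datetime

# Raw IoT readings: (sensor id, timestamp, value). Illustrative sample data.
readings = [
    ("sensor-1", datetime(2020, 1, 1, 10, 5), 21.0),
    ("sensor-1", datetime(2020, 1, 1, 10, 35), 21.4),
    ("sensor-1", datetime(2020, 1, 1, 11, 2), 21.9),
    ("sensor-2", datetime(2020, 1, 1, 10, 7), 19.2),
]

# Build one bucket document per (sensor, hour) instead of one doc per reading.
buckets = {}
for sensor, ts, value in readings:
    hour = ts.replace(minute=0, second=0, microsecond=0)
    doc = buckets.setdefault((sensor, hour), {
        "sensor": sensor,        # bucket identity fields (would be indexed)
        "hour": hour,
        "count": 0,              # pre-computed summary for cheap queries
        "readings": [],          # the embedded measurements
    })
    doc["readings"].append({"ts": ts, "v": value})
    doc["count"] += 1
```
In MongoDB the append-and-increment step would typically be a single upsert using `$push` and `$inc`, so each reading is one write against its bucket document.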
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] (MongoDB)
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2 (MongoDB)
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB
The MongoDB Kubernetes Operator is ready for prime time. Learn how MongoDB can be used with the most popular orchestration platform, Kubernetes, to bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
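The mindset shift from rows-and-columns to documents can be sketched in a few lines of Python: data that is read together is stored together. The order/line-item tables and field names below are hypothetical, chosen only to contrast a normalized, join-based layout with an embedded document.

```python
# Tabular mindset: two normalized tables, joined at read time by order_id.
orders = [{"order_id": 1, "customer": "Ada"}]
line_items = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 1, "sku": "B-200", "qty": 1},
]

def embed(orders, line_items):
    """Document mindset: embed the line items inside each order document."""
    docs = []
    for o in orders:
        items = [{"sku": li["sku"], "qty": li["qty"]}
                 for li in line_items if li["order_id"] == o["order_id"]]
        docs.append({**o, "items": items})
    return docs

docs = embed(orders, line_items)
print(docs[0]["customer"], len(docs[0]["items"]))  # Ada 2
```

A single document read now returns the whole order, where the tabular layout would need a join; this is the core benefit the talk's data modeling basics build on.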
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
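The E-S-R guideline above can be made concrete with a small helper that assembles a compound-index key list from classified query predicates. The `esr_index` helper and the example query are illustrative, not part of any MongoDB API; in `mongosh` you would pass the resulting key list to `createIndex`.

```python
def esr_index(equality, sort, range_):
    """Build a compound-index key list following the E-S-R guideline:
    equality fields first, then sort fields, then range fields."""
    keys = [(f, 1) for f in equality]
    keys += sort                      # sort fields keep their direction
    keys += [(f, 1) for f in range_]
    return keys

# Query: {status: "A", qty: {$gt: 10}} sorted by {created: -1}
index = esr_index(equality=["status"],
                  sort=[("created", -1)],
                  range_=["qty"])
print(index)  # [('status', 1), ('created', -1), ('qty', 1)]
```

Placing `status` first narrows the bounds with an exact match, `created` next lets the index return documents already sorted (a non-blocking sort), and `qty` last confines the range scan to the smallest possible portion of the index.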
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB
The aggregation pipeline has powered analysis of your data since version 2.2. In 4.2 it gained even more capability: you can now use it for more expressive queries, for updates, and for writing results out to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups, and materialized views.
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long-term, archival data in cost-effective object storage like Amazon S3, Google Cloud Storage, and Azure Blob Storage. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB
Virtual assistants are becoming the new norm in daily life, with Amazon's Alexa being the leader in the space. As a developer, not only do you need to build web- and mobile-compliant applications, but you also need to support virtual assistants like Alexa. However, the process isn't quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB
to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications faster.
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB
It has never been easier to order online and get delivery in under 48 hours, very often for free. This ease of use hides a complex market worth more than $8 trillion.
Data is well known in the supply chain world (routes, information on goods, customs, ...), but the value of this operational data remains largely untapped. By combining business expertise and data science, Upply is redefining the fundamentals of the supply chain, enabling each player to overcome market volatility and inefficiency.
Increase Quality with User Access Policies - July 2024Peter Caitens
⭐️ Increase Quality with User Access Policies ⭐️, presented by Peter Caitens and Adam Best of Salesforce. View the slides from this session to learn all about "User Access Policies" and how they can help you onboard users faster and with greater quality.
Discovery Series - Zero to Hero - Task Mining Session 1DianaGray10
This session provides an introduction to task mining. We will go over the different types of task mining and walk through a real-world demo of each type in detail.
"Hands-on development experience using wasm Blazor", Furdak Vladyslav.pptxFwdays
I will share my personal experience of full-time development on wasm Blazor:
The difficulties our team faced: life hacks for Blazor app routing, whether it is necessary to write JavaScript, and which technology stack and architectural patterns we chose
The conclusions we drew and the mistakes we made
Finetuning GenAI For Hacking and DefendingPriyanka Aash
Generative AI, particularly through the lens of large language models (LLMs), represents a transformative leap in artificial intelligence. With advancements that have fundamentally altered our approach to AI, understanding and leveraging these technologies is crucial for innovators and practitioners alike. This comprehensive exploration delves into the intricacies of GenAI, from its foundational principles and historical evolution to its practical applications in security and beyond.
The Challenge of Interpretability in Generative AI Models.pdfSara Kroft
Navigating the intricacies of generative AI models reveals a pressing challenge: interpretability. Our blog delves into the complexities of understanding how these advanced models make decisions, shedding light on the mechanisms behind their outputs. Explore the latest research, practical implications, and ethical considerations, as we unravel the opaque processes that drive generative AI. Join us in this insightful journey to demystify the black box of artificial intelligence.
Dive into the complexities of generative AI with our blog on interpretability. Find out why making AI models understandable is key to trust and ethical use and discover current efforts to tackle this big challenge.
Keynote : AI & Future Of Offensive SecurityPriyanka Aash
In the presentation, the focus is on the transformative impact of artificial intelligence (AI) in cybersecurity, particularly in the context of malware generation and adversarial attacks. AI promises to revolutionize the field by enabling scalable solutions to historically challenging problems such as continuous threat simulation, autonomous attack path generation, and the creation of sophisticated attack payloads. The discussions underscore how AI-powered tools like AI-based penetration testing can outpace traditional methods, enhancing security posture by efficiently identifying and mitigating vulnerabilities across complex attack surfaces. The use of AI in red teaming further amplifies these capabilities, allowing organizations to validate security controls effectively against diverse adversarial scenarios. These advancements not only streamline testing processes but also bolster defense strategies, ensuring readiness against evolving cyber threats.
Cracking AI Black Box - Strategies for Customer-centric Enterprise ExcellenceQuentin Reul
The democratization of Generative AI is ushering in a new era of innovation for enterprises. Discover how you can harness this powerful technology to deliver unparalleled customer value and securing a formidable competitive advantage in today's competitive market. In this session, you will learn how to:
- Identify high-impact customer needs with precision
- Harness the power of large language models to address specific customer needs effectively
- Implement AI responsibly to build trust and foster strong customer relationships
Whether you're at the early stages of your AI journey or looking to optimize existing initiatives, this session will provide you with actionable insights and strategies needed to leverage AI as a powerful catalyst for customer-driven enterprise success.
Generative AI technology is a fascinating field that focuses on creating comp...Nohoax Kanont
Generative AI technology is a fascinating field that focuses on creating computer models capable of generating new, original content. It leverages the power of large language models, neural networks, and machine learning to produce content that can mimic human creativity. This technology has seen a surge in innovation and adoption since the introduction of ChatGPT in 2022, leading to significant productivity benefits across various industries. With its ability to generate text, images, video, and audio, generative AI is transforming how we interact with technology and the types of tasks that can be automated.
Keynote : Presentation on SASE TechnologyPriyanka Aash
Secure Access Service Edge (SASE) solutions are revolutionizing enterprise networks by integrating SD-WAN with comprehensive security services. Traditionally, enterprises managed multiple point solutions for network and security needs, leading to complexity and resource-intensive operations. SASE, as defined by Gartner, consolidates these functions into a unified cloud-based service, offering SD-WAN capabilities alongside advanced security features like secure web gateways, CASB, and remote browser isolation. This convergence not only simplifies management but also enhances security posture and application performance across global networks and cloud environments. Discover how adopting SASE can streamline operations and fortify your enterprise's digital transformation strategy.