Performance budgets have been around for more than ten years. Over those years, we’ve learned a lot about what works, what doesn’t, and what we need to improve. In this session, Tammy revisits old assumptions about performance budgets and offers some new best practices. Topics include:
• Understanding performance budgets vs. performance goals
• Aligning budgets with user experience
• Pros and cons of Core Web Vitals
• How to stay on top of your budgets to fight regressions
Slides from my 4-hour workshop on Client-Side Performance Testing, conducted at STPCon 2017 in Phoenix, AZ (March 2017).
Workshop Takeaways:
Understand the difference between Performance Testing and Performance Engineering.
Hands-on experience with some open-source tools to monitor, measure, and automate Client-side Performance Testing.
Examples and code walk-throughs of ways to automate Client-side Performance Testing.
See the blog for more details: https://essenceoftesting.blogspot.com/2017/03/workshop-client-side-performance.html
Metrics, metrics everywhere (but where the heck do you start?) by Tammy Everts
You want a single, unicorn metric that magically sums up the user experience, business value, and numbers that DevOps cares about, but so far, you're just not getting it. So where do you start? In this talk at the 2015 Velocity conference in Santa Clara, Cliff Crocker and I walked through various metrics that answer performance questions from multiple perspectives -- from designer and DevOps to CRO and CEO.
Metrics, Metrics Everywhere (but where the heck do you start?) by SOASTA
Not surprisingly, there’s no one-size-fits-all performance metric (though life would be simpler if there were). Different metrics will give you different critical insights into whether or not your pages are delivering the results you want — both from your end user’s perspective and ultimately from your organization’s perspective. Join Tammy Everts, and walk through various metrics that answer performance questions from multiple perspectives. You’ll walk away with a better understanding of your options, as well as a clear understanding of how to choose the right metric for the right audience.
Mozilla Foundation Metrics - presentation to engineers by John Schneider
@rossbruniges and I talked with our fellow Mozilla Foundation engineers and development teams about gathering the data needed to build a data-driven operation using statsd, graphite, geckoboard, Google Analytics, and New Relic.
As web applications evolve and provide more and more features there is a growing need to accurately measure performance as perceived by users. While measuring performance during development can help to build faster applications, load and response times vary from user to user depending on their device and network conditions.
This talk covers the user-centric performance metrics available and the way we can collect and analyse these data on real users’ devices by leveraging Web APIs and data analysis tools.
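As a sketch of the analysis side described above, field measurements collected from real users are typically reduced to a percentile figure rather than an average. The beacon structure, metric names, and nearest-rank method below are illustrative assumptions, not code from the talk:

```python
# Hypothetical RUM beacons: one dict per page view, metric values in ms.
beacons = [
    {"lcp": 1800, "fid": 40},
    {"lcp": 2400, "fid": 90},
    {"lcp": 3100, "fid": 120},
    {"lcp": 1500, "fid": 30},
]

def percentile(values, p):
    """Nearest-rank percentile: the value at or below which p% of samples fall."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# p75 is a common choice for field data: it describes most users' experience
# without being dominated by extreme outliers.
p75_lcp = percentile([b["lcp"] for b in beacons], 75)
print(p75_lcp)  # → 2400
```

A percentile like p75 is preferred over the mean because load times are heavily right-skewed across devices and networks.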
When addressing website performance issues, developers typically jump to conclusions, focusing on the perceived causes rather than uncovering the real causes through research.
Mitchel Sellers will show you how to approach website performance issues with a level of consistency that ensures they're properly identified and resolved so you'll avoid jumping to conclusions in the future.
You can watch the webinar recording here:
https://www.postsharp.net/documentation/video?id=190066128
This document discusses how to select the right web technology and implementation partner for a web portal project. It emphasizes that the decisions made will impact the long-term success of the portal. When evaluating technology, consider qualities like reliability, security, ease of use, support, scalability and flexibility. When evaluating a partner, look for one with a collaborative approach, adequate support and portal experience. The document provides guidelines on what to assess in a technology and partner to ensure the best choice is made.
The document provides a checklist for front-end performance optimization. It includes recommendations to establish performance metrics and goals, optimize assets like images, videos, fonts and JavaScript, choose frameworks and CDNs wisely, and set priorities to optimize the core experience for all users. Key metrics to target include a Time to Interactive under 5 seconds on 3G and First Input Delay below 100ms.
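The thresholds named above (Time to Interactive under 5 seconds on 3G, First Input Delay below 100 ms) lend themselves to an automated budget check. The function and metric keys below are an illustrative sketch, not part of the checklist itself:

```python
# Illustrative performance budget using the thresholds mentioned above
# (key names and units are assumptions for this sketch).
BUDGET = {"tti_s": 5.0, "fid_ms": 100}

def check_budget(measured, budget):
    """Return the metrics that exceed their budgeted threshold."""
    return {m: v for m, v in measured.items() if m in budget and v > budget[m]}

over = check_budget({"tti_s": 6.2, "fid_ms": 80}, BUDGET)
print(over)  # → {'tti_s': 6.2}: only TTI is over budget
```

Wired into CI, a non-empty result from such a check can fail the build and stop a regression before it ships.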
How to GROW your website: The fundamentals by Remmy Nweke
KEYWORDS
Yes, it may no longer be news to some of us when we hear "grow" or "growth"; those words will be recurring in this presentation. So, our keyword here is "Grow", which has as synonyms: develop, multiply, swell, enlarge, expand, and extend, in the generic sense of usage.
FUNDAMENTAL: a central or primary rule or principle on which something is based.
SEO Friendly Migrations - 'Tea-Time SEO' Series of Daily SEO Live Talks by Authoritas
Get practical advice from SEO experts: Joanna Lewis, Director of SEO at Eskimo; Kim Dewe, Head of SEO at Blue Array; and Kristina Azarenko, eCommerce & Technical SEO Consultant and founder of MarketingSyrup.
In this short ~20 minute talk they present advice on how to plan and execute a website redevelopment or website migration project successfully. This covers SEO tools, tips and pitfalls to avoid. These talks are offered free to the SEO community working from home during the coronavirus pandemic.
Watch a recording of the stream to go with these slides here:
https://www.youtube.com/watch?v=INCJi89feBI
Visit https://www.authoritas.com for more SEO advice and SEO tools and data to help you drive more organic traffic to your ecommerce stores.
Demystifying web performance tooling and metrics by Anna Migas
Web performance has been one of the most talked about web development topics in the recent years. Yet if you try to start your journey with the speed optimisations, you might find yourself in a pickle. With the tooling, you might feel overwhelmed—it looks complex and hard to comprehend. With the metrics: at first glance all of them seem similar, not to mention that they change over time and you cannot figure out which of them to take into account.
Boosting your conversion rate through web performance improvements by Alyss Noland
This document discusses ways to improve web performance and boost conversion rates. It begins by explaining how slow page loads can negatively impact businesses, costing Amazon $1.6 billion per year for every second of slowdown. The document then discusses various metrics that impact performance like page size, number of HTTP requests and JavaScript size. It provides tips for testing and improving performance, such as optimizing images, minifying files, leveraging caching and CDNs. The document stresses that web performance optimization is an ongoing process of testing, setting budgets and refactoring code over time.
Google Ad Yield Management (2016 Feb) by Acqua Media
This document discusses strategies for optimizing Google ad revenue. It provides examples of how the author's company, Acqua Media, helped publishers increase their earnings through analyzing platforms like AdX, DFP, AdSense and AdMob, setting revenue targets, optimizing ad rules and placements, acquiring preferred deals, and testing new ad positions. Case studies show how clients achieved page RPM lifts of 10-56% within months by applying these techniques. The document also covers topics like viewability, mobile page loading, and the effects of ad blocking.
This document provides a 31-step checklist for optimizing conversion and retention using analytics. It summarizes setting up analytics tools like Google Tag Manager, Google Analytics, Hotjar, and product analytics to track users across the AARRR funnel of Acquisition, Activation, Retention, Revenue, and Referrals. It involves auditing these tools, setting up events and goals to track key metrics, and identifying dropout points through quantitative research and website walkthroughs.
Why Measuring Page Load Is The Wrong Metric by New Relic
Performance matters. We know from many industry reports that there is a correlation between the time it takes to load a page and user activity.
Bounce rates, conversion rates, and the number of clicks in a session can be dramatically impacted by slow page loads. But, what if we are measuring page load incorrectly? How we measure site speed is not often the same as how a user perceives site speed. If we are using the wrong metrics for measurement, we risk spending cycles on optimization without realizing the user activity gains.
Join us and learn how New Relic Browser and Insights can help improve visibility into the metrics that matter.
https://www.youtube.com/watch?v=NCNAMGTj2ik
Conversion Optimization Framework to Build Sustainable and Repeat Growth by Tushar Purohit
The goal of this presentation on the Conversion Optimization Framework is to remove the guesswork from the conversion optimization process. It provides a comprehensive analysis for anyone interested in optimization, with a specific methodology for producing consistent results.
ALPS WG Update - IAB Ad Ops Summit, Fall 2009 by Eric Goldsmith
Working group status update at the Interactive Advertising Bureau's 2009 Ad Operations Summit in New York, Nov 16, 2009.
The Ad Load Performance Scoring (ALPS) working group, with membership from AOL, Yahoo, Microsoft, and Google, is developing a method for 'scoring' the load performance of ads that incorporates best-practices compliance as well as measured load speed.
Similar to Performance Budgets for the Real World by Tammy Everts (20)
Using ScyllaDB for Real-Time Write-Heavy Workloads by ScyllaDB
Keeping latencies low for highly concurrent, intensive data ingestion
ScyllaDB’s “sweet spot” is workloads over 50K operations per second that require predictably low (e.g., single-digit millisecond) latency. And its unique architecture makes it particularly valuable for the real-time write-heavy workloads such as those commonly found in IoT, logging systems, real-time analytics, and order processing.
Join ScyllaDB technical director Felipe Cardeneti Mendes and principal field engineer, Lubos Kosco to learn about:
- Common challenges that arise with real-time write-heavy workloads
- The tradeoffs teams face and tips for negotiating them
- ScyllaDB architectural elements that support real-time write-heavy workloads
- How your peers are using ScyllaDB with similar workloads
Unconventional Methods to Identify Bottlenecks in Low-Latency and High-Throughput... by ScyllaDB
In this presentation, we explore how standard profiling and monitoring methods may fall short in identifying bottlenecks in low-latency data ingestion workflows. Instead, we showcase the power of simple yet clever methods that can uncover hidden performance limitations.
Attendees will discover unconventional techniques, including clever logging, targeted instrumentation, and specialized metrics, to pinpoint bottlenecks accurately. Real-world use cases will be presented to demonstrate the effectiveness of these methods. By the end of the session, attendees will be equipped with alternative approaches to identify bottlenecks and optimize their low-latency data ingestion workflows for high throughput.
Mitigating the Impact of State Management in Cloud Stream Processing Systems by ScyllaDB
Stream processing is a crucial component of modern data infrastructure, but constructing an efficient and scalable stream processing system can be challenging. Decoupling compute and storage architecture has emerged as an effective solution to these challenges, but it can introduce high latency issues, especially when dealing with complex continuous queries that necessitate managing extra-large internal states.
In this talk, we focus on addressing the high latency issues associated with S3 storage in stream processing systems that employ a decoupled compute and storage architecture. We delve into the root causes of latency in this context and explore various techniques to minimize the impact of S3 latency on stream processing performance. Our proposed approach is to implement a tiered storage mechanism that leverages a blend of high-performance and low-cost storage tiers to reduce data movement between the compute and storage layers while maintaining efficient processing.
Throughout the talk, we will present experimental results that demonstrate the effectiveness of our approach in mitigating the impact of S3 latency on stream processing. By the end of the talk, attendees will have gained insights into how to optimize their stream processing systems for reduced latency and improved cost-efficiency.
Measuring the Impact of Network Latency at Twitter by ScyllaDB
Widya Salim and Victor Ma will outline the causal impact analysis, framework, and key learnings used to quantify the impact of reducing Twitter's network latency.
Architecting a High-Performance (Open Source) Distributed Message Queuing System by ScyllaDB
BlazingMQ is a new open source* distributed message queuing system developed at and published by Bloomberg. It provides highly-performant queues to applications for asynchronous, efficient, and reliable communication. This system has been used at scale at Bloomberg for eight years, where it moves terabytes of data and billions of messages across tens of thousands of queues in production every day.
BlazingMQ provides highly-available, fault-tolerant queues courtesy of replication based on the Raft consensus algorithm. In addition, it provides a rich set of enterprise message routing strategies, enabling users to implement a variety of scenarios for message processing.
Written in C++ from the ground up, BlazingMQ has been architected with low latency as one of its core requirements. This has resulted in some unique design and implementation choices at all levels of the system, such as its lock-free threading model, custom memory allocators, compact wire protocol, multi-hop network topology, and more.
This talk will provide an overview of BlazingMQ. We will then delve into the system’s core design principles, architecture, and implementation details in order to explore the crucial role they play in its performance and reliability.
*BlazingMQ will be released as open source between now and P99 (exact timing is still TBD)
Noise Canceling RUM by Tim Vereecke, Akamai
Noisy Real User Monitoring (RUM) data can ruin your P99!
We introduce a fresh concept called "Human Visible Navigations" (HVN) to tackle this risk; we focus on the experiences you actually care about when talking about the speed of our sites:
- Human: We exclude noise coming from bots and synthetic measurements.
- Visible: We remove any partial or fully hidden experiences. These tend to be very slow but users don’t see this slowness.
- Navigations: We ignore lightning fast back-forward navigations which usually have few optimisation opportunities.
Adopting Human Visible Navigations provides you with these key benefits:
- Fewer changes staying below the radar
- Fewer data fluctuations
- Fewer blindspots when finding bottlenecks
- Better correlation with business metrics
This is supported by plenty of real-world examples from the world's largest scale modeling site (6M monthly visits), in combination with aggregated data from the brand-new rumarchive.com (open source).
After attending this session; your P99 and other percentiles will become less noisy and easier to tune!
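The three HVN filters described above amount to a simple predicate applied to each RUM sample before computing percentiles. This is a sketch of the idea, not the speaker's code, and the field names are illustrative assumptions:

```python
# Hypothetical RUM samples with the attributes the HVN filters need.
samples = [
    {"ms": 900,  "bot": False, "visible": True,  "nav": "navigate"},
    {"ms": 80,   "bot": False, "visible": True,  "nav": "back_forward"},
    {"ms": 7000, "bot": False, "visible": False, "nav": "navigate"},
    {"ms": 1200, "bot": True,  "visible": True,  "nav": "navigate"},
    {"ms": 1100, "bot": False, "visible": True,  "nav": "navigate"},
]

def human_visible_navigations(samples):
    """Keep only human (not bot), visible, non-back-forward navigations."""
    return [s for s in samples
            if not s["bot"] and s["visible"] and s["nav"] == "navigate"]

hvn = human_visible_navigations(samples)
print(sorted(s["ms"] for s in hvn))  # → [900, 1100]
```

Note how the filter removes both the artificially fast back-forward sample (80 ms) and the artificially slow hidden one (7000 ms), which is exactly the noise that distorts a P99.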
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con... by ScyllaDB
In this session, Tanel introduces a new open source eBPF tool for efficiently sampling both on-CPU events and off-CPU events for every thread (task) in the OS. Linux standard performance tools (like perf) allow you to easily profile on-CPU threads doing work, but if we want to include the off-CPU timing and reasons for the full picture, things get complicated. Combining eBPF task state arrays with periodic sampling for profiling allows us to get both a system-level overview of where threads spend their time, even when blocked and sleeping, and allow us to drill down into individual thread level, to understand why.
Using Libtracecmd to Analyze Your Latency and Performance Troubles by ScyllaDB
Trying to figure out why your application is responding late can be difficult, especially if it is because of interference from the operating system. This talk will briefly go over how to write a C program that can analyze what in the Linux system is interfering with your application. It will use trace-cmd to enable kernel trace events as well as tracing lock functions, and it will then go over a quick tutorial on how to use libtracecmd to read the created trace.dat file to uncover the cause of interference to your application.
Reducing P99 Latencies with Generational ZGC by ScyllaDB
With the low-latency garbage collector ZGC, GC pause times are no longer a big problem in Java. With sub-millisecond pause times there are instead other things in the GC and JVM that can cause application threads to experience unexpected latencies. This talk will dig into a specific use where the GC pauses are no longer the cause of unexpected latencies and look at how adding generations to ZGC help lower the p99 application latencies.
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X by ScyllaDB
Linters are a type of database! They are a collection of lint rules — queries that look for rule violations to report — plus a way to execute those queries over a source code dataset.
This is a case study about using database ideas to build a linter that looks for breaking changes in Rust library APIs. Maintainability and performance are key: new Rust releases tend to have mutually-incompatible ways of representing API information, and we cannot afford to reimplement and optimize dozens of rules for each Rust version separately. Fortunately, databases don't require rewriting queries when the underlying storage format or query plan changes! This allows us to ship massive optimizations and support multiple Rust versions without making any changes to the queries that describe lint rules.
"Ship now, optimize later" can be a sustainable development practice after all — join us to see how!
How Netflix Builds High Performance Applications at Global Scale by ScyllaDB
We all want to build applications that are blazingly fast. We also want to scale them to users all over the world. Can the two happen together? Can users in the slowest of environments also get a fast experience? Learn how we do this at Netflix: how we understand every user's needs and preferences and build high performance applications that work for every user, every time.
Conquering Load Balancing: Experiences from ScyllaDB Drivers by ScyllaDB
Load balancing seems simple on the surface, with algorithms like round-robin, but the real world loves throwing curveballs. Join me in this session as we delve into the intricacies of load balancing within ScyllaDB Drivers. Discover firsthand experiences from our journey in driver development, where we employed the Power of Two Choices algorithm, optimized the implementation of load balancing in Rust Driver, mitigated cloud costs through zone-aware load balancing and combated the issue of overloading a particular core of ScyllaDB. Be prepared to delve into the practical and theoretical aspects of load balancing, gaining valuable insights along the way.
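The Power of Two Choices algorithm mentioned above can be sketched in a few lines: rather than tracking the globally least-loaded replica, sample two at random and route to the less loaded of the pair. This is an illustrative toy, not ScyllaDB driver code:

```python
import random

def pick_replica(loads, rng):
    """Power of two choices: sample two replicas, route to the less loaded."""
    a, b = rng.sample(range(len(loads)), 2)
    return a if loads[a] <= loads[b] else b

rng = random.Random(42)  # seeded for reproducibility
loads = [0] * 4
for _ in range(10_000):
    loads[pick_replica(loads, rng)] += 1

# With two choices, per-replica counts stay very close to the 2,500 mean;
# pure random assignment would show a noticeably larger spread.
print(loads)
```

The appeal of the technique is that it achieves near-optimal balance with only two load lookups per request, avoiding the coordination cost of a global least-loaded scan.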
Interaction Latency: Square's User-Centric Mobile Performance Metric by ScyllaDB
Mobile performance metrics often take inspiration from the backend world and measure resource usage (CPU usage, memory usage, etc) and workload durations (how long a piece of code takes to run).
However, mobile apps are used by humans and the app performance directly impacts their experience, so we should primarily track user-centric mobile performance metrics. Following the lead of tech giants, the mobile industry at large is now adopting the tracking of app launch time and smoothness (jank during motion).
At Square, our customers spend most of their time in the app long after it's launched, and they don't scroll much, so app launch time and smoothness aren't critical metrics. What should we track instead?
This talk will introduce you to Interaction Latency, a user-centric mobile performance metric inspired by the Web Vital metric "Interaction to Next Paint" (web.dev/inp). We'll go over why apps need to track this, how to properly implement its tracking (it's tricky!), how to aggregate this metric, and what thresholds you should target.
How to Avoid Learning the Linux-Kernel Memory Model by ScyllaDB
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a few simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
99.99% of Your Traces are Trash by Paige Cruz
Distributed tracing is still finding its footing in many organizations today, and one challenge to overcome is data volume: keeping 100% of your traces is expensive and unnecessary. Enter sampling. Head-based vs. tail-based: how do you decide? Let’s look at the design of Sifter and get familiar with why tail-based sampling is the way to enact a cost-effective tracing solution while actually increasing the system’s observability.
Square's Lessons Learned from Implementing a Key-Value Store with Raft by ScyllaDB
To put it simply, Raft uses replication to make a use case (e.g., a key-value store or an indexing system) more fault tolerant and thereby increase availability, despite server and network failures. Raft has been gaining ground due to its simplicity without sacrificing consistency and performance.
Although we'll cover Raft's building blocks, this is not about the Raft algorithm; it is more about the micro-lessons one can learn from building fault-tolerant, strongly consistent distributed systems using Raft. Things like majority agreement rule (quorum), write-ahead log, split votes & randomness to reduce contention, heartbeats, split-brain syndrome, snapshots & logs replay, client requests dedupe & idempotency, consistency guarantees (linearizability), leases & stale reads, batching & streaming, parallelizing persisting & broadcasting, version control, and more!
And believe it or not, you might be using some of these techniques without even realizing it!
This is inspired by Raft paper (raft.github.io), publications & courses on Raft, and an attempt to implement a key-value store using Raft as a side project.
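The majority agreement (quorum) rule mentioned above reduces to simple arithmetic. This generic sketch, not taken from any particular Raft implementation, shows quorum sizes and failure tolerance for common cluster sizes:

```python
def quorum(n):
    """Smallest majority of an n-node cluster."""
    return n // 2 + 1

def tolerated_failures(n):
    """A cluster of n = 2f + 1 nodes stays available through f failures."""
    return (n - 1) // 2

# (cluster size, quorum, tolerated failures)
print([(n, quorum(n), tolerated_failures(n)) for n in (3, 5, 7)])
# → [(3, 2, 1), (5, 3, 2), (7, 4, 3)]
```

This is also why even-sized clusters are uncommon: going from 3 to 4 nodes raises the quorum from 2 to 3 without tolerating any additional failures.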
A Deep Dive Into Concurrent React by Matheus Albuquerque
Writing fluid user interfaces becomes more and more challenging as the application complexity increases. In this talk, we’ll explore how proper scheduling improves your app’s experience by diving into some of the concurrent React features, understanding their rationales, and how they work under the hood.
The Latency Stack: Discovering Surprising Sources of Latency by ScyllaDB
Usually, when an API call is slow, we developers blame ourselves and our code. We held a lock too long, used a blocking operation, or built an inefficient query. But often, the simple picture of latency as “the time a server takes to process a message” hides a great deal of end-to-end complexity. Debugging tail latencies requires unpacking the abstractions that we normally ignore: virtualization, hidden queues, and network behavior.
In this talk, I’ll describe how developers can diagnose more sources of delay and failure by building a more realistic and broad understanding of networked services. I’ll give some real-world cases when high end-to-end latency or elevated failure rates occurred due to factors we ordinarily might not even measure. Some examples include TCP SYN retransmission; virtualization on the client; and surprising behavior from AWS load balancers. Unfortunately, many measurement techniques don’t cover anything but the portion most directly under developer control. But developers can do better by comparing multiple measurements, applying Little’s law, investing in eBPF probes, and paying attention to the network layer.
Understanding API performance to find and fix issues faster ultimately means understanding the entire stack: the client, your code, and the underlying infrastructure.
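The Little's law check suggested above is a quick sanity test: L = λW relates in-flight requests (L) to arrival rate (λ) and mean residence time (W). A worked example with illustrative numbers:

```python
def little_l(arrival_rate_per_s, mean_time_s):
    """Little's law: expected number of requests in flight, L = lambda * W."""
    return arrival_rate_per_s * mean_time_s

# 200 req/s at 50 ms mean residence time → about 10 requests in flight.
# If instrumentation reports far more in flight than L predicts, time is
# being spent in a queue you are not measuring.
print(little_l(200, 0.05))  # → 10.0
```

Comparing the predicted L against an observed in-flight count is one of the cheapest ways to detect the hidden queues this talk describes.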
Welcome to Cyberbiosecurity. Because regular cybersecurity wasn't complicated... by Snarky Security
How wonderful it is that in our modern age, every bit of our biological data can be digitized, stored, and potentially pilfered by cyber thieves! Isn't it just splendid to think that while scientists are busy pushing the boundaries of biotechnology, hackers could be plotting the next big bio-data heist? This delightful scenario is brought to you by the ever-expanding digital landscape of biology and biotechnology, where the integration of computer science, engineering, and data science transforms our understanding and manipulation of biological systems.
While the fusion of technology and biology offers immense benefits, it also necessitates a careful consideration of the ethical, security, and associated social implications. But let's be honest, in the grand scheme of things, what's a little risk compared to potential scientific achievements? After all, progress in biotechnology waits for no one, and we're just along for the ride in this thrilling, slightly terrifying, adventure.
So, as we continue to navigate this complex landscape, let's not forget the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. After all, what could possibly go wrong?
-------------------------
This document provides a comprehensive analysis of the security implications of biological data use. The analysis explores various aspects of biological data security, including the vulnerabilities associated with data access, the potential for misuse by state and non-state actors, and the implications for national and transnational security. Key aspects considered include the impact of technological advancements on data security, the role of international policies in data governance, and strategies for mitigating the risks associated with unauthorized data access.
This view offers valuable insights for security professionals, policymakers, and industry leaders across various sectors, highlighting the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. The analysis serves as a crucial resource for understanding the complex dynamics at the intersection of biotechnology and security, providing actionable recommendations to enhance biosecurity in a digital and interconnected world.
The evolving landscape of biology and biotechnology, significantly influenced by advancements in computer science, engineering, and data science, is reshaping our understanding and manipulation of biological systems. The integration of these disciplines has led to the development of fields such as computational biology and synthetic biology, which utilize computational power and engineering principles to solve complex biological problems and innovate new biotechnological applications. This interdisciplinary approach has not only accelerated research and development but also introduced new capabilities such as gene editing and biomanufacturing.
Keynote: Presentation on SASE Technology by Priyanka Aash
Secure Access Service Edge (SASE) solutions are revolutionizing enterprise networks by integrating SD-WAN with comprehensive security services. Traditionally, enterprises managed multiple point solutions for network and security needs, leading to complexity and resource-intensive operations. SASE, as defined by Gartner, consolidates these functions into a unified cloud-based service, offering SD-WAN capabilities alongside advanced security features like secure web gateways, CASB, and remote browser isolation. This convergence not only simplifies management but also enhances security posture and application performance across global networks and cloud environments. Discover how adopting SASE can streamline operations and fortify your enterprise's digital transformation strategy.
How UiPath Discovery Suite supports identification of Agentic Process Automation by DianaGray10
📚 Understand the basics of the newly persona-based LLM-powered Agentic Process Automation and discover how existing UiPath Discovery Suite products like Communication Mining, Process Mining, and Task Mining can be leveraged to identify APA candidates.
Topics Covered:
💡 Idea Behind APA: Explore the innovative concept of Agentic Process Automation and its significance in modern workflows.
🔄 How APA is Different from RPA: Learn the key differences between Agentic Process Automation and Robotic Process Automation.
🚀 Discover the Advantages of APA: Uncover the unique benefits of implementing APA in your organization.
🔍 Identifying APA Candidates with UiPath Discovery Products: See how UiPath's Communication Mining, Process Mining, and Task Mining tools can help pinpoint potential APA candidates.
🔮 Discussion on Expected Future Impacts: Engage in a discussion on the potential future impacts of APA on various industries and business processes.
Enhance your knowledge on the forefront of automation technology and stay ahead with Agentic Process Automation. 🧠💼✨
Speakers:
Arun Kumar Asokan, Delivery Director (US) @ qBotica and UiPath MVP
Naveen Chatlapalli, Solution Architect @ Ashling Partners and UiPath MVP
This PDF delves into the aspects of information security from a forensic perspective, focusing on privacy leaks. It provides insights into the methods and tools used in forensic investigations to uncover and mitigate privacy breaches in mobile and cloud environments.
UiPath Community Day Amsterdam: Code, Collaborate, Connect by UiPathCommunity
Welcome to our third live UiPath Community Day Amsterdam! Come join us for a half-day of networking and UiPath Platform deep-dives, for devs and non-devs alike, in the middle of summer ☀.
📕 Agenda:
12:30 Welcome Coffee/Light Lunch ☕
13:00 Event opening speech
Ebert Knol, Managing Partner, Tacstone Technology
Jonathan Smith, UiPath MVP, RPA Lead, Ciphix
Cristina Vidu, Senior Marketing Manager, UiPath Community EMEA
Dion Mes, Principal Sales Engineer, UiPath
13:15 ASML: RPA as Tactical Automation
Tactical robotic process automation for solving short-term challenges, while establishing standard and re-usable interfaces that fit IT's long-term goals and objectives.
Yannic Suurmeijer, System Architect, ASML
13:30 PostNL: an insight into RPA at PostNL
Showcasing the solutions our automations have provided, the challenges we’ve faced, and the best practices we’ve developed to support our logistics operations.
Leonard Renne, RPA Developer, PostNL
13:45 Break (30')
14:15 Breakout Sessions: Round 1
Modern Document Understanding in the cloud platform: AI-driven UiPath Document Understanding
Mike Bos, Senior Automation Developer, Tacstone Technology
Process Orchestration: scale up and have your Robots work in harmony
Jon Smith, UiPath MVP, RPA Lead, Ciphix
UiPath Integration Service: connect applications, leverage prebuilt connectors, and set up customer connectors
Johans Brink, CTO, MvR digital workforce
15:00 Breakout Sessions: Round 2
Automation, and GenAI: practical use cases for value generation
Thomas Janssen, UiPath MVP, Senior Automation Developer, Automation Heroes
Human in the Loop/Action Center
Dion Mes, Principal Sales Engineer @UiPath
Improving development with coded workflows
Idris Janszen, Technical Consultant, Ilionx
15:45 End remarks
16:00 Community fun games, sharing knowledge, drinks, and bites 🍻
Demystifying Neural Networks And Building Cybersecurity ApplicationsPriyanka Aash
In today's rapidly evolving technological landscape, Artificial Neural Networks (ANNs) have emerged as a cornerstone of artificial intelligence, revolutionizing various fields including cybersecurity. Inspired by the intricacies of the human brain, ANNs have a rich history and a complex structure that enables them to learn and make decisions. This blog aims to unravel the mysteries of neural networks, explore their mathematical foundations, and demonstrate their practical applications, particularly in building robust malware detection systems using Convolutional Neural Networks (CNNs).
TrustArc Webinar - Innovating with TRUSTe Responsible AI CertificationTrustArc
In a landmark year marked by significant AI advancements, it’s vital to prioritize transparency, accountability, and respect for privacy rights with your AI innovation.
Learn how to navigate the shifting AI landscape with our innovative solution TRUSTe Responsible AI Certification, the first AI certification designed for data protection and privacy. Crafted by a team with 10,000+ privacy certifications issued, this framework integrated industry standards and laws for responsible AI governance.
This webinar will review:
- How compliance can play a role in the development and deployment of AI systems
- How to model trust and transparency across products and services
- How to save time and work smarter in understanding regulatory obligations, including AI
- How to operationalize and deploy AI governance best practices in your organization
Generative AI technology is a fascinating field that focuses on creating comp...Nohoax Kanont
Generative AI technology is a fascinating field that focuses on creating computer models capable of generating new, original content. It leverages the power of large language models, neural networks, and machine learning to produce content that can mimic human creativity. This technology has seen a surge in innovation and adoption since the introduction of ChatGPT in 2022, leading to significant productivity benefits across various industries. With its ability to generate text, images, video, and audio, generative AI is transforming how we interact with technology and the types of tasks that can be automated.
Cracking AI Black Box - Strategies for Customer-centric Enterprise ExcellenceQuentin Reul
The democratization of Generative AI is ushering in a new era of innovation for enterprises. Discover how you can harness this powerful technology to deliver unparalleled customer value and securing a formidable competitive advantage in today's competitive market. In this session, you will learn how to:
- Identify high-impact customer needs with precision
- Harness the power of large language models to address specific customer needs effectively
- Implement AI responsibly to build trust and foster strong customer relationships
Whether you're at the early stages of your AI journey or looking to optimize existing initiatives, this session will provide you with actionable insights and strategies needed to leverage AI as a powerful catalyst for customer-driven enterprise success.
Choosing the Best Outlook OST to PST Converter: Key Features and Considerationswebbyacad software
When looking for a good software utility to convert Outlook OST files to PST format, it is important to find one that is easy to use and has useful features. WebbyAcad OST to PST Converter Tool is a great choice because it is simple to use for anyone, whether you are tech-savvy or not. It can smoothly change your files to PST while keeping all your data safe and secure. Plus, it can handle large amounts of data and convert multiple files at once, which can save you a lot of time. It even comes with 24*7 technical support assistance and a free trial, so you can try it out before making a decision. Whether you need to recover, move, or back up your data, Webbyacad OST to PST Converter is a reliable option that gives you all the support you need to manage your Outlook data effectively.
"Making .NET Application Even Faster", Sergey Teplyakov.pptxFwdays
In this talk we're going to explore performance improvement lifecycle, starting with setting the performance goals, using profilers to figure out the bottle necks, making a fix and validating that the fix works by benchmarking it. The talk will be useful for novice and seasoned .NET developers and architects interested in making their application fast and understanding how things work under the hood.
2. Tammy Everts (she/her)
CXO at SpeedCurve
■ Author of ‘Time Is Money: The Business
Value of Web Performance’ (O’Reilly)
■ Co-chair of performance.now()
■ Co-curator of WPOstats.com
■ Used to own Uber.com
4. 2009: Improved average load time from 6s to 1.2s → 7-12% increase in conversion rate + 25% increase in PVs
2010: Average load time degraded to 5s. User feedback: “I will not come back to this site again.”
2011: Re-focused on performance → 0.4% increase in conversion rate
5. 1. Constant feature development
2. Badly implemented third-parties
3. Waited too long to tackle problems
4. Relied on performance sprints
5. Stopped doing front-end performance measurement
6. No way to track regressions
10. “Fighting regressions took priority over optimizations.
The reason we decided this was because in the past,
when we had performance efforts, engineers who were
working on optimizations couldn’t really see progress
in our performance metrics, because there were
so many regressions happening at the same time.”
Michelle Vu, Pinterest
perfnow.nl/2018#michelle
11. What is a performance budget?
Which metrics should I focus on?
What should my budget thresholds be?
How can I stay on top of my budgets?
13. Thresholds YOU create for metrics
that are meaningful for YOUR site
Time-based • Start Render, Largest Contentful Paint
Quantity-based • Page size, image weight, Long Tasks
Score-based • Cumulative Layout Shift, Lighthouse scores
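The three budget types (time-based, quantity-based, score-based) can be sketched as a simple config plus a check function. This is a minimal illustration; the metric names and thresholds below are example values, not recommendations:

```javascript
// Hypothetical performance budget: thresholds YOU choose for YOUR site.
const budget = {
  // Time-based (milliseconds)
  startRender: 1500,
  largestContentfulPaint: 2500,
  // Quantity-based
  pageWeightKB: 1000,
  imageWeightKB: 300,
  longTasks: 5,
  // Score-based (unitless)
  cumulativeLayoutShift: 0.1,
};

// Return the list of metrics that are over budget.
function overBudget(measured, budget) {
  return Object.keys(budget).filter(
    (metric) => metric in measured && measured[metric] > budget[metric]
  );
}

// Example: one synthetic test run (only some metrics measured)
const run = { startRender: 1200, largestContentfulPaint: 3100, cumulativeLayoutShift: 0.05 };
console.log(overBudget(run, budget)); // → ["largestContentfulPaint"]
```

Metrics missing from a test run are simply skipped, so the same budget object can serve both synthetic and RUM checks.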
14. Monitoring tools
Synthetic (lab)
Mimics defined network & browser conditions
No installation required
Limited URLs
Limited test locations
Compare any sites
Detailed analysis & visuals
Real user monitoring (field)
Real network & browser conditions
Requires JavaScript installation
Large sample size (up to 100%)
Geographic spread
Only measure your own site
Correlation with other metrics (e.g., bounce rate)
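RUM's large sample size means you need a way to summarize thousands of measurements into one number per metric. A common choice (it's what Core Web Vitals assessment uses) is a high percentile such as p75. A minimal nearest-rank sketch:

```javascript
// Summarize a set of RUM measurements at a given percentile.
// Core Web Vitals, for example, are assessed at the 75th percentile.
function percentile(samples, p) {
  if (samples.length === 0) return undefined;
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative LCP samples (ms) from real users
const lcpSamples = [900, 1200, 1300, 1800, 2400, 2600, 3900, 5200];
console.log(percentile(lcpSamples, 75)); // → 2600
```

Averages hide the slow tail; a p75 (or p95) budget forces you to care about the users having a below-average experience.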
16. A good performance budget shows you…
❑ What your budget is
❑ When you go out of bounds
❑ How long you’re out of bounds
❑ When you’re back within budget
17. Budgets can be passive (e.g., charts you review), trigger alerts so you can investigate, or even break the build
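Those escalation levels can be sketched as a tiny decision function for a CI check. The 90% "approaching budget" warning threshold here is my own illustrative assumption, not something from the talk:

```javascript
// Decide what to do with a measured metric relative to its budget.
// warnRatio is an illustrative assumption: alert when within 90% of budget.
function budgetAction(measured, budget, warnRatio = 0.9) {
  if (measured > budget) return "break-build"; // hard violation
  if (measured >= budget * warnRatio) return "alert"; // getting close: investigate
  return "pass"; // chart it and move on
}

// Example: LCP budget of 2500 ms
console.log(budgetAction(1800, 2500)); // → "pass"
console.log(budgetAction(2400, 2500)); // → "alert"
console.log(budgetAction(3100, 2500)); // → "break-build"

// In CI you might exit non-zero to actually break the build:
if (budgetAction(1800, 2500) === "break-build") process.exit(1);
```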
26. Is the page loading?
Can I use it?
How does it feel?
27. The ideal UX metric…
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
❑ Has broad browser support
29. Time to First Byte (TTFB)
Backend time: the time from the start of the initial navigation until the first byte is received by the browser.
Available in synthetic & RUM.
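In the browser, TTFB falls out of the standard Navigation Timing (Level 2) entry: it is `responseStart` minus the navigation's `startTime`. A minimal sketch (the inline entry object is illustrative):

```javascript
// TTFB from a PerformanceNavigationTiming entry:
// time from navigation start to the first byte of the response.
function timeToFirstByte(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

// In a browser you would feed it the real entry:
//   const [nav] = performance.getEntriesByType("navigation");
//   console.log(timeToFirstByte(nav));

// Illustrative entry (startTime is 0 for the navigation entry)
const fakeNav = { startTime: 0, responseStart: 320 };
console.log(timeToFirstByte(fakeNav)); // → 320
```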
31. Time to First Byte (backend time): ✓ 4 of the 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
32. Start Render
The time from the start of navigation until the first non-white content is painted.
Available in synthetic & RUM.
36. Start Render: ✓ all 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
39. Largest Contentful Paint
The amount of time it takes for the largest visual element (image or video) to render.
Available in synthetic & RUM (Chromium browsers only).
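The browser reports a new `largest-contentful-paint` entry each time a larger element renders; the page's LCP is the last entry observed before the user interacts. A minimal sketch of that selection logic (the inline entries are illustrative):

```javascript
// The last "largest-contentful-paint" entry observed (before user
// interaction) is the page's LCP.
function largestContentfulPaint(entries) {
  if (entries.length === 0) return undefined;
  return entries[entries.length - 1].startTime;
}

// In a browser (Chromium only), entries come from a PerformanceObserver:
//   new PerformanceObserver((list) => {
//     console.log(largestContentfulPaint(list.getEntries()));
//   }).observe({ type: "largest-contentful-paint", buffered: true });

// Illustrative entries: a headline renders, then a larger hero image
const lcpEntries = [
  { startTime: 800, element: "h1" },
  { startTime: 2100, element: "img.hero" },
];
console.log(largestContentfulPaint(lcpEntries)); // → 2100
```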
42. Largest Contentful Paint: ✓ 4 of the 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
43. Last Painted Hero
When the last piece of critical content (hero image, first H1) is painted in the browser.
Available in synthetic only.
45. Last Painted Hero: ✓ 3 of the 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
47. Cumulative Layout Shift
An aggregate score that reflects how much page elements shift during rendering.
Available in synthetic & RUM (Chromium browsers only).
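Under the current Web Vitals definition, CLS isn't a raw sum of every shift: shifts (excluding those right after user input) are grouped into "session windows" that end after a 1-second gap or a 5-second total span, and CLS is the largest window's summed score. A minimal sketch of that grouping (the inline entries are illustrative):

```javascript
// CLS with session windows: group layout shifts without recent input
// into windows (max 5 s span, ended by a 1 s gap); CLS is the largest
// window's summed score.
function cumulativeLayoutShift(shiftEntries) {
  let maxScore = 0;
  let winScore = 0;
  let winStart = 0;
  let prevTime = -Infinity;
  for (const e of shiftEntries) {
    if (e.hadRecentInput) continue; // shifts caused by user input don't count
    if (e.startTime - prevTime > 1000 || e.startTime - winStart > 5000) {
      winScore = 0; // start a new session window
      winStart = e.startTime;
    }
    winScore += e.value;
    prevTime = e.startTime;
    maxScore = Math.max(maxScore, winScore);
  }
  return maxScore;
}

// Illustrative entries: two bursts of shifting, one input-driven shift
const shifts = [
  { startTime: 500, value: 0.05, hadRecentInput: false },
  { startTime: 900, value: 0.04, hadRecentInput: false },
  { startTime: 4000, value: 0.12, hadRecentInput: false }, // new window (gap > 1 s)
  { startTime: 4200, value: 0.02, hadRecentInput: true },  // ignored
];
console.log(cumulativeLayoutShift(shifts)); // → 0.12
```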
49. The relationship between CLS and user behavior isn’t consistent. Depending on the site:
Bounce rate gets worse as CLS degrades
Bounce rate improves as CLS degrades
Bounce rate stays the same as CLS degrades
50. Cumulative Layout Shift: ✓ 3 of the 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
51. Long Tasks
Any task that occupies the browser’s main thread for more than 50ms (usually long-running JavaScript).
Long Tasks don’t always block page rendering, but they can cause the page to feel janky.
Available in synthetic & RUM.
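Given a list of main-thread task durations (in the browser these come from a PerformanceObserver watching `longtask` entries), counting long tasks and summing the time beyond the 50 ms threshold (the Total Blocking Time idea) is straightforward. A minimal sketch with illustrative durations:

```javascript
// Long Tasks: main-thread tasks that run longer than 50 ms.
const LONG_TASK_THRESHOLD_MS = 50;

function longTasks(taskDurations) {
  return taskDurations.filter((d) => d > LONG_TASK_THRESHOLD_MS);
}

// Total Blocking Time-style sum: only the portion beyond 50 ms counts.
function totalBlockingTime(taskDurations) {
  return longTasks(taskDurations).reduce((sum, d) => sum + (d - LONG_TASK_THRESHOLD_MS), 0);
}

// Illustrative main-thread task durations (ms)
const tasks = [30, 120, 45, 250];
console.log(longTasks(tasks).length); // → 2
console.log(totalBlockingTime(tasks)); // → 270
```

A count of Long Tasks makes a good quantity-based budget; blocking time makes a good time-based one.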
55. Long Tasks Time: ✓ 4 of the 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
56. Interaction to Next Paint
Measures a page’s responsiveness to individual user interactions. Observes interaction latency throughout the page lifetime and reports a single value that all (or nearly all) interactions are below.
Available in RUM only.
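Concretely, INP picks one of the worst interaction latencies on the page: the single worst for pages with up to 50 interactions, and roughly the 98th percentile beyond that (one outlier is skipped per 50 interactions). A minimal sketch of that selection:

```javascript
// INP: the worst interaction latency, skipping one outlier per
// 50 interactions (so ~p98 on interaction-heavy pages).
function interactionToNextPaint(latencies) {
  if (latencies.length === 0) return undefined;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const skip = Math.min(Math.floor(latencies.length / 50), sorted.length - 1);
  return sorted[skip];
}

// Illustrative interaction latencies (ms) for a page with few interactions
console.log(interactionToNextPaint([80, 40, 350, 120])); // → 350
```

Because it needs real interactions, this is inherently a RUM metric; synthetic tests rarely interact with the page at all.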
58. Interaction to Next Paint: ✓ 3 of the 5 criteria
❑ Tracks the most important content
❑ Is accessible out of the box
❑ Has broad browser support
❑ Is available in synthetic and RUM
❑ Can be correlated to UX and business metrics
59. Scorecard (criteria: Meaningful content · Usable out of the box · Broad browser support · Synthetic · RUM · Correlates to business/UX)
Time to First Byte ★ ★ ★ ★ ☆
Start Render ☆ ★ ★ ★ ★ ★
Largest Contentful Paint ☆ ★ ★ ★ ★
Last Painted Hero ☆ ★ ★ ★
Cumulative Layout Shift ☆ ☆ ★ ★
Long Tasks ★ ★ ★ ★ ☆
Interaction to Next Paint ★ ★ ★ ★
63. Goals are aspirational
How fast do I want to be eventually?
Budgets are pragmatic
How can I keep my site from getting slower
while I work toward my goals?
79. Everyone* who touches a page should
understand the performance impact
of their choices
*Yes, this includes marketing people
80. For example…
If your marketing team is responsible for adding
and maintaining third-party tags, they should:
❑ Have a basic understanding of the metrics (such as Long Tasks Time)
❑ Collaborate on setting the budget
❑ Receive alerts when the budget is violated
❑ Participate (or at least have visibility) in identifying and fixing the issue
81. 1. Start small (even a single metric will do!)
2. Visually validate your metrics (filmstrips, videos)
3. Validate your metrics some more (UX, business)
4. Get buy-in from different stakeholders
5. Focus on the pages that matter most
6. Revisit your budgets regularly (every 2-4 weeks)
7. Remember that metrics are always evolving
8. Never stop measuring
83. A Complete Guide to Performance Budgets
speedcurve.com/blog/performance-budgets/
Setting a Performance Budget
timkadlec.com/2013/01/setting-a-performance-budget/
Performance Budgets, Pragmatically
csswizardry.com/2020/01/performance-budgets-pragmatically/
Web Vitals
web.dev/vitals/
Farewell FID… hello Interaction to Next Paint
speedcurve.com/blog/interaction-to-next-paint-core-web-vitals/
Cumulative Layout Shift: What it measures (and what it doesn’t)
speedcurve.com/blog/google-cumulative-layout-shift/