House Committee Postpones Markup Amid New Privacy Bill Updates

On June 27, 2024, the U.S. House of Representatives cancelled the House Energy and Commerce Committee’s markup of the American Privacy Rights Act (“APRA” or “Bill”) scheduled for that day, reportedly with little notice. There has been no indication of when the markup will be rescheduled; however, House Energy and Commerce Committee Chairwoman Cathy McMorris Rodgers issued a statement reiterating her support for the legislation.

On June 20, 2024, the House posted a third version of the discussion draft of the APRA. On June 25, 2024, two days before the scheduled markup session, Committee members introduced the APRA as a bill, H.R. 8818. Each version featured several key changes from earlier drafts, which are outlined collectively below.

Notable changes in H.R. 8818 include the removal of two key sections:

  • “Civil Rights and Algorithms,” which required entities to conduct covered algorithm impact assessments when algorithms posed a consequential risk of harm to individuals or groups; and
  • “Consequential Decision Opt-Out,” which allowed individuals to opt out of being subjected to covered algorithms.

Additional changes include the following:

  • The Bill introduces new definitions, such as “coarse geolocation information” and “online activity profile,” the latter of which refines a category of sensitive data. “Neural data” and “information that reveals the status of an individual as a member of the Armed Forces” are added as new categories of sensitive data. The Bill also modifies the definitions of “contextual advertising” and “first-party advertising.”
  • The data minimization section includes a number of changes, such as the addition of “conduct[ing] medical research” in compliance with applicable federal law as a new permitted purpose. The Bill also limits the ability to rely on permitted purposes in processing sensitive covered data, biometric and genetic information.
  • The Bill now allows not only covered entities (excluding data brokers or large data holders), but also service providers (that are not large data holders) to apply for the Federal Trade Commission-approved compliance guideline mechanism.
  • Protections for covered minors now include a prohibition on first-party advertising (in addition to targeted advertising) if the covered entity knows the individual is a minor, with limited exceptions acknowledged by the Bill. It also restricts the transfer of a minor’s covered data to third parties.
  • The Bill adds another preemption clause, clarifying that APRA would preempt any state law providing protections for children or teens to the extent such laws conflict with the Bill, but does not prohibit states from enacting laws, rules or regulations that offer greater protection to children or teens than the APRA.

For additional information about the changes, please refer to the unofficial redline comparison of all APRA versions published by the IAPP.

The Double-Edged Impact of AI Compliance Algorithms on Whistleblowing

As the implementation of Artificial Intelligence (AI) compliance and fraud detection algorithms within corporations and financial institutions continues to grow, it is crucial to consider the technology’s twofold effect.

It’s a classic double-edged technology: in the right hands it can help detect fraud and bolster compliance, but in the wrong hands it can snuff out would-be whistleblowers and weaken accountability mechanisms. Employees should assume it is already being used in a wide range of ways.

Algorithms are already pervasive in our legal and governmental systems: the Securities and Exchange Commission, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.

Experts foresee two major downsides to the implementation of compliance algorithms: institutions using them to avoid culpability and to track whistleblowers. AI can uncover fraud but cannot guarantee the proper reporting of it, and the same technology can be turned against employees to monitor for and detect signs of whistleblowing.

Strengths of AI Compliance Systems:

AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to quickly and efficiently spot misconduct that would otherwise remain undetected.

AI compliance algorithms are promised to operate as follows (a simplified illustration follows the list):

  • Real-time Detection: AI can analyze vast amounts of data, including financial transactions, communication logs, and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
  • Pattern Recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns, and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions, or suspicious interactions.
  • Efficiency and Automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.
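
To make the real-time detection and pattern-recognition claims above concrete, here is a minimal sketch of the kind of statistical outlier flagging such tools build on. It is illustrative only: the data, the median/absolute-deviation score, and the threshold are assumptions for the example, not a description of any vendor’s product.

```python
from statistics import median

def flag_anomalies(amounts, threshold=5.0):
    """Flag amounts that deviate sharply from an account's typical activity.

    Uses a median/absolute-deviation score so one large outlier does not
    mask itself by inflating the baseline. Purely illustrative.
    """
    centre = median(amounts)
    deviations = [abs(a - centre) for a in amounts]
    mad = median(deviations) or 1.0  # avoid division by zero on flat histories
    return [(i, a) for i, a in enumerate(amounts)
            if abs(a - centre) / mad > threshold]

# Example: a routine payment history with one outsized transfer.
history = [120, 95, 110, 130, 105, 98, 10_000, 115]
print(flag_anomalies(history))  # -> [(6, 10000)]
```

Commercial systems layer far richer features (counterparties, timing, communications metadata) and machine-learned models on top of this basic idea, but the core mechanic of baselining and flagging deviations is the same.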

Yuktesh Kashyap, associate vice president of data science at Sigmoid, explains on TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management… AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”

Due Diligence and Human Oversight

Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto LLP, argues that AI compliance algorithms will be an ineffective tool that allows institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and evade enforcement action by characterizing them as due diligence.

“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. Government should not allow these algorithms to be used as a regulatory benchmark.

Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”

While AI can be used as a powerful tool for identifying fraud, there is still no mechanism for it to contact authorities with its discoveries. Compliance personnel are still required to blow the whistle, consistent with society’s standard of due process. These algorithms should be used in conjunction with human judgment to determine compliance or the lack thereof. Due process is needed so that individuals can understand the reasoning behind algorithmic determinations.

The Double-Edged Sword

Darrell West, Senior Fellow at the Brookings Institution’s Center for Technology Innovation and holder of the Douglas Dillon Chair in Governmental Studies, warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.

Nowadays, most office jobs (whether remote or in person) are conducted largely online. Employees are required to use company computers and networks to do their jobs, and the data each employee generates passes through those devices and networks. This means your privacy rights at work are questionable.

Because of this, whistleblowing will get much harder – organizations can use the very technology they implemented to catch fraud to instead catch whistleblowers. They can monitor employees through the capabilities built into everyday workplace technology: cameras, emails, keystroke detectors, online activity logs, download histories, and more. West urges people to operate under the assumption that employers are monitoring their online activity.

These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.

West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”

If a company defines a whistleblower as a problem, it can monitor this very information and look for keywords that would indicate somebody is engaging in whistleblowing.

With AI, companies can monitor specific employees they might find problematic (such as a whistleblower) and all the information those employees produce, including the keywords that might indicate fraud. Creators of these algorithms promise that their products will soon be able to detect emotion and sentiment as well.

AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer those patterns to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.

“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way and makes it difficult to transmit confidential information. A whistleblower must, at some point, download information. Since you will be doing that on a company network, and that is easily detected these days.”

What becomes of the whistleblower, though, depends on whether the compliance officers operate in support of the company or the public interest – either way, they will have an extraordinary amount of information about both the company and the whistleblower.

Risks for whistleblowers have gone up as AI has evolved because it is harder for them to collect and report information on fraud and compliance without being discovered by the organization.

West describes how organizations do not have a choice whether or not to use AI anymore: “All of the major companies are building it into their products. Google, Microsoft, Apple, and so on. A company does not even have to decide to use it: it is already being used. It’s a question of whether they avail themselves of the results of what’s already in their programs.”

“There probably are many companies that are not set up to use all the information that is at their disposal because it does take a little bit of expertise to understand data analytics. But this is just a short-term barrier, like organizations are going to solve that problem quickly.”

West recommends that organizations be far more transparent about their use of these tools. They should inform their employees what kind of information they are collecting, how they are monitoring employees, and what kind of software they use. Are they using detection software of any sort? Are they monitoring keystrokes?

Employees should also want to know how long information is being stored. Organizations might legitimately use this technology for fraud detection, which may be a good argument for collecting information, but it does not mean they should keep that information for five years. Once they have used the information and determined whether employees are committing fraud, there is no reason to keep it. Companies are largely not transparent about how long this data is stored and what is done with it once it has been used.

West believes that currently, most companies are not actually informing employees of how their information is being kept and how the new digital tools are being utilized.

The Importance of Whistleblower Programs:

The ability of AI algorithms to track whistleblowers poses a real risk to regulatory compliance given the massive importance of whistleblower programs in the United States’ enforcement of corporate crime.

The whistleblower programs at the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) respond to individuals who voluntarily report original information about fraud or misconduct.

If a tip leads to a successful enforcement action, the whistleblowers are entitled to 10-30% of the recovered funds. These programs have created clear anti-retaliation protections and strong financial incentives for reporting securities and commodities fraud.

Established in 2010 under the Dodd-Frank Act, these programs have been integral to enforcement. The SEC reports that whistleblower tips have led to over $6 billion in sanctions while the CFTC states that almost a third of its investigations stem from whistleblower disclosures.

Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. This ensures that detected fraud is not only identified, but also reported and addressed, protecting taxpayer money, and promoting ethical business practices.

If AI algorithms are used to track down whistleblowers, their implementation will hinder these programs. Companies will undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect in which potential whistleblowers do not act out of fear of detection.

Because these AI-driven compliance systems are already being employed in our institutions, experts believe they must be subject to independent oversight for transparency’s sake. The software must also be designed to adhere to due process standards.


Five Compliance Best Practices for … Conducting a Risk Assessment

As an accompaniment to our biweekly series on “What Every Multinational Should Know About” various international trade, enforcement, and compliance topics, we are introducing a second series of quick-hit pieces on compliance best practices. Give us two minutes, and we will give you five suggested compliance best practices that will benefit your international regulatory compliance program.

Conducting an international risk assessment is crucial for identifying and mitigating potential risks associated with conducting business operations in foreign countries and complying with the expansive application of U.S. law. Because compliance is essentially an exercise in identifying, mitigating, and managing risk, the starting point for any international compliance program is to conduct a risk assessment. If your company has not done one within the last two years, then your organization probably should be putting one in motion.

Here are five compliance checks that are important to consider when conducting a risk assessment:

  1. Understand Business Operations: A good starting point is to gain a thorough understanding of the organization’s business operations, including products, services, markets, supply chains, distribution channels, and key stakeholders. You should pay special attention to new risk areas, including newly acquired companies and divisions, expansions into new countries, and new distribution patterns. Identifying the business profile of the organization, and how it raises systemic risks, is the starting point of developing the risk profile of the company.
  2. Assess Country- and Industry-Specific Risk Factors: Analyze the political, economic, legal, and regulatory landscape of each country where the organization operates or plans to operate. Consider factors such as political stability, corruption levels, regulatory environment, and cultural differences. You should also understand which countries raise indirect risks, such as for the transshipment of goods to sanctioned countries. You also should evaluate industry-specific risks and trends that may impact your company’s risk profile, such as the history of recent enforcement actions.
  3. Gather Risk-Related Data and Information: You should gather relevant data and information from internal and external sources to inform the risk-assessment process. Relevant examples include internal documentation, industry publications, reports of recent enforcement actions, and areas where government regulators are stressing compliance, such as the recent focus on supply chain factors. Use risk-assessment tools and methodologies to systematically evaluate and prioritize risks, such as risk matrices, risk heat maps, scenario analysis, and probability-impact assessments; a simple scoring sketch follows this list. (The Foley anticorruption, economic sanctions, and forced labor heat maps are found here.)
  4. Engage Stakeholders: Engage key stakeholders throughout the risk-assessment process to gather insights, perspectives, and feedback. Consult with local employees and business partners to gain feedback on compliance issues that are likely to arise while also seeking their aid in disseminating the eventual compliance dictates, internal controls, and other compliance measures that your organization ends up implementing or updating.
  5. Document Findings and Develop Risk-Mitigation Strategies: Document the findings of the risk assessment, including identified risks, their potential impact and likelihood, and recommended mitigation strategies. Ensure that documentation is clear, concise, and actionable. Use the documented findings to develop risk-mitigation strategies and action plans to address identified risks effectively while prioritizing mitigation efforts based on risk severity, urgency, and feasibility of implementation.
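
As a concrete illustration of the probability-impact assessments mentioned in point 3 above, the sketch below scores each identified risk as likelihood times impact and buckets the result for a heat map. The risk names, the 1-5 scales, and the cut-offs are illustrative assumptions, not a prescribed methodology.

```python
# Minimal probability-impact scoring sketch (illustrative scales and cut-offs).
RISKS = {
    "Third-party payments in high-corruption markets": (4, 5),  # (likelihood, impact), each 1-5
    "Transshipment to sanctioned destinations":        (2, 5),
    "Forced labor exposure in tier-2 suppliers":       (3, 4),
    "Export classification errors":                    (3, 2),
}

def rating(likelihood: int, impact: int) -> str:
    """Bucket a likelihood-times-impact score into heat-map bands."""
    score = likelihood * impact
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

# Print risks from highest to lowest score, as a simple text heat map.
for risk, (l, i) in sorted(RISKS.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{rating(l, i):6} ({l * i:2})  {risk}")
```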

Most importantly, you should recognize that assessing and addressing risk is an ongoing process. You should ensure your organization has established processes for the ongoing monitoring and review of risks to track changes in the risk landscape and evaluate the effectiveness of mitigation measures. Further, most multinational organizations should update their risk assessment at least once every two years to reflect evolving risks and business conditions as well as changing regulations and regulator enforcement priorities.

Five Data Quality Nightmares That Haunt Marketers and How to Avoid Them

In this spooky season of vampires, witches and scary clowns, we’d like to add one more to the mix – data quality nightmares – which can be more frightful than a marathon of Freddy Krueger movies to some of us.

We need data about our clients and prospects in order to create strategic programs that can lead to new business and increased visibility, but maintaining that data on an ongoing basis can quickly turn into a nightmare without the right resources.

Having good quality data is important for success in so many areas of your organization, including:

  • Communicating effectively with core constituencies
  • Successfully planning and executing events
  • Segmenting your target markets, clients or customers
  • Providing superior customer service
  • Understanding the needs of clients or customers
  • Effectively developing new business
  • Improving delivery and reducing costs of postal mailings

The reality is that your data will never be perfect, but there are ways you can address and improve it. The longer you wait to improve your data management, the scarier it will become. Here are some of the most common data quality nightmares we see and how to avoid them:

Data Quality Nightmare 1: Duplicate data

Is your CRM a graveyard for thousands of duplicate company and individual contacts? Data comes from all directions, so it’s important to ensure that data isn’t being duplicated. Dupes make it difficult to coordinate efforts and activities. Duplicate data occurs when customer information appears more than once in the database, or multiple variations of the same individual appear.

Duplicate data can also damage your brand image. It is unlikely that a contact who receives the same information twice will be happy about it. This is an easy way to frustrate customers and prospects, and it can make your business appear disorganized.
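
As a simple illustration of how duplicate contacts can be detected, the sketch below groups records that share a normalised email address. It assumes contacts are plain dictionaries; real CRM de-duplication would also compare names, companies, and postal addresses with fuzzy matching and merge rules.

```python
from collections import defaultdict

def find_duplicates(contacts):
    """Group contacts that share a normalised email address.

    Illustrative only: production de-duplication would also apply fuzzy
    matching on names and addresses before merging records.
    """
    groups = defaultdict(list)
    for contact in contacts:
        key = contact.get("email", "").strip().lower()
        if key:
            groups[key].append(contact)
    return {email: recs for email, recs in groups.items() if len(recs) > 1}

contacts = [
    {"name": "Jane Smith",   "email": "JSmith@example.com"},
    {"name": "Jane Smith",   "email": "jsmith@example.com "},
    {"name": "Robert Jones", "email": "rjones@example.com"},
]
print(find_duplicates(contacts))  # the two "Jane Smith" records are grouped
```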

Data Quality Nightmare 2: Missing or incomplete data

Are your contact details ‘ghosting you’? Without good data you can’t target or segment, and your communications and invitations won’t reach the right audiences.

Similar to inaccurate data, incomplete data can also have a negative impact on your business performance.

One way that organizations can help control this data quality nightmare is by making certain form fields a required entry. That way, data entries will be more consistent and complete.

Data Quality Nightmare 3: Incorrect or inconsistent data

Does incorrect or inconsistent data give you nightmares? Bad CRM data leads to missed opportunities for new customers, and it could create issues for your sales cycle. There is almost no point in engaging with contacts in your database if the information is incorrect.

There are multiple ways to encourage good data habits, depending on your system and method of contact entry. If your firm relies on manual data entry, implement a firmwide Data Standards Guide to inform users how data should be entered (e.g., does your firm spell out or abbreviate job titles?). It can also be helpful to use system validation rules wherever possible to require certain information in new records such as last name, city and email address to ensure your contacts are relevant.
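
As a rough illustration of the kind of validation rule described above, the sketch below rejects new contact records that lack a last name, city or properly formatted email address. The required fields and the email pattern are illustrative assumptions rather than any particular CRM’s configuration.

```python
import re

REQUIRED_FIELDS = ("last_name", "city", "email")           # illustrative rule set
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simple format check

def validate_contact(record: dict) -> list[str]:
    """Return a list of problems with a new contact record (empty = acceptable)."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS if not record.get(field)]
    email = record.get("email", "")
    if email and not EMAIL_PATTERN.match(email):
        problems.append("email is not in a valid format")
    return problems

print(validate_contact({"last_name": "Smith", "email": "jsmith@example"}))
# -> ['missing city', 'email is not in a valid format']
```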

Data Quality Nightmare 4: Too much data

Are you in the ‘zombie zone’ trying blindly to figure out what to do with too much data and/or disparate data from disconnected systems?

Having too much data can be overwhelming – and unnecessary. It’s important to set parameters on what information you truly need about your clients and prospects, and then maintain only that information going forward. This will streamline the process and make everyone’s jobs easier by avoiding data quality nightmares.

Data Quality Nightmare 5: Lack of data quality resources

Does your team run screaming from data quality projects leaving you with a data disaster?

To encourage ongoing system adoption and utilization, data quality and maintenance must be top priorities. Resources must be dedicated – including time, money and people. Processes and procedures need to be put in place to maintain ongoing quality. Most importantly, training and communication are essential to ensure that end users don’t create unnecessary duplicates or introduce more bad data into the system.

Data Quality Doesn’t Have to Be Scary

While it’s easy to become scared by nightmare data, it’s important to put it in perspective. Focus on discrete data sets and projects that yield real ROI, such as:

  • Start with your most relevant records like current clients. Begin cleaning your top 100 to 500 along with associated key contacts.
  • Review frequently used lists to ensure your communications and invitations are reaching the right recipients.
  • Vet bounced emails after each campaign, or better yet, regularly run lists through an automated data process to identify bad emails before a campaign to ensure that information actually reaches your targets in a timely manner.
  • Tackle time-sensitive one-off projects. For instance, an upcoming event often provides a good opportunity to get users engaged in cleanup efforts, particularly if the event is important to them.

It’s also important to remember that because data degrades so rapidly, data cleaning can’t be a one-time initiative. Once your team begins regularly maintaining your data, the cleanup will get easier over time. And remember, because data cleaning never really ends, the good news is that this means you have forever to get better at it.

© Copyright 2022 CLIENTSFirst Consulting

The Do’s and Don’ts of Data Cleaning – Don’t Drown in Bad Data

Bad CRM data can compound exponentially, impacting marketing and business development. It’s essential to understand the scope of your data problems and follow a plan for regular data cleaning.

Have you ever heard the saying, “No man ever steps into the same river twice”? Because a river’s water is constantly flowing and changing, the water you step in today will be different from yesterday. The same is true for the data in your CRM system: people are constantly changing roles, relocating, retiring; companies are opening, closing, moving and merging.

On top of that, new data isn’t always entered correctly. As a result, a database with clean, correct information today will not necessarily be accurate tomorrow. Over time, this bad data can compound exponentially, resulting in ineffective marketing, events and communication campaigns because as your data degrades, you reach fewer members of your target audience.

For professional services firms, poor data quality in your CRM system can also translate into a decline in system adoption. Once your professionals see bad data, they won’t trust the system as a whole and ultimately may outright refuse to use it. This is why we stress the importance of ongoing data cleaning.

Data Cleaning Do’s and Don’ts

Simply put, data cleaning involves identifying incorrect, incomplete and/or dated data in your systems and correcting and enhancing it. If you have a large database with thousands, or hundreds of thousands, of records, the data quality process can seem daunting and overwhelming.

While there’s no magic bullet or quick fix for poor data quality, ignoring data problems until there’s a crisis is not a strategy. Good data quality requires ongoing effort that never ends. The good news is that this means you have forever to get better at it. So, start now. Begin by assessing the scope of your data quality issues. Then, because it’s not always cost-effective or even possible to clean all your data, start by focusing on the highest priority projects.

Identify and Prioritize Your Most Important Data

All contact records are not created equal. For instance, client data is typically more important than non-client data. Additionally, individuals who have recently subscribed to your communications or attended an event are more important than those who last interacted with your firm years ago. Whatever segmenting scenario you select, it’s important to divide your contact data into manageable pieces, which makes the process less daunting and allows you to better measure progress.

Eliminate Stagnant Records

Related to prioritizing your data, don’t be hesitant about removing records that have been inactive for an extended period. Search your system for contacts that have not been updated for a few years, are not related to or known by any of your professionals, are not clients or alumni, and have not opened a communication or invitation in two to three years. Chances are good these records are not only outdated but also may not be worth the resources it would take to update them. Identify these records and consider removing them from the system. Less mess in your database makes cleanup a bit more manageable.

Your Plan Is Your Life Preserver

Once you’ve prioritized subsets or segments of contacts, identifying and prioritizing your most common data errors can help you decide on the best way to tackle ongoing data cleaning. For example, if you have an important email that needs to be sent to clients, you need to focus on email addresses. Identify records that don’t have an email address, have incorrectly formatted email addresses or have bounced recently.

In addition, if there are contacts you haven’t sent a communication or invitation to for an extended period of time, it’s entirely possible that their email addresses are no longer valid. It’s important to regularly test emails on your lists because not doing so can cause you to be blacklisted by anti-spam entities or have your account blocked by your eMarketing provider.

Initial Cleaning Cycle

The best place to start your data cleaning cycle is with a contact and list verification and cleansing service such as TrueDQ. This service will evaluate your list data, identify potentially harmful “honeypot” email addresses and even automatically update many of your contacts with current, complete contact information. The data can then also be enhanced with additional missing information, such as industries and locations, to help with targeting and segmenting.

Rinse and Repeat

When one segment or list has been cleaned, move on to the next one – bearing in mind that what’s important on the next list may be different from the last one. For example, maybe you need to send a hard copy postal mailing, so it will be important to ensure the accuracy of physical mailing addresses rather than email addresses.

Bounces and Returns

One of the most common data quality failures at law and other professional services firms is ignoring bounced emails and returned hard copy mailings. Bounces and returns are real-time indicators that can help you keep on top of your data quality. Researching and correcting them is important because sometimes they involve important former clients who could potentially hire the firm again at their new company.

Returned hard mail will often include the forwarding address of the recipient, which should be corrected in your CRM. For emails, use a central email address to collect automatic email replies, since these frequently tell you when a recipient no longer works at an organization.

Ideally, data stewards should regularly review all bounces to take the onus off the professionals. However, it can also be helpful to generate reports on bounced communications and circulate them to professionals or their assistants who may be able to provide updated information – or will at least appreciate knowing which of their contacts have moved on or changed roles.

Finally, if your eMarketing and/or CRM system has a process for automatically isolating bounced records, be sure you have a reciprocal process that automatically reinstates bounced records when the email field is updated.
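
The reciprocal process described above can be as simple as the following sketch: a contact is suppressed when its email hard-bounces and automatically reinstated when the email field is updated. The field names are illustrative assumptions, not a specific CRM’s schema.

```python
def record_bounce(contact: dict) -> None:
    """Suppress a contact from future sends after a hard bounce."""
    contact["email_suppressed"] = True
    contact["bounced_email"] = contact.get("email")

def update_email(contact: dict, new_email: str) -> None:
    """Update the email field and reinstate the contact if it was suppressed."""
    contact["email"] = new_email
    if contact.get("email_suppressed") and new_email != contact.get("bounced_email"):
        contact["email_suppressed"] = False  # the reciprocal reinstatement step

contact = {"name": "Jane Smith", "email": "jsmith@oldfirm.com"}
record_bounce(contact)
update_email(contact, "jane.smith@newfirm.com")
print(contact["email_suppressed"])  # -> False
```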

Prevent Invalid Data

There are multiple ways to encourage good data habits, depending on your system and method of contact entry. If your firm relies on manual data entry, implement a firmwide Data Standards Guide to inform users how data should be entered (e.g., does your firm spell out or abbreviate job titles?). It can also be helpful to use system validation rules wherever possible to require certain information in new records such as last name, city and email address to ensure your contacts are relevant.

Finally, regularly review newly added records for consistency and completeness. This process can reveal issues such as users who may require additional training on contact input best practices. It can also help to catch spam or other potentially dangerous entries that can sometimes flow into your database from online forms that are filled out by bots.

Never, Ever Stop

Just as rivers keep flowing, so does the data in your CRM system – and the data will always need cleaning to ensure that it is fresh. While this may feel like a relentless and burdensome task, never stop – just go with the flow –  because when you’re not regularly cleaning the data, your CRM “river” can become stagnant, and the more polluted it becomes, the longer the eventual cleanup will take.

© Copyright 2022 CLIENTSFirst Consulting

A Rule 37 Refresher – As Applied to a Ransomware Attack

Federal Rule of Civil Procedure 37(e) (“Rule 37”) was completely rewritten in the 2015 amendments. Before the 2015 amendments, the standard was that a party generally could not be sanctioned for data loss that resulted from the routine, good-faith operation of its systems. That rule didn’t really capture the reality of all of the potential scenarios related to data issues, nor did it provide the requisite guidance to attorneys and parties.

The new rule added a dimension of reasonableness to preservation and a roadmap for analysis. The first guidepost is whether the information should have been preserved; this prong is based upon the common law duty to preserve when litigation is likely. The next guidepost is whether the data loss resulted from a failure to take reasonable steps to preserve. The final guidepost is whether the lost data can be restored or replaced through additional discovery. If there is data that should have been preserved, that was lost because of a failure to preserve, and that cannot be replicated, then the court has two additional decisions to make: (1) was there prejudice to another party from the loss, or (2) was there an intent to deprive another party of the information. If the former, the court may only impose measures “no greater than necessary” to cure the prejudice. If the latter, the court may take a variety of extreme measures, including dismissal of the action. The rule thus created an important distinction between negligence and intent.

So how does a ransomware attack fit into the new analytical framework? A Special Master in MasterObjects, Inc. v. Amazon.com (U.S. Dist. Court, Northern District of California, March 13, 2022) analyzed Rule 37 in the context of a ransomware attack. MasterObjects was the victim of a well-documented ransomware attack, which precluded the company’s access to data from before 2016. The Special Master considered the declaration from MasterObjects, which explained that, despite using state-of-the-art cybersecurity protections, the firm was attacked by hackers in December 2020. The hack rendered all of the files and mailboxes inaccessible without a recovery key set by the attackers. The hackers demanded a ransom, and the company contacted the FBI. Both the FBI and the company’s insurer advised it not to pay the ransom. Despite spending hundreds of hours attempting to restore the data, everything prior to 2016 remained inaccessible.

Applying Rule 37, the Special Master stated that, at the outset, there is no evidence that any electronically stored information was “lost.”  The data still exists and, while access has been blocked, it can be accessed in the future if a key is provided or a technological work-around is discovered.

Even if a denial of access is construed to be a “loss,” the Special Master found no evidence in this record that the loss occurred because MasterObjects failed to take reasonable steps to preserve it. This step of the analysis, “failure to take reasonable steps to preserve,” is a “critical, basic element” to prove spoliation.

On the issue of prejudice, Amazon argued that “we can’t know what we don’t know” (related to missing documents).  The Special Master did not find Amazon’s argument persuasive. The Special Master concluded that Amazon’s argument cannot survive the adoption of Rule 37(e). “The rule requires affirmative proof of prejudice in the specific destruction at issue.”

Takeaways:

  1. If you are in a spoliation dispute, make sure you have the experts and evidence to prove or defend your case.

  2. When you are trying to prove spoliation, know the new test and apply it in your analysis (the Special Master noted that Amazon did not reference Rule 37 in its briefing).

  3. As a business owner, when it comes to cybersecurity, you must take reasonable and defensible efforts to protect your data.

©2022 Strassburger McKenna Gutnick & Gefsky

Italian Garante Bans Google Analytics

On June 23, 2022, Italy’s data protection authority (the “Garante”) determined that a website’s use of the audience measurement tool Google Analytics is not compliant with the EU General Data Protection Regulation (“GDPR”), as the tool transfers personal data to the United States, which does not offer an adequate level of data protection. In making this determination, the Garante joins other EU data protection authorities, including the French and Austrian regulators, that also have found use of the tool to be unlawful.

The Garante determined that websites using Google Analytics collected, via cookies, personal data including user interactions with the website, pages visited, browser information, operating system, screen resolution, selected language, date and time of page views, and the user’s device IP address. This information was transferred to the United States without the additional safeguards for personal data required under the GDPR following the Schrems II decision, and therefore was exposed to possible governmental access. In the Garante’s ruling, website operator Caffeina Media S.r.l. was ordered to bring its processing into compliance with the GDPR within 90 days, but the ruling has wider implications, as the Garante commented that it had received many “alerts and queries” relating to Google Analytics. It also stated that it called upon “all controllers to verify that the use of cookies and other tracking tools on their websites is compliant with data protection law; this applies in particular to Google Analytics and similar services.”

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Throwing Out the Privacy Policy is a Bad Idea

The public internet has been around for about thirty years and consumers’ browser-based graphic-heavy experience has existed for about twenty-five years. In the early days, commercial websites operated without privacy policies.

Eventually, people started to realize that they were leaving trails of information online, and in the early ’aughts the methods by which businesses captured and profited from these trails became clear, although the actual uses of the data on individual sites were not. People asked for greater transparency from the sites they visited online, and in response received the privacy policy.

A deeply flawed instrument, the website privacy policy purports to explain how information is gathered and used by a website owner, but most such policies are strangely both imprecise and too long, losing the average reader in a fog of legalese and marginally relevant facts. Some privacy policies are intentionally opaque because it doesn’t profit the website operator to make its methods obvious. Many are overly general, in part because the website company doesn’t want to change its policy every time it shifts business practices or vendor alliances. Many are just messy and poorly written.

Part of the reason that privacy policies are confusing is that data privacy is not a precise concept. The definition of data is context dependent. Data can mean the information about a transaction, information gathered from your browser visit (including where you were before and after the visit), information about you or your equipment, or even information derived by analysis of the other information. And we know that de-identified data can be re-identified in many cases, and that even a collection of generic data can lead to one of many ways to identify a person.

The definition of privacy is also untidy. An ecommerce company must capture certain information to fulfill an online order. In this era of connected objects, the company may continue to take information from the item while the consumer is using it. This is true for equipment from televisions to dishwashers to sex toys. The company likely uses this information internally to develop its products. It may use the data to market more goods or services to the consumer. It may transfer the information to other companies so they can market their products more effectively. The company may provide the information to the government. This week’s New Yorker devotes several pages to how the word “privacy” conflates major concepts in US law, including secrecy and autonomy,1 and is thus confusing to courts and the public alike.

All of this is difficult to reflect in a privacy policy, even if the company has incentive to provide useful information to its customers.

Last month the Washington Post ran an article by Geoffrey Fowler that was subtitled “Let’s abolish reading privacy policies.” The article notes a 2019 Pew survey claiming that only 9 percent of Americans say they always read privacy policies. I would suggest that more than half of those Americans are lying. Almost no one always reads privacy policies upon first entering a website or downloading an app. That’s not even really what privacy policies are for.

Fowler shows why people do not read these policies. He writes, “As an experiment, I tallied up all of the privacy policies just for the apps on my phone. It totaled nearly 1 million words. “War and Peace” is about half as long. And that’s just my phone. Back in 2008, Lorrie Cranor, a professor of engineering and public policy at Carnegie Mellon University, and a colleague estimated that reading and consenting to all the privacy policies on websites Americans visit would take 244 hours per year.”

The length, complexity and opacity of online privacy policies are concerning. The best way to alleviate this concern would not be to eliminate privacy policies, but to make them less instrumental in the most important decisions about descriptive data.

Website owners should not be expected to write privacy policies that are both sufficiently detailed and succinctly readable so that consumers can make meaningful choices about use of the data that describes them. This type of system forces a person to be responsible for her own data protection and takes the onus off of the company to limit its use of the data. It is like our current system of waste recycling – both ineffective and supported by polluters – because rather than forcing manufacturers to use more environmentally friendly packaging, it pushes consumers to deal with the problem at home, shifting the burden from industry to us. Similarly, if legislatures provided a set of simple rules for website operators – here is what you are allowed to do with personal data, and here is what you are not allowed to do with it – then no one would need to read privacy policies to make sure data about our transactions was spared the worst treatment. The worst treatment would be illegal.

State laws are moving in this direction, providing simpler rules restricting certain uses and transfers of personal data and sensitive data. We are early in the process, but if the trend continues regarding omnibus state privacy laws in the same manner that all states eventually passed data breach disclosure laws, then we can be optimistic and expect full coverage of online privacy rules for all Americans within a decade or so. But we shouldn’t need to wait for every state to act.

Unlike the data breach disclosure laws, which encourage companies to comply only with the laws relevant to their particular loss of data, omnibus privacy laws affect the way companies conduct the normal course of everyday business, so it will only take requirements in a few states before big companies start building their privacy rights recognition functions around the lowest common denominator. It will simply make economic sense for businesses to give every US customer the same rights as the most protective state provides its residents. Why build 50 sets of rules when you don’t need to do so? The cost savings of maintaining only one privacy rights-recognition system will offset the cost of providing privacy rights to people in states that haven’t passed omnibus laws yet.

This won’t make privacy policies any easier to read, but it will become less important to read them. Then privacy policies can return to their core function, providing a record of how a company treats data. In other words, a reference document, rather than a set of choices inset into a pillow of legal terms.

We shouldn’t eliminate the privacy policy. We should reduce the importance of such policies and limit their functions, reducing customer frustration with the privacy policy’s role in our current process. Limit companies’ use of data and we won’t need to fight through their privacy options.


ENDNOTES

1 Privacy law also conflates these meanings with obscurity in a crowd or in public.


Article By Theodore F. Claypoole of Womble Bond Dickinson (US) LLP

Copyright © 2022 Womble Bond Dickinson (US) LLP All Rights Reserved.

New UK IDTA and Addendum Come Into Force

The new UK International Data Transfer Agreement (“IDTA”) and Addendum to the new 2021 EU Standard Contractual Clauses (“New EU SCCs”) are now in force (as of 21 March 2022), providing much needed certainty for UK organisations transferring personal data to service providers and group companies based outside of the UK/EEA.

The IDTA and Addendum replace the old EU Standard Contractual Clauses (“Old EU SCCs”) for use as a UK GDPR-compliant transfer tool for restricted transfers from the UK, which also enables UK data exporters to comply with the European Court of Justice’s ‘Schrems II’ judgement.

For new UK data transfer arrangements, or where UK organisations are in the process of reviewing their existing arrangements, use of the new IDTA or Addendum would be the best option to future-proof against the need to replace them in two years’ time.

Where the data flows involve transfers of personal data from both the UK and the EU, the use of the Addendum alongside the New EU SCCs will enable organisations to implement a more harmonised solution.



Article By Francesca Fellowes of Squire Patton Boggs (US) LLP. Hannah-Mei Grisley also contributed to this article.

© Copyright 2022 Squire Patton Boggs (US) LLP

EDPB on Dark Patterns: Lessons for Marketing Teams

“Dark patterns” are becoming the target of EU data protection authorities, and the new guidelines of the European Data Protection Board (EDPB) on “dark patterns in social media platform interfaces” confirm their focus on such practices. While they are built around examples from social media platforms (real or fictitious), these guidelines contain lessons for all websites and applications. The bad news for marketers: the EDPB doesn’t like it when dry legal texts and interfaces are made catchier or more enticing.

To illustrate, in a section of the guidelines regarding the selection of an account profile photo, the EDPB considers the example of a “help/information” prompt saying “No need to go to the hairdresser’s first. Just pick a photo that says ‘this is me.’” According to the EDPB, such a practice “can impact the final decision made by users who initially decided not to share a picture for their account” and thus makes consent invalid under the General Data Protection Regulation (GDPR). Similarly, the EDPB criticises an extreme example of a cookie banner with a humorous link to a bakery cookies recipe that incidentally says, “we also use cookies”, stating that “users might think they just dismiss a funny message about cookies as a baked snack and not consider the technical meaning of the term ‘cookies.’” The EDPB even suggests that the data minimisation principle, and not security concerns, should ultimately guide an organisation’s choice of which two-factor authentication method to use.

Do these new guidelines reflect privacy paranoia or common sense? The answer should lie somewhere in between, but the whole document (64 pages long) in our view suggests an overly strict approach, one that we hope will move closer to common sense as a result of the newly started public consultation process.

Let us take a closer look at what useful lessons – or warnings – can be drawn from these new guidelines.

What are “dark patterns” and when are they unlawful?

According to the EDPB, dark patterns are “interfaces and user experiences […] that lead users into making unintended, unwilling and potentially harmful decisions regarding the processing of their personal data” (p. 2). They “aim to influence users’ behaviour and can hinder their ability to effectively protect their personal data and make conscious choices.” The risk associated with dark patterns is higher for websites or applications meant for children, as “dark patterns raise additional concerns regarding potential impact on children” (p. 8).

While the EDPB takes a strongly negative view of dark patterns in general, it recognises that dark patterns do not automatically lead to an infringement of the GDPR. The EDPB acknowledges that “[d]ata protection authorities are responsible for sanctioning the use of dark patterns if these breach GDPR requirements” (emphasis ours; p. 2). Nevertheless, the EDPB guidance strongly links the concept of dark patterns with the data protection by design and by default principles of Art. 25 GDPR, suggesting that disregard for those principles could lead to a presumption that the language or a practice in fact creates a “dark pattern” (p. 11).

The EDPB refers here to its Guidelines 4/2019 on Article 25 Data Protection by Design and by Default and in particular to the following key principles:

  • “Autonomy – Data subjects should be granted the highest degree of autonomy possible to determine the use made of their personal data, as well as autonomy over the scope and conditions of that use or processing.
  • Interaction – Data subjects must be able to communicate and exercise their rights in respect of the personal data processed by the controller.
  • Expectation – Processing should correspond with data subjects’ reasonable expectations.
  • Consumer choice – The controllers should not “lock in” their users in an unfair manner. Whenever a service processing personal data is proprietary, it may create a lock-in to the service, which may not be fair, if it impairs the data subjects’ possibility to exercise their right of data portability in accordance with Article 20 GDPR.
  • Power balance – Power balance should be a key objective of the controller-data subject relationship. Power imbalances should be avoided. When this is not possible, they should be recognised and accounted for with suitable countermeasures.
  • No deception – Data processing information and options should be provided in an objective and neutral way, avoiding any deceptive or manipulative language or design.
  • Truthful – the controllers must make available information about how they process personal data, should act as they declare they will and not mislead data subjects.”

Is data minimisation compatible with the use of SMS two-factor authentication?

One of the EDPB’s positions, while grounded in the principle of data minimisation, undercuts a security practice that has grown significantly over the past few years. In effect, the EDPB seems to question the validity under the GDPR of requests for phone numbers for two-factor authentication where e-mail tokens would theoretically be possible:

“30. To observe the principle of data minimisation, [organisations] are required not to ask for additional data such as the phone number, when the data users already provided during the sign-up process are sufficient. For example, to ensure account security, enhanced authentication is possible without the phone number by simply sending a code to users’ email accounts or by several other means.
31. Social network providers should therefore rely on means for security that are easier for users to re-initiate. For example, the [organisation] can send users an authentication number via an additional communication channel, such as a security app, which users previously installed on their mobile phone, but without requiring the users’ mobile phone number. User authentication via email addresses is also less intrusive than via phone number because users could simply create a new email address specifically for the sign-up process and utilise that email address mainly in connection with the Social Network. A phone number, however, is not that easily interchangeable, given that it is highly unlikely that users would buy a new SIM card or conclude a new phone contract only for the reason of authentication.”
(emphasis ours; p. 15)

The EDPB also appears to be highly critical of phone-based verification in the context of registration “because the email address constitutes the regular contact point with users during the registration process” (p. 15).

This position is unfortunate, as it suggests that data minimisation may preclude controllers from even assessing which method of two-factor authentication – in this case, e-mail versus SMS one-time passwords – better suits their requirements, taking into consideration the different security benefits and drawbacks of the two methods. The EDPB’s reasoning could even be used to exclude any form of stronger two-factor authentication, as additional factors inevitably require separate processing (e.g., a phone number or third-party account linking for some app-based authentication methods).

For these reasons, organisations should view this aspect of the new EDPB guidelines with a healthy dose of skepticism. It likewise will be important for interested stakeholders to participate in the consultation to explain the security benefits of using phone numbers to keep the “two” in two-factor authentication.
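
For context, here is a minimal sketch of the email one-time-code flow the EDPB points to in paragraph 30, i.e., enhanced authentication that relies on an address the user has already provided rather than a phone number. The code length, expiry, and helper names are illustrative assumptions, and the sketch takes no position on whether email is a sufficiently strong second factor for a given risk profile.

```python
import hmac
import secrets
import time

_pending = {}  # email -> (code, expiry); a real system would persist this server-side

def issue_email_code(email: str, ttl_seconds: int = 300) -> str:
    """Generate a short-lived one-time code to be sent to the user's email address."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[email] = (code, time.time() + ttl_seconds)
    return code  # in practice this would be emailed, never returned to the caller

def verify_email_code(email: str, submitted: str) -> bool:
    """Check the submitted code against the pending one, rejecting expired codes."""
    code, expiry = _pending.get(email, (None, 0))
    if code is None or time.time() > expiry:
        return False
    return hmac.compare_digest(code, submitted)

code = issue_email_code("user@example.com")
print(verify_email_code("user@example.com", code))      # -> True
print(verify_email_code("user@example.com", "000000"))  # -> False (unless that code was issued)
```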

Consent withdrawal: same number of clicks?

Recent decisions by EU regulators (notably two decisions by the French authority, the CNIL) have led to speculation about whether EU rules effectively require website operators to make it possible for data subjects to withdraw consent to all cookies with one single click, just as most websites make it possible to give consent through a single click. The authorities themselves have not stated that this is unequivocally required, although privacy activists have notably filed complaints against hundreds of websites, many of them for not including a “reject all” button on their cookie banner.

The EDPB now appears to side with the privacy activists in this respect, stating that “consent cannot be considered valid under the GDPR when consent is obtained through only one mouse-click, swipe or keystroke, but the withdrawal takes more steps, is more difficult to achieve or takes more time” (p. 14).

Operationally, however, it seems impossible to comply with a “one-click withdrawal” standard in absolute terms. Just pulling up settings after registration or after the first visit to a website will always require an extra click, purely to open those settings. We expect this issue to be examined by the courts eventually.
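
One way to read the EDPB’s parity requirement in operational terms is to make granting and withdrawing consent symmetric operations in the underlying consent model, so that the interface can expose each as a single action. The sketch below, with illustrative purpose names, shows that symmetry; it does not resolve the practical point above that reopening a settings screen always costs at least one extra click.

```python
class ConsentManager:
    """Toy consent model in which granting and withdrawing are symmetric."""

    def __init__(self):
        # Illustrative purposes; a real banner would list the site's actual purposes.
        self.purposes = {"analytics": False, "advertising": False}

    def accept_all(self) -> None:
        """Single action to grant consent for every purpose."""
        self.purposes = {p: True for p in self.purposes}

    def reject_all(self) -> None:
        """Single, equally easy action to withdraw consent for every purpose."""
        self.purposes = {p: False for p in self.purposes}

consent = ConsentManager()
consent.accept_all()
consent.reject_all()     # withdrawal takes no more steps than giving consent
print(consent.purposes)  # -> {'analytics': False, 'advertising': False}
```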

Is creative wording indicative of a “dark pattern”?

The EDPB’s guidelines contain several examples of wording that is intended to convince the user to take a specific action.

The photo example mentioned in the introduction above is an illustration, but other (likely fictitious) examples include the following:

  • For sharing geolocation data: “Hey, a lone wolf, are you? But sharing and connecting with others help make the world a better place! Share your geolocation! Let the places and people around you inspire you!” (p.17)
  • To prompt a user to provide a self-description: “Tell us about your amazing self! We can’t wait, so come on right now and let us know!” (p. 17)

The EDPB criticises the language used, stating that it is “emotional steering”:

“[S]uch techniques do not cultivate users’ free will to provide their data, since the prescriptive language used can make users feel obliged to provide a self-description because they have already put time into the registration and wish to complete it. When users are in the process of registering to an account, they are less likely to take time to consider the description they give or even if they would like to give one at all. This is particularly the case when the language used delivers a sense of urgency or sounds like an imperative. If users feel this obligation, even when in reality providing the data is not mandatory, this can have an impact on their “free will”” (pp. 17-18).

Similarly, in a section about account deletion and deactivation, the EDPB criticises interfaces that highlight “only the negative, discouraging consequences of deleting their accounts,” e.g., “you’ll lose everything forever,” or “you won’t be able to reactivate your account” (p. 55). The EDPB even criticises interfaces that preselect deactivation or pause options over delete options, considering that “[t]he default selection of the pause option is likely to nudge users to select it instead of deleting their account as initially intended. Therefore, the practice described in this example can be considered as a breach of Article 12 (2) GDPR since it does not, in this case, facilitate the exercise of the right to erasure, and even tries to nudge users away from exercising it” (p. 56). This, combined with the EDPB’s aversion to confirmation requests (see section 5 below), suggests that the EDPB is ignoring the risk that a data subject might opt for deletion without fully recognizing the consequences, i.e., loss of access to the deleted data.

The EDPB’s approach suggests that any effort to woo users into giving more data or leaving data with the organisation will be viewed as harmful by data protection authorities. Yet data protection rules are there to prevent abuse and protect data subjects, not to render all marketing techniques illegal.

In this context, the guidelines should in our opinion be viewed as an invitation to re-examine marketing techniques to ensure that they are not too pushy – in the sense that users would in effect truly be pushed into a decision regarding personal data that they would not otherwise have made. Marketing techniques are not per se unlawful under the GDPR but may run afoul of GDPR requirements in situations where data subjects are misled or robbed of their choice.

Other key lessons for marketers and user interface designers

  • Avoid continuous prompting: One of the issues regularly highlighted by the EDPB is “continuous prompting”, i.e., prompts that appear again and again during a user’s experience on a platform. The EDPB suggests that this creates fatigue, leading the user to “give in,” i.e., by “accepting to provide more data or to consent to another processing, as they are wearied from having to express a choice each time they use the platform” (p. 14). Examples given by the EDPB include the SMS two-factor authentication popup mentioned above, as well as “import your contacts” functionality. Outside of social media platforms, the main example for most organisations is their cookie consent banner (so this position by the EDPB reinforces the need to manage cookie banners properly). In addition, newsletter popups and popups about “how to get our new report for free by filling out this form” are frequent on many digital properties. While popups can be effective ways to get more subscribers or more data, the EDPB guidance suggests that regulators will consider such practices questionable from a data protection perspective.
  • Ensure consistency or a justification for confirmation steps: The EDPB highlights the “longer than necessary” dark pattern at several places in its guidelines (in particular pp. 18, 52, & 57), with illustrations of confirmation pop-ups that appear before a user is allowed to select a more privacy-friendly option (and while no such confirmation is requested for more privacy-intrusive options). Such practices are unlawful according to the EDPB. This does not mean that confirmation pop-ups are always unlawful – just that you need to have a good justification for using them where you do.
  • Have a good reason for preselecting less privacy-friendly options: Because the GDPR requires not only data protection by design but also data protection by default, make sure that you are able to justify an interface in which a more privacy-intrusive option is selected by default – or better yet, don’t make any preselection (a minimal sketch of privacy-friendly defaults appears after this list). The EDPB calls preselection of privacy-intrusive options “deceptive snugness” (“Because of the default effect which nudges individuals to keep a pre-selected option, users are unlikely to change these even if given the possibility” p. 19).
  • Make all privacy settings available on all platforms: If a user is asked to make a choice during registration or upon his/her first visit (e.g., for cookies, newsletters, sharing preferences, etc.), ensure that those settings can all be found easily later on, from a central privacy settings page if possible, and alongside all data protection tools (such as tools for exercising a data subject’s right to access his/her data, to modify data, to delete an account, etc.). Also make sure that all such functionality is available not only on a desktop interface but also for mobile devices and across all applications. The EDPB illustrates this point by criticising the case where an organisation has a messaging app that does not include the same privacy statement and data subject request tools as the main app (p. 27).
  • Be more specific than general language such as “Your data might be used to improve our services”: It is common in most privacy statements to state that personal data (e.g., customer feedback) “can” or “may be used” to improve an organisation’s products and services. According to the EDPB, the word “services” is likely to be “too general” to be viewed as “clear,” and it is “unclear how data will be processed for the improvement of services.” The use of the conditional tense in the example (“might”) also “leaves users unsure whether their data will be used for the processing or not” (p. 25). The EDPB’s stance in this respect confirms a position taken by EU regulators in previous guidance on transparency and serves as a reminder to tell data subjects specifically how their data will be used.
  • Ensure linguistic consistency: If your website or app is available in more than one language, ensure that all data protection notices and tools are available in those languages as well and that the language choice made on the main interface is automatically taken into account on the data-related pages (pp. 25-26).
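
To make the “data protection by default” point above more concrete, here is a minimal sketch of a settings store in which no privacy-intrusive option is preselected and in which settings change only upon an explicit, recorded user action. All names (`PrivacySettings`, `PrivacySettingsStore`, the individual toggles) are hypothetical illustrations, not terms used by the EDPB.

```typescript
// Minimal sketch only: privacy settings that default to the most
// privacy-friendly values ("data protection by default") and change only
// on an explicit user action. All names are illustrative assumptions.

interface PrivacySettings {
  personalisedAds: boolean;
  preciseGeolocation: boolean;
  shareWithPartners: boolean;
}

// Nothing privacy-intrusive is preselected.
const DEFAULT_SETTINGS: PrivacySettings = {
  personalisedAds: false,
  preciseGeolocation: false,
  shareWithPartners: false,
};

interface SettingChange {
  setting: keyof PrivacySettings;
  value: boolean;
  timestamp: string; // kept so the user's choice can be evidenced later
}

class PrivacySettingsStore {
  private settings: PrivacySettings = { ...DEFAULT_SETTINGS };
  private changeLog: SettingChange[] = [];

  // Called only from an explicit user interaction (e.g. a toggle the user
  // clicked), never from application start-up code.
  update(setting: keyof PrivacySettings, value: boolean): void {
    this.settings[setting] = value;
    this.changeLog.push({ setting, value, timestamp: new Date().toISOString() });
  }

  current(): PrivacySettings {
    return { ...this.settings };
  }

  history(): readonly SettingChange[] {
    return this.changeLog;
  }
}

// Usage: the same store would ideally back every surface (web, mobile,
// companion apps) so that choices made at registration remain visible and
// editable later from a central privacy settings page.
const store = new PrivacySettingsStore();
store.update("personalisedAds", true); // only after the user opted in
console.log(store.current(), store.history());
```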

Best practices according to the EDPB

Finally, the EDPB highlights some other “best practices” throughout its guidelines. We have combined them below for easier review:

  • Structure and ease of access:
    • Shortcuts: Links to information, actions, or settings that can be of practical help to users in managing their data and data protection settings should be available wherever relevant information or actions appear (e.g., links redirecting to the relevant parts of the privacy policy, or, in a data breach communication to users, a link allowing them to reset their password).
    • Data protection directory: For easy navigation through the different sections of the menu, provide users with an easily accessible page from which all data protection-related actions and information can be reached. This page could be accessible from the organisation’s main navigation menu, the user account, the privacy policy, etc.
    • Privacy Policy Overview: At the start/top of the privacy policy, include a collapsible table of contents with headings and sub-headings that shows the different passages the privacy notice contains. Clearly identified sections allow users to quickly identify and jump to the section they are looking for (a minimal sketch of such an overview appears after this list).
    • Sticky navigation: While a user is consulting a page related to data protection, the table of contents could remain displayed on the screen, allowing the user to quickly navigate to relevant content thanks to anchor links.
  • Transparency:
    • Organisation contact information: The organisation’s contact address for data protection requests should be clearly stated in the privacy policy. It should appear in a section where users can expect to find it, such as a section on the identity of the data controller, a rights-related section, or a contact section.
    • Reaching the supervisory authority: Stating the specific identity of the EU supervisory authority and including a link to its website or the specific website page for lodging a complaint is another EDPB recommendation. This information should be present in a section where users can expect to find it, such as a rights-related section.
    • Change spotting and comparison: When changes are made to the privacy notice, make previous versions accessible with the date of release and highlight any changes.
  • Terminology & explanations:
    • Coherent wording: Across the website, the same wording and definitions should be used for the same data protection concepts. The wording used in the privacy policy should match that used on the rest of the platform.
    • Providing definitions: When using unfamiliar or technical words or jargon, providing a definition in plain language will help users understand the information provided to them. The definition can be given directly in the text when users hover over the word and/or be made available in a glossary.
    • Explaining consequences: When users want to activate or deactivate a data protection control, or give or withdraw their consent, inform them in a neutral way of the consequences of such action.
    • Use of examples: In addition to providing mandatory information that clearly and precisely states the purpose of processing, offering specific data processing examples can make the processing more tangible for users.
  • Contrasting Data Protection Elements: Making data protection-related elements or actions visually striking in an interface that is not directly dedicated to the matter helps readability. For example, when posting a public message on the platform, controls for geolocation should be directly available and clearly visible.
  • Data Protection Onboarding: Just after the creation of an account, include data protection points within the onboarding experience for users to discover and set their preferences seamlessly. This can be done by, for example, inviting them to set their data protection preferences after adding their first friend or sharing their first post.
  • Notifications (including data breach notifications): Notifications can be used to raise users’ awareness of aspects, changes, or risks related to the processing of their personal data (e.g., when a data breach occurs). These notifications can be implemented in several ways, such as through inbox messages, pop-in windows, fixed banners at the top of the webpage, etc.
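
As an illustration of the “Privacy Policy Overview” and “Sticky navigation” practices above, the sketch below generates a collapsible table of contents with anchor links from the headings of a privacy notice. The element IDs and the `buildPrivacyPolicyToc` function are assumptions made for the example only.

```typescript
// Minimal sketch only: build a collapsible table of contents for a privacy
// notice from its section headings, with anchor links for quick navigation.
// Element IDs and function names are illustrative assumptions.

function buildPrivacyPolicyToc(): void {
  const policy = document.getElementById("privacy-policy");
  if (!policy) return;

  // <details>/<summary> gives a natively collapsible overview.
  const toc = document.createElement("details");
  toc.open = true;
  const summary = document.createElement("summary");
  summary.textContent = "Contents of this privacy notice";
  toc.appendChild(summary);

  const list = document.createElement("ul");
  policy.querySelectorAll<HTMLHeadingElement>("h2, h3").forEach((heading, i) => {
    // Ensure each heading can be targeted by an anchor link.
    if (!heading.id) heading.id = `privacy-section-${i}`;

    const link = document.createElement("a");
    link.href = `#${heading.id}`;
    link.textContent = heading.textContent ?? "";

    const item = document.createElement("li");
    item.appendChild(link);
    list.appendChild(item);
  });

  toc.appendChild(list);
  // Insert the overview at the top of the policy; CSS such as
  // `position: sticky; top: 0;` on this element would keep it visible
  // while the user scrolls, in the spirit of the "sticky navigation" practice.
  policy.prepend(toc);
}

buildPrivacyPolicyToc();
```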

Next steps and international perspectives

These guidelines (available online) are subject to public consultation until 2 May 2022, so it is possible they will be modified as a result of the consultation and, we hope, improved to reflect a more pragmatic view of data protection that balances data subjects’ rights, security, and operational business needs. If you wish to contribute to the public consultation, note that the EDPB publishes feedback it receives (as a result, we have occasionally submitted feedback on behalf of clients wishing to remain anonymous).

Irrespective of the outcome of the public consultation, the guidelines will undoubtedly influence the approach of EU data protection authorities in their investigations. From this perspective, it is better to be forewarned – and to have legal arguments at your disposal if you wish to adopt an approach that deviates from the EDPB’s position.

Moreover, these guidelines come at a time when the United States Federal Trade Commission (FTC) is also concerned with dark patterns, having published an enforcement policy statement on the matter in October 2021. Dark patterns are also being discussed at the Organisation for Economic Cooperation and Development (OECD). International dialogue can be helpful if conversations about desired policy also consider practical solutions that can be implemented by businesses and reflect a desirable user experience for data subjects.

Organisations should consider evaluating their own techniques to encourage users to go one way or another and document the justification for their approach.

© 2022 Keller and Heckman LLP