New Forms of Digital Identity to assist in Fight Against Unlawful Impersonation
New Digital Identity Types: Organizational ID and Human vs. Bot

Robocalls have become a wide-scale problem largely because advances in telecommunications and IT have dramatically reduced telecom and computing infrastructure costs, and with them, the expense of placing bot-assisted phone calls.

While significantly less expensive than TDM/SS7 for placing legitimate calls, VoIP/SIP also enables unlawful activity at operational and economic scale with a high degree of anonymity. In contrast to trusted business calls, the high-volume VoIP calling employed by scammers, and even some telemarketing firms, costs only micro-pennies per call to transport, even though most of those calls are never answered.

One of the tricks robocallers use to prompt called parties to answer is "telephone number spoofing": electronically altering the calling party's phone number so it appears to be different from the actual originating caller ID.

There are some legitimate purposes for "spoofing" a telephone number, such as a global pharmacy organization calling to alert a patient to a prescription ready for pick-up and displaying the regional number of the local pharmacy branch as opposed to the corporate number actually originating the call. Illegal call spoofing refers to a bad actor using the telephone number of an organization they’re not authorized to represent.

Unlawful telephone number spoofing can be especially egregious when it involves Caller Name (CNAM). Also referred to as "enhanced caller ID", CNAM is name information tied to a calling party's telephone number that may be displayed on the called party's device. When a telephone number is spoofed, the terminating service provider switch (e.g. managed by the called party's carrier) may initiate a CNAM database query using that spoofed number, causing the legitimate organization's name to be displayed for a fraudulent call.
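The CNAM dip described above can be illustrated with a short sketch. The numbers, names, and function are hypothetical; a real CNAM database is queried by the terminating provider's switch, not a local dictionary.

```python
# Hypothetical CNAM database keyed by calling party number.
CNAM_DB = {
    "+14045551000": "ACME BANK",       # legitimate bank number
    "+14045552000": "CITY PHARMACY",
}

def terminating_cnam_lookup(calling_number: str) -> str:
    """Simulate the terminating switch dipping the CNAM database.

    The switch only sees the (possibly spoofed) calling number, so a
    spoofed number pulls the *legitimate* organization's name.
    """
    return CNAM_DB.get(calling_number, "UNKNOWN")

# A bad actor spoofs the bank's number; the called party's device now
# displays the bank's name, lending false credibility to the scam call.
spoofed_caller_id = "+14045551000"
print(terminating_cnam_lookup(spoofed_caller_id))  # ACME BANK
```

This is why spoofing plus CNAM is worse than spoofing alone: the name display is derived from the number, so a forged number inherits the trusted name.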

Another bad actor ploy is to explicitly claim to be with a company or to imply a relationship with one. This deceptive tactic is referred to as "brand impersonation". Whether implicit or explicit, brand impersonation is unethical, immoral, and in most cases illegal as defined by the spirit and intent of the Truth in Caller ID Act.

Brand impersonation takes on a whole new meaning with AI, which can be used to create deep fakes: videos or audio recordings manipulated to make it look or sound like someone (such as an organization's representative) is saying or doing something they never actually said or did. Bad actors could use deep fakes to blackmail people, extort money, or damage someone's reputation.

Voice Cloning: Synthesizing a Person's Voice

Voice cloning is a technique used to replicate or synthesize a person's voice, often using advanced machine learning algorithms. It involves training a model with samples of the target voice and then generating new speech that sounds like the target speaker. 

Synthetic voice technology has several applications such as its use in movies, video games, or other forms of media to recreate voices of actors or characters. This can be especially useful for posthumous performances or voice acting. It can also enhance virtual assistants like Siri or Alexa by enabling them to respond in a user's voice, making interactions more natural and personalized.

Voice cloning models are trained on data sets of recorded speech, with AI and machine learning leveraged to train models for ongoing usage. In terms of accessibility, it is generally believed that even a small snippet of someone's voice is enough to train a system to impersonate that person.

Conversational AI

Conversational AI refers to the use of artificial intelligence to enable natural and fluid interactions between humans and machines through conversation. It involves the development of systems that can understand and respond to human language in a way that mimics natural-sounding human conversations. 

While replicating a specific human's speech patterns is more difficult, conversational AI provides interactive dialog that lends plausibility to the claim that the voice belongs to the person it purports to be.

More advanced systems may add further credibility by programming in specific data that would be known by the person being impersonated and/or the party targeted by the bot using the synthetic voice.

As AI technologies continue to advance, conversational AI systems are becoming more sophisticated, capable of understanding context, emotions, and nuances in human language, leading to more natural and productive interactions.

Concerns about AI Misuse in Communications

While voice cloning and conversational AI offer exciting possibilities, there are ethical concerns around their misuse, such as fraud or deception. For example, bad actors may use these systems to pose as an authority figure to gain access to sensitive information and/or platforms. Bad actors may also pose as friends or family with fraudulent intent.

With generative AI, brand impersonation has become a greater threat. For example, it can be used to create realistic-looking websites, emails, voice content and text messages that can be used to scam consumers. 

Potentially even worse, generative AI may be used in conjunction with data scraping to train conversational AI systems to convey person-specific and/or contextual information that adds plausibility to the ruse.

Advanced Digital Identity as a Tool in the Fight Against Unlawful Communications

There are certain key tools in the fight against unwanted robocalls and robotexts. These tools include technologies, policies, procedures, and methods, which taken together, form an overall "Trusted Communications" Framework to enable wanted communications and inhibit unwanted communications.

These tools include Consent Management, UI/UX (user interface and user experience), Digital Identity, KYX (Know Your Everything), Monitoring, Authentication, and Validation. While these tools may be individually operated and enhanced in a silo-like fashion, it is better to consider all elements as a whole.

One important tool in the fight against unwanted communications is advanced digital identity management as a means of verifying an entity involved with consumer contact. To date, the industry relies largely upon network and telephone number identification and association. A prime example of this for voice calls is STIR/SHAKEN, a framework developed to deal with unauthorized telephone number spoofing issues that uses an in-band signaling based method of authentication and validation.

Developed jointly by the SIP Forum and ATIS (Alliance for Telecommunications Industry Solutions) to efficiently implement the Internet Engineering Task Force’s (IETF) STIR (for Secure Telephony Identity Revisited) standard, SHAKEN (for Signature-based Handling of Asserted information using toKENs) defines a mechanism to verify the calling number and specifies how it will be transported across communications networks.

The STIR/SHAKEN (S/S) framework authenticates and validates telephone numbers, networks, and the association of telephone numbers to networks. A key policy element of the S/S framework is attestation, in which a call-signing entity attests to the call at A-level (network is known and telephone number is associated), B-level (network is known but telephone number cannot be associated), or C-level (neither the network is known nor the telephone number associated). C-level attestation is typically applied when there is little or no KYC (Know Your Customer) associated with a call, such as calls entering the United States via an international gateway.
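The attestation decision just described can be sketched as a simple rule. The function and field names are illustrative; real implementations follow the ATIS/SIP Forum SHAKEN specifications and carry the attestation level in a signed PASSporT token.

```python
def attestation_level(customer_known: bool, number_associated: bool) -> str:
    """Return the SHAKEN attestation a signing provider would assert."""
    if customer_known and number_associated:
        return "A"   # full attestation: known customer, authorized number
    if customer_known:
        return "B"   # partial: known customer, number association unverified
    return "C"       # gateway: origin not known (e.g. international gateway)

# The three cases described in the text:
print(attestation_level(True, True))    # A
print(attestation_level(True, False))   # B
print(attestation_level(False, False))  # C
```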

As discussed, S/S provides a framework in which network and telephone number identity, and related KYC, are key elements. It is NOT a system for vetting organizations (e.g. business entities responsible for voice calls to consumers), or for authenticating and validating communications related to organizations. Accordingly, S/S does NOT provide a framework for organizational identity; S/S does NOT provide a means of validating organizational vetting; S/S does NOT provide direct traceability to responsible organizations. Stated differently, S/S does NOT provide a trust framework to facilitate legitimate business calls and ensure non-repudiation (at the organizational level rather than the communications service provider level) with respect to applicable laws and regulations.

Given the aforementioned shortcomings of existing tools, two key areas of focus that the FTC is recommended to consider for next-generation digital identity are (1) Organizational Identity (what business, government, or NGO is taking responsibility for a consumer contact attempt) and (2) Human vs. Bot, the ability to identify whether a communications attempt is from a human being or from a synthesized voice and/or conversational AI.

Organizational Identity

The purpose of Organizational Identity is to provide traceability to an organization (business, government agency or NGO) responsible for conduct associated with phone number usage. 

For example, if a telephone call is found to exhibit unlawful behavior, attribution of an organization to that call and phone number (at the time the call is placed) provides non-repudiation. This is very important for non-real-time, post-call purposes as a means of providing accountability. More advanced implementations of an Organizational ID that involve network and analytics integration could provide a means of real-time call processing.

Human vs. Bot: Proof-of-Life (PoL) Identity

A Proof-of-Life (PoL) ID could be leveraged to identify whether a communications attempt is associated with a real human or a synthesized voice (e.g. a bot). Unlike the Organizational ID, which can represent a virtually unlimited set of descriptors, the PoL ID is a Boolean attribute representing either "Human" or "Bot".

New Digital Identities require Verification

One important aspect of both the Organizational ID and the PoL ID is reliable authentication and validation. Therefore, it is recommended that both leverage cryptographically verifiable credentials. The use of blockchain or some other similar means of verification (e.g. cannot be counterfeited, provides traceability and attribution) is strongly recommended.
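The tamper-evidence property recommended above can be sketched in a few lines. This is a minimal illustration: an HMAC stands in for the asymmetric signature a real verifiable credential would use, and the issuer key, claim names, and IDs are all hypothetical.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for an issuer's private key

def issue_credential(org_id: str, pol_human: bool) -> dict:
    """Issue a credential whose claims are bound to a signature."""
    claims = {"org_id": org_id, "pol": "Human" if pol_human else "Bot"}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Recompute the signature; any change to the claims breaks it."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected)

cred = issue_credential("ORG-12345", pol_human=True)
print(verify_credential(cred))       # True

cred["claims"]["pol"] = "Bot"        # tampering with the PoL claim...
print(verify_credential(cred))       # False: counterfeit is detected
```

The point of the sketch is the property itself: a verifier can detect any alteration of the Organizational ID or PoL claim without trusting the presenter.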

New Digital Identities used in Conjunction with KYC

It is also strongly recommended that the Organizational ID and PoL ID be used in conjunction with Know Your Customer (KYC) onboarding and monitoring best practices.

For example, a business may be assigned an Organizational ID upon becoming a customer of a communications service provider. This ID would be associated with all telephone numbers that are used at any given point in time, providing a means of KYC attribution and thus accountability to the business itself for compliance with applicable regulations and laws.

For the PoL ID, organizations would similarly be required to register telephone numbers as being associated with "Human"=1 or "Bot"=0. As business needs and/or telephone numbers change, administrative updates may be required to change 1-to-0, or vice versa, from time to time.
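The per-number registration and administrative updates described above ("Human"=1, "Bot"=0) can be sketched as a simple registry. The registry shape and function names are hypothetical.

```python
# Hypothetical PoL registry mapping telephone numbers to 1 (Human) or 0 (Bot).
pol_registry: dict[str, int] = {}

def register_number(number: str, is_human: bool) -> None:
    """Register a number's PoL claim at onboarding."""
    pol_registry[number] = 1 if is_human else 0

def update_pol(number: str, is_human: bool) -> None:
    """Administrative update when a number's usage changes (1-to-0 or 0-to-1)."""
    if number not in pol_registry:
        raise KeyError(f"{number} is not registered")
    pol_registry[number] = 1 if is_human else 0

register_number("+14045551000", is_human=True)   # live-agent line
register_number("+14045551001", is_human=False)  # outbound bot line

update_pol("+14045551001", is_human=True)        # repurposed for live agents
print(pol_registry["+14045551001"])              # 1
```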

From a KYC monitoring perspective, observations will reveal whether PoL claims have been made, and if so, whether the claim is valid (e.g. did the organization claim "Human" but it was actually found to be "Bot"). Organizations found to be making false claims regarding PoL could be treated in a manner similar to those that make other false claims that pertain to KYC as well as related terms of service, and compliance with applicable regulations and laws.
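The monitoring check above (claimed "Human" but observed "Bot") reduces to comparing claims against observations. How observations are produced (e.g. call analytics detecting synthetic voice) is out of scope; the data shapes here are hypothetical.

```python
def pol_violations(claims: dict, observations: dict) -> list:
    """Return numbers whose observed classification contradicts the PoL claim."""
    return [number for number, claimed in claims.items()
            if number in observations and observations[number] != claimed]

claims = {"+14045551000": "Human", "+14045551001": "Bot"}
observed = {"+14045551000": "Bot"}   # analytics flagged a synthetic voice

print(pol_violations(claims, observed))  # ['+14045551000']
```

Numbers returned by such a check would then be handled under the same terms-of-service and compliance processes as other false KYC claims.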

New Digital Identities and UI/UX

While there is arguably a need to be able to discern bot vs. human, the industry also needs UI/UX improvements to convey to consumers when an incoming communications attempt will involve interactions with a bot or a human. 

The specific means of conveyance could be as simple as illuminating either a (to-be-developed) "bot" icon or a "human" icon on a smartphone screen upon receiving a call or text. The "human" icon would give the called party confidence that a human being will be on the other end if they answer the call.

If the "Bot" icon is illuminated, an additional (to be developed) indication on the phone screen (such as a Blue Checkmark) could demonstrate that the calling/texting party is a known, registered organization. This would provide an improved level of consumer confidence, even if the initiating entity uses synthetic voice.
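The indicator logic just described can be sketched as a small decision function. Icon names and the "registered organization" flag are hypothetical placeholders for UI elements yet to be developed.

```python
def call_screen_indicators(pol_is_human: bool, org_registered: bool) -> list:
    """Choose which icons to illuminate for an incoming call or text."""
    icons = ["human" if pol_is_human else "bot"]
    # Per the text: when the bot icon shows, a blue checkmark can signal
    # a known, registered organization behind the synthetic voice.
    if not pol_is_human and org_registered:
        icons.append("blue_checkmark")
    return icons

print(call_screen_indicators(True, False))   # ['human']
print(call_screen_indicators(False, True))   # ['bot', 'blue_checkmark']
print(call_screen_indicators(False, False))  # ['bot']
```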

About the Author

In his current role, Gerry Christensen is responsible for regulatory compliance as an internal advisor to Caller ID Reputation® and its customers as well as externally in terms of policy-making, industry solutions and standards. In this capacity, Gerry relies on his knowledge of regulations regarding B2C communications engagement. This includes the Truth in Caller ID Act, the Telephone Consumer Protection Act of 1991, state "mini-TCPA" laws and statutes governing consumer contact, various Federal Communications Commission rules, and the Federal Trade Commission's Telemarketing Sales Rule (FTC TSR).

Christensen coined the term, "Bad Actor's Dilemma", which conveys the notion that unlawful callers often (1) don't self-identify and/or (2) commit brand impersonation (explicit or implied), when calling consumers. These rules are addressed explicitly in the FTC TSR (see 310.3 and 310.4) and implicitly in the Truth in Caller ID Act. Christensen has expertise in VoIP, messaging and other IP-based communications.

Gerry is also an expert in solutions necessary to identify unwanted robocalls as well as enabling wanted business calls. This includes authentication, organizational identity, and use of various important data resources such as the DNO, DNC and RND.

Gerry is also an expert in technologies and solutions to facilitate accurate and consistent communications identity. This includes authentication and validation methods such as STIR/SHAKEN as well as various non-standard techniques. His expertise also includes non-network/telephone number methods such as cryptographically identifiable means of verifying organizational identity. In total, Christensen's knowledge and skills make him uniquely qualified as an industry expert in establishing a trust framework for supporting wanted business communications.

Gerry Christensen

Head of Caller ID Reputation® Partnerships and Expert in Communications Identity and Trust


The Federal Communications Commission said Chairwoman Jessica Rosenworcel is seeking public input on how to define AI-generated calls, force those using AI-generated calls to disclose the practice, and "support" technologies that notify consumers when they are receiving unlawful AI robocalls. See "FCC Chairwoman Proposes First-of-Their-Kind AI-Generated Robocall Rules," a proposal that would seek comment on requiring callers to disclose if they use AI in robocalls and on protecting the communications accessibility benefits of AI. FCC release here: https://docs.fcc.gov/public/attachments/DOC-403990A1.pdf


Excerpts from this post were provided as comments to the Federal Trade Commission as they "Seek Comment for Trade Regulation Rule on Impersonation of Government and Businesses" and may be found here: https://www.regulations.gov/comment/FTC-2023-0030-0045
