Richard Searle

Greater Oxford Area


Volunteer Experience

  • Chair of the End-User Advisory Council

    Confidential Computing Consortium

    2 years 8 months

    Science and Technology

    Engaging with the end-user community for Confidential Computing to support the objectives of the Confidential Computing Consortium project of the Linux Foundation.

  • General Members' Representative to the Governing Board

    Confidential Computing Consortium

    4 years

    Science and Technology

    Representing the interests of General Member organizations of the Confidential Computing Consortium of the Linux Foundation to the Governing Board, since the inauguration of this industry project in 2019.

  • Contributor: Generative AI Public Working Group (NIST GAI-PWG)

    National Institute of Standards and Technology (NIST)

    9 months

    Science and Technology

    The Public Working Group on Generative AI will help address the opportunities and challenges associated with AI that can generate content, such as code, text, images, videos and music. The public working group will also help NIST develop key guidance to help organizations address the special risks associated with generative AI technologies.

  • NIST AI Safety Institute Consortium (AISIC): Principal Investigator, Fortanix, Inc.

    National Institute of Standards and Technology (NIST)

    Present (6 months)

    Science and Technology

    In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST is establishing the U.S. Artificial Intelligence Safety Institute (USAISI) and a related Consortium (“Consortium”). The Consortium will help equip and empower the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.

Publications

  • Protecting Patient Confidentiality in the Internet of Medical Things through Confidential Computing

    Journal of Data Protection & Privacy

    The Internet of Medical Things (IoMT) provides a network of distributed devices that generate a wealth of data for clinicians and medical researchers. The global COVID-19 pandemic has demonstrated the benefits that IoMT data has brought about for remote medical services and clinical diagnosis. While the security of remote IoMT devices is an established area of concern, enforcing the privacy of the data that they both generate and process requires a data-first approach to network design. How can a distributed IoMT network simultaneously ensure the integrity of distributed devices and maintain the privacy and confidentiality of protected healthcare information (PHI)? In this positioning paper, we outline the issues that must be addressed by manufacturers of IoMT devices and those responsible for the system architectures that process gathered healthcare and contextual data. We consider how the nascent technology of confidential computing addresses the dual requirements of systemic security and data confidentiality, and we provide a conceptual architecture based on current developments within the field. Our analysis of the practical considerations associated with IoMT deployment reveals a fundamental requirement for a data-first approach to security that is governed by patient consent and zero-trust principles.

    Other authors
  • Secure federated machine learning with flexible topology and distributed privacy controls

    Federated machine learning (FML) for training of deep neural network models is a useful technique where insufficient sample data is available at a local level. In applications where data privacy must be preserved, such as in health care, financial services, and defense contexts, it is important that there is no exchange of data between constituents of the distributed network. It may also be desirable to protect the integrity and secrecy of the algorithms and trained models deployed within the network. Demonstrating the privacy-enhancing technology of Confidential Computing, we present a novel solution for FML implementation that supports extensible graph-based network topology configuration under federated, distributed, or centralized training regimes. The presented solution provides for policy-based control of model training and automated monitoring of model convergence and network performance. Owners of private datasets can retain independent control of their data through local encryption, while global data anonymization policies can be applied over the sample data. Full auditability of the model training process is provided to distributed data owners and the model owner using hardware-based cryptographic secrets that underpin zero-trust implementation of the training network. Operation of the proposed secure FML solution is discussed in the context of model training over distributed radiological image data for weakly-supervised learning and classification of common thorax diseases. Cross-domain adaptation of the proposed solution and integrated model integrity protection against adversarial attacks reflects a breakthrough technology for data science teams working with distributed datasets.

    Other authors
  • Secure Federated Machine Learning for Distributed Spectrum Sensing in Communication Networks

    SPIE Digital Library

    Federated machine learning (FML) has proved a useful technique for training of artificial intelligence and machine learning (AI/ML) models, using data that is distributed among different constituents of a network which may be geographically dispersed. Typically, the data privacy of individual constituents should be preserved, and it may also be desirable to protect the integrity and secrecy of the algorithms and trained models deployed within the network. Demonstrating the privacy-enhancing technology of Confidential Computing, we present the results obtained using a novel solution for FML implementation that supports model training within a distributed network of data providers. Based upon recent research on the use of FML for distributed spectrum sensing in communication networks, we demonstrate the application of the proposed solution for distributed model training within a simulated sensor network of arbitrary topology. The presented solution provides for graph-based network configuration and model convergence within decentralized network applications. Cross-domain adaptation of the proposed solution and characteristics of confidential computing that support a zero-trust architecture are discussed, along with the integrated model integrity protection provided by attestation of trusted execution environments (TEEs). We conclude by looking ahead to the application of our solution to model training within distributed communications networks and sensor arrays, characterized by devices with limited electrical and computational power. We consider the use of physical unclonable functions (PUFs) to encrypt raw data before processing within a layered hierarchy secured with Confidential Computing technology.

    Other authors
  • GPT-4 Provides Improved Answers While Posing New Questions

    Dark Reading

    As is typical with emerging technologies, both innovators and regulators struggle to keep pace with developments in generative AI, much less with the rules that should govern its use.

  • Preparing for the Effects of Quantum-Centric Supercomputing

    Dark Reading

    While it has been a perennial forecast that efficient universal quantum computers are “a decade away,” that prospect now seems a legitimate possibility. Organizations need to get ready now.

  • Secure Implementation of Artificial Intelligence Applications for Anti-Money Laundering using Confidential Computing

    IEEE

    Paper No. S09210 of The 6th International Workshop on Big Data Analytic for Cyber Crime Investigation and Prevention, Osaka, Japan.

    Other authors
    • Prabhanjan Gururaj
    • Anubhav Gupta
    • Kiran Kannur
  • What is Confidential Computing?

    The Register

    Enterprises need the highest levels of data privacy to innovate, build, and securely operate their applications. Confidential Computing helps you reach the highest level of privacy for your most sensitive workloads by encrypting data-in-use, allowing you to benefit from added security and run multi-party computation without giving access to your data.

    Watch this webinar to learn:

    - The foundations of confidential computing and why it is a key ingredient for any cloud infrastructure.
    - How organizations across the world and across industries are already leveraging confidential computing for added security and to unlock new opportunities and innovations.
    - How to get started on your confidential computing journey with Azure today – learn how you can take your existing virtual machines, containers, and other application platform capabilities, and make them confidential with the latest innovations brought by Azure confidential computing.

    Other authors
    • Paul O'Neill
    • Ivar Wiersma
  • Establishing security and trust for object detection and classification with confidential AI

    SPIE Digital Library

    In the context of multi-domain operations (MDO), artificial intelligence (AI) systems support human operators by processing large volumes of electro-optical/infrared (EOIR) sensor data. In this paper, we demonstrate how confidential computing technology, incorporating a hardware-based root of trust, can provide systemic identity verification through mutual attestation, secure the integrity of AI models, and preserve the confidentiality of processed sensor data. Using the example of aircraft detection and classification, we describe how confidential computing can defend against adversarial machine learning (AML) attacks by providing intrinsic security at the tactical edge and within the distributed applications environment that characterizes MDO.

    Other authors
    • Prabhanjan Gururaj
  • Me, My Digital Self, and I: Why Identity Is the Foundation of a Decentralized Future

    Dark Reading

    A decentralized future is a grand ideal, but secure management of private keys is the prerequisite to ensure the integrity of decentralized applications and services.

  • Outlining Risks to the World's Vital Cyber-Physical Systems

    Dark Reading

    The key to protecting these systems is not only to ensure the control environment is secure and protected but also to deploy emerging technologies such as confidential computing.

  • Why Trust Matters for the National Artificial Intelligence Research Resource Task Force

    Dark Reading

    As the National Artificial Intelligence Research Resource Task Force sets about its work preparing recommendations for the creation of an AI research resource in the United States, fundamental problems of trust must be addressed.


Patents

  • Confidential Computing Workflows

    Issued US 11481515
