Daniel Massey

Longmont, Colorado, United States
2K followers · 500+ connections

About

I am the program lead for the Operate Through portion of the DoD's 5G to NextG…

Experience & Education

  • United States Department of Defense

Volunteer Experience

  • Silicon Flatirons Center at the University of Colorado

    Advisory Board Member

    Silicon Flatirons Center at the University of Colorado

    – Present · 6 years 1 month

    Science and Technology

  • St. Vrain Valley Schools

    Computer Science and Engineering Advisory Board

    St. Vrain Valley Schools

    – Present · 6 years 7 months

    Education

  • Adams 12 Five Star Schools

    Computer Science Advisory Board Member

    Adams 12 Five Star Schools

    – Present · 6 years

    Education

  • St. Vrain Valley Schools

    Coach and Co-Founder of CyberPatriot Teams

    St. Vrain Valley Schools

    – Present · 8 years

    Education

    Coach and co-founder of the Altona Middle School CyberPatriot program. Coach for Silver Creek High School teams. As described on the CyberPatriot National Youth Cyber Defense Competition website, teams of middle and high school students are given the role of newly hired IT professionals tasked with managing the network of a small company. In the rounds of competition, teams are given a set of virtual images that represent operating systems and are tasked with finding cybersecurity vulnerabilities within the images and hardening the system while maintaining critical services. Teams compete for the top placement within their state and region. In our first season, Altona Middle School had ten teams and seven advanced to the semi-finals.

  • Greenheart International

    Host Family For Foreign Exchange Student

    Greenheart International

    – Present · 6 years

    Arts and Culture

Publications

  • Pragmatic Router FIB Caching

    Networking 2015

    Several recent studies have shown that router FIB caching offers excellent hit rates with cache sizes that are an order of magnitude smaller than the original forwarding table. However, hit rate alone is not sufficient: other performance metrics, such as memory accesses, robustness to cache attacks, and queuing delays from cache misses, should be considered before declaring FIB caching viable.
    In this paper, we tackle several pragmatic questions about FIB caching. We characterize cache performance in terms of memory accesses and delay due to cache misses. We study cache robustness to pollution attacks and show that an attacker must sustain packet rates higher than the link capacity to evict the most popular prefixes. We show that caching remained robust even during a recent flare of NTP attacks. We carry out a longitudinal study of cache hit rates over four years and show the hit rate is unchanged over that duration. We characterize cache misses to determine which services are impacted by FIB caching. We conclude that FIB caching is viable by several metrics, not just impressive hit rates.
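
    A toy sketch of the intuition (not the paper's methodology or data): an LRU cache over destination prefixes, driven by synthetic heavy-tailed traffic, shows why a cache far smaller than the full forwarding table can still achieve a high hit rate.

    import random
    from collections import OrderedDict

    def simulate_fib_cache(cache_size, packets=200_000, skew=1.2):
        """Toy LRU prefix cache driven by Zipf-like (heavy-tailed) traffic."""
        cache = OrderedDict()
        hits = 0
        for _ in range(packets):
            # Heavy-tailed popularity: low-numbered prefixes dominate traffic.
            prefix = int(random.paretovariate(skew))
            if prefix in cache:
                hits += 1
                cache.move_to_end(prefix)        # refresh recency on a hit
            else:
                cache[prefix] = True             # miss: install the prefix
                if len(cache) > cache_size:
                    cache.popitem(last=False)    # evict least recently used
        return hits / packets

    if __name__ == "__main__":
        for size in (100, 1_000, 10_000):
            print(f"cache size {size:>6}: hit rate {simulate_fib_cache(size):.3f}")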

  • Verifying Keys through Publicity and Communities of Trust: Quantifying Off-Axis Corroboration

    IEEE Transactions on Parallel and Distributed Systems (TPDS), vol. 25, no. 2, pp. 283-291

    The DNS Security Extensions (DNSSEC) arguably make DNS the first core Internet system to be protected using public key cryptography. The success of DNSSEC not only protects the DNS, but has generated interest in using this secured global database for new services such as those proposed by the IETF DANE working group. However, continued success is only possible if several important operational issues can be addressed. For example, .gov and .arpa have already suffered misconfigurations where DNS continued to function properly, but DNSSEC failed (thus, orphaning their entire subtrees in DNSSEC). Internet-scale verification systems must tolerate this type of chaos, but what kind of verification can one derive for systems with dynamism like this? In this paper, we propose to achieve robust verification with a new theoretical model, called Public Data, which treats operational deployments as Communities of Trust (CoTs) and makes them the verification substrate. Using a realization of the above idea, called Vantages, we quantitatively show that using a reasonable DNSSEC deployment model and a typical choice of a CoT, an adversary would need to be able to have visibility into and perform on-path Man-in-the-Middle (MitM) attacks on arbitrary traffic into and out of up to 90 percent of all the Autonomous Systems (ASes) in the Internet before having even a 10 percent chance of spoofing a DNSKEY. Further, our limited deployment of Vantages has outperformed the verifiability of DNSSEC and has properly validated its data up to 99.5 percent of the time.
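
    A hypothetical back-of-the-envelope model, not the paper's analysis: if a Community of Trust corroborates a DNSKEY from k vantage points whose paths an adversary can attack independently with probability f, a successful spoof must fool all k vantages at once, so the spoofing probability falls off as f^k.

    # f: fraction of paths the adversary can man-in-the-middle (assumption:
    # vantage points are compromised independently, which real topologies
    # only approximate).
    def spoof_probability(f: float, k: int) -> float:
        return f ** k

    for f in (0.5, 0.9):
        for k in (1, 5, 10):
            print(f"on-path fraction {f}, vantages {k}: "
                  f"spoof probability {spoof_probability(f, k):.4f}")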

  • Delivering Diverse BGP Data in Real-Time and through Multi-Format Archiving

    IEEE

    The Internet relies on BGP for global routing, but there are many open questions related to BGP. Some researchers rely on BGP data to better understand routing behavior and develop new routing algorithms. Other researchers use BGP data to investigate issues that range from IP allocations to regional Internet connectivity in the face of political turmoil. And of course BGP data is used in routing security, both to detect issues and evaluate solutions, and even to issue warnings or block invalid routes. All of these research challenges require access to a reliable set of BGP data from geographically diverse locations.

    This paper presents the BGPmon approach to collecting and distributing BGP data at global scale. BGPmon collects data from a diverse set of peers and distributes the data in real time to any interested client. BGPmon fulfills three main design objectives: it provides a scalable data collection solution, maintains data integrity despite traffic surges and slow client processing, and provides a suite of associated tools to ease the overhead of developing BGP data processing tools. We demonstrate the effectiveness of the framework with a brief characterization of the data collected from direct peers.

  • Behavior of DNS’ Top Talkers, a .com / .net View

    PAM 2012: Passive and Active Measurement Conference

    This paper provides the first systematic study of DNS data taken from one of the 13 servers for the .com / .net registry. DNS’ generic Top Level Domains (gTLDs) such as .com and .net serve resolvers from throughout the Internet and respond to billions of DNS queries every day. This study uses gTLD data to characterize the DNS resolver population and profile DNS query types. The results show that a small and relatively stable set of resolvers (i.e., the top talkers) constitutes 90% of the overall traffic. The results provide a basis for understanding this critical Internet service, insights on typical resolver behaviors and the use of IPv6 in DNS, and a foundation for further study of DNS behavior.
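
    A sketch with synthetic data (the registry traces studied in the paper are not public): when per-resolver query volume is heavy-tailed, a small fraction of resolvers quickly accounts for 90% of all queries.

    import random

    # Synthetic queries-per-resolver counts with a heavy-tailed distribution.
    counts = sorted((int(random.paretovariate(1.1)) for _ in range(100_000)),
                    reverse=True)
    total = sum(counts)

    running, top_talkers = 0, 0
    for q in counts:
        running += q
        top_talkers += 1
        if running >= 0.9 * total:
            break

    print(f"{top_talkers} of {len(counts)} resolvers "
          f"({100 * top_talkers / len(counts):.1f}%) send 90% of queries")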

  • The Great IPv4 Land Grab: Resource Certification for the IPv4 Grey Market

    Tenth ACM Workshop on SIGCOMM Hot Topics in Networks (HotNets-X)

    The era of free IPv4 address allocations has ended and the grey market in IPv4 addresses is now emerging. This paper argues that one cannot and should not try to regulate who sells addresses and at what price, but one does need to provide some proof of ownership in the form of resource certification. In this paper we identify key requirements of resource certification, gained from both theoretical analysis and operational history. We further argue these requirements can be achieved by making use of the existing reverse DNS hierarchy, enhanced with DNS Security. Our analysis compares reverse DNS entries and BGP routing tables and shows this is both feasible and achievable today; an essential requirement as the grey market is also emerging today and solutions are needed now, not years in the future.

  • Cross-Modal Vulnerabilities: An Illusive form of Hijacking

    Verisign Labs Technical Report #1140010

    Content, connection, and other types of hijacking are a common occurrence in today’s Internet. One can broadly classify various types of hijacks as being locally scoped to an administrative domain, or pushed externally, where one administrative domain (intentionally or unintentionally) hijacks users in other domains. Current work in identifying and reacting to various types of Internet hijacking has focused on the network control plane and has not included cross-modal hijacks that involve both the control plane and the data plane of the Internet. In this work we introduce the idea that cross-modal threats exist in the Internet and form a highly illusive, but serious, threat. Further, we detail an actual instance of Internet-scale cross-modal hijacking whose behavior depends on both the network control plane and the data plane, such as the order in which users request connections. Based on anecdotal evidence gleaned from several websites, it appears that this hijack existed for many months (and possibly years) before its recent detection.

  • Deploying Cryptography in Internet-Scale Systems: A Case Study on DNSSEC

    Transactions on Dependable and Secure Computing, Volume 7, Issue 2

    The DNS Security Extensions (DNSSEC) are among the first attempts to deploy cryptographic protections in an Internet-scale operational system. DNSSEC applies well-established public key cryptography to ensure data integrity and origin authenticity in the DNS system. While the cryptographic design of DNSSEC is sound and seemingly simple, its development has taken the IETF over a decade and several protocol revisions, and even today its deployment is still in the early stages of rollout. In this paper, we provide the first systematic examination of the design, deployment, and operational challenges encountered by DNSSEC over the years. Our study reveals a fundamental gap between cryptographic designs and operational Internet systems. To be deployed in the global Internet, a cryptographic protocol must possess several critical properties including scalability, flexibility, incremental deployability, and the ability to function in the face of imperfect operations. We believe that the insights gained from this study can offer valuable inputs to future cryptographic designs for other Internet-scale systems.

  • Deploying and Monitoring DNS Security (DNSSEC)

    Annual Computer Security Applications Conference (ACSAC)

    SecSpider is a DNSSEC monitoring system that helps identify operational errors in the DNSSEC deployment and discover unforeseen obstacles. It collects, verifies, and publishes the DNSSEC keys for DNSSEC-enabled zones, which enables operators of both authoritative zones and recursive resolvers to deploy DNSSEC immediately and benefit from its cryptographic protections. In this paper we present the design and implementation of SecSpider, along with several general lessons that stem from it.

  • Managing Trusted Keys in Internet-Scale Systems

    The Workshop on Trust and Security in the Future Internet (FIST)

  • Quantifying the Operational Status of the DNSSEC Deployment

    Proceedings of the 6th ACM/USENIX Internet Measurement Conference (IMC '08)

    This paper examines the deployment of the DNS Security Extensions (DNSSEC), which add cryptographic protection to DNS, one of the core components in the Internet infrastructure. We analyze the data collected from the initial DNSSEC deployment which started over 2 years ago, and identify three critical metrics to gauge the deployment: availability, verifiability, and validity. Our results provide the first comprehensive look at DNSSEC’s deployment and reveal a number of challenges that were not anticipated in the design but have become evident in the deployment. First, obstacles such as middle-boxes (firewalls, NATs, etc.) that exist in today’s Internet infrastructure have proven to be problematic and have resulted in unforeseen availability problems. Second, the public-key delegation system of DNSSEC has not evolved as it was hoped and it currently leaves over 97% of DNSSEC zones isolated and unverifiable, unless some external key authentication mechanism is added. Furthermore, our results show that cryptographic verification is not equivalent to validation; a piece of verified data can still contain the wrong value. Finally, our results demonstrate the essential role of monitoring and measurement in the DNSSEC deployment. We believe that the observations and lessons from the DNSSEC deployment can provide insights into measuring future Internet-scale cryptographic systems.

  • Limiting Replay Vulnerabilities in DNSSEC

    IEEE ICNP Workshop on Secure Network Protocols (NPSec)

    The DNS Security Extensions (DNSSEC) added public key cryptography to the DNS, but problems remain in selecting signature lifetimes. A zone’s master server distributes signatures to secondary servers. Signature lifetimes should be long so that a secondary server can still operate if the master fails. However, DNSSEC lacks revocation: signed data can be replayed until the signature expires, and thus zones should select a short signature lifetime. Operators must choose between reduced robustness and long replay vulnerability windows.

    This paper introduces a revised DNSSEC signature that allows secondary servers to operate even if the master has failed while simultaneously limiting replay windows to twice the TTL. Each secondary server constructs a hash chain and relays the hash chain anchor to the master server. The signature produced by the master server ensures the authenticity of the hash anchor and the DNS data. A secondary server includes both the signature and a hash chain value used by resolvers to limit signature replay. Our implementation shows the added costs are minimal compared to DNSSEC and ensures robustness against long-term master server failures. At the same time, we limit replay to twice the record TTL value.
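
    A minimal sketch of the hash-chain mechanism described above (names and parameters are illustrative, not the paper's wire format): the secondary derives an anchor by repeated hashing, the master signs only the anchor alongside the DNS data, and revealing the value for time slot i lets a resolver verify freshness by hashing forward to the signed anchor.

    import hashlib, os

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def build_chain(length: int) -> list[bytes]:
        chain = [os.urandom(32)]           # v_n: secret seed kept by the secondary
        for _ in range(length):
            chain.append(H(chain[-1]))     # v_{i-1} = H(v_i)
        chain.reverse()                    # chain[0] is the anchor v_0
        return chain

    def verify(value: bytes, slot: int, anchor: bytes) -> bool:
        # Hash the revealed value forward to slot 0 and compare to the anchor.
        for _ in range(slot):
            value = H(value)
        return value == anchor

    chain = build_chain(1000)              # e.g. one value per TTL interval
    anchor = chain[0]                      # the master signs this (plus DNS data)
    assert verify(chain[7], 7, anchor)     # resolver accepts the slot-7 reveal
    assert not verify(os.urandom(32), 7, anchor)   # a forged value fails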

  • Observations from the DNSSEC Deployment

    IEEE ICNP Workshop on Secure Network Protocols (NPSec)

    DNS Security Extensions have been developed to add cryptographic protection to the Internet name resolution service. In this paper we report the results from our monitoring of early DNSSEC deployment trials and the lessons learned.

  • Zone State Revocation for DNSSEC

    Workshop on Large Scale Attack Defenses (LSAD)

    DNS Security Extensions (DNSSEC) are designed to add cryptographic protection to the Internet’s name resolution service. However, the current design lacks a key revocation mechanism. In this paper we present Zone State Revocation (ZSR), a lightweight and backward-compatible enhancement to DNSSEC. ZSR enables zones to explicitly revoke keys using self-certifying certificates, and enables DNS name-servers to opportunistically inform distributed caching resolvers of key revocations via lightweight control messages. Further, ZSR allows resolvers to distinguish between legitimate key changes and potential attacks when authentication chains are broken. ZSR is designed to work well with global-scale DNS operations, where millions of caches may need to be informed of a revocation, and where time is critical.
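
    An illustrative sketch of the self-certifying idea (not ZSR's actual record format): the key signs its own revocation statement, so any resolver that already holds the public key can verify the revocation without contacting a third party. Assumes the Python 'cryptography' package; ZSR itself predates Ed25519 and would use DNSSEC's own algorithms.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    zone_key = Ed25519PrivateKey.generate()
    public_key = zone_key.public_key()          # what resolvers already cache

    # The revocation statement is signed by the very key being revoked.
    statement = b"REVOKE example.com. key-tag=12345"
    revocation_sig = zone_key.sign(statement)

    try:
        public_key.verify(revocation_sig, statement)
        print("revocation verified: stop trusting this key")
    except InvalidSignature:
        print("invalid revocation: ignore")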

  • Security Through Publicity

    USENIX First Workshop on Hot Topics in Security (HotSec '06)

    Current large-scale authentication and non-repudiation systems offer various security measures, but do not meet the needs of today’s Internet-scale applications. Though several designs exist, there have been no significant deployments of Internet-scale security infrastructures. In this paper we propose a novel concept called the public-space that makes complete information of digital entities’ actions publicly available to every user. It is a structured framework that maintains a large number of entities, their actions, relationships, and histories. Posting such information in public does not endorse the information’s correctness, but it does provide users with a quantifiable set of information that enables them to detect faults and make informed security decisions. Combined with traditional cryptographic techniques, the public-space system can support the intrinsic heterogeneity of user security requirements in Internet-scale infrastructures and applications.

  • FRTR: a scalable mechanism for global routing table consistency

    -


Projects

  • BGPmon

    Real-time BGP routing information is an essential resource for both the research and operations communities in Internet routing. To collect large volumes of data in real time, the BGP Monitoring System (BGPmon) is designed to monitor BGP updates and routing tables from BGP routers. It uses a modular architecture to scalably monitor many BGP routers through distributed deployment while presenting a consolidated interface to end users. BGPmon uses the Extensible Markup Language (XML) for BGP data. This format can accurately record BGP data without information loss and is extensible to accommodate new features in BGP updates.
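
    A hypothetical example of carrying one BGP update in XML and parsing it with the Python standard library; the element names below are illustrative, not BGPmon's actual schema.

    import xml.etree.ElementTree as ET

    update_xml = """
    <BGP_UPDATE timestamp="1325376000" peer="192.0.2.1" peer_as="65001">
      <PATH_ATTRIBUTES>
        <AS_PATH>65001 65002 65003</AS_PATH>
        <NEXT_HOP>192.0.2.1</NEXT_HOP>
      </PATH_ATTRIBUTES>
      <NLRI>
        <PREFIX>203.0.113.0/24</PREFIX>
      </NLRI>
    </BGP_UPDATE>
    """

    root = ET.fromstring(update_xml)
    as_path = root.findtext("PATH_ATTRIBUTES/AS_PATH")
    prefixes = [p.text for p in root.iter("PREFIX")]
    print(f"peer AS {root.get('peer_as')} announced {prefixes} via AS path {as_path}")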

