Publications

See also Google Scholar and DBLP.

2024

  • R. Uetz, M. Herzog, L. Hackländer, S. Schwarz, and M. Henze, “You Cannot Escape Me: Detecting Evasions of SIEM Rules in Enterprise Networks,” in Proceedings of the 33rd USENIX Security Symposium (USENIX Sec), 2024.
    [BibTeX] [Abstract] [PDF]

    Cyberattacks have grown into a major risk for organizations, with common consequences being data theft, sabotage, and extortion. Since preventive measures do not suffice to repel attacks, timely detection of successful intruders is crucial to stop them from reaching their final goals. For this purpose, many organizations utilize Security Information and Event Management (SIEM) systems to centrally collect security-related events and scan them for attack indicators using expert-written detection rules. However, as we show by analyzing a set of widespread SIEM detection rules, adversaries can evade almost half of them easily, allowing them to perform common malicious actions within an enterprise network without being detected. To remedy these critical detection blind spots, we propose the idea of adaptive misuse detection, which utilizes machine learning to compare incoming events to SIEM rules on the one hand and known-benign events on the other hand to discover successful evasions. Based on this idea, we present AMIDES, an open-source proof-of-concept adaptive misuse detection system. Using four weeks of SIEM events from a large enterprise network and more than 500 hand-crafted evasions, we show that AMIDES successfully detects a majority of these evasions without any false alerts. In addition, AMIDES eases alert analysis by assessing which rules were evaded. Its computational efficiency qualifies AMIDES for real-world operation and hence enables organizations to significantly reduce detection blind spots with moderate effort.

    @inproceedings{UHH+24,
    author = {Uetz, Rafael and Herzog, Marco and Hackl{\"a}nder, Louis and Schwarz, Simon and Henze, Martin},
    title = {{You Cannot Escape Me: Detecting Evasions of SIEM Rules in Enterprise Networks}},
    booktitle = {Proceedings of the 33rd USENIX Security Symposium (USENIX Sec)},
    year = {2024},
    month = {08},
    abstract = {Cyberattacks have grown into a major risk for organizations, with common consequences being data theft, sabotage, and extortion. Since preventive measures do not suffice to repel attacks, timely detection of successful intruders is crucial to stop them from reaching their final goals. For this purpose, many organizations utilize Security Information and Event Management (SIEM) systems to centrally collect security-related events and scan them for attack indicators using expert-written detection rules. However, as we show by analyzing a set of widespread SIEM detection rules, adversaries can evade almost half of them easily, allowing them to perform common malicious actions within an enterprise network without being detected. To remedy these critical detection blind spots, we propose the idea of adaptive misuse detection, which utilizes machine learning to compare incoming events to SIEM rules on the one hand and known-benign events on the other hand to discover successful evasions. Based on this idea, we present AMIDES, an open-source proof-of-concept adaptive misuse detection system. Using four weeks of SIEM events from a large enterprise network and more than 500 hand-crafted evasions, we show that AMIDES successfully detects a majority of these evasions without any false alerts. In addition, AMIDES eases alert analysis by assessing which rules were evaded. Its computational efficiency qualifies AMIDES for real-world operation and hence enables organizations to significantly reduce detection blind spots with moderate effort.}
    }

  • P. Bönninghausen, R. Uetz, and M. Henze, “Introducing a Comprehensive, Continuous, and Collaborative Survey of Intrusion Detection Datasets,” in Proceedings of the 17th Cyber Security Experimentation and Test Workshop (CSET), 2024.
    [BibTeX] [Abstract] [DOI]

    Researchers in the highly active field of intrusion detection largely rely on public datasets for their experimental evaluations. However, the large number of existing datasets, the discovery of previously unknown flaws therein, and the frequent publication of new datasets make it hard to select suitable options and sufficiently understand their respective limitations. Hence, there is a great risk of drawing invalid conclusions from experimental results with respect to detection performance of novel methods in the real world. While there exist various surveys on intrusion detection datasets, they have deficiencies in providing researchers with a profound decision basis since they lack comprehensiveness, actionable details, and up-to-dateness. In this paper, we present Comidds, an ongoing effort to comprehensively survey intrusion detection datasets with an unprecedented level of detail, implemented as a website backed by a public GitHub repository. Comidds allows researchers to quickly identify suitable datasets depending on their requirements and provides structured and critical information on each dataset, including actual data samples and links to relevant publications. Comidds is freely accessible, regularly updated, and open to contributions.

    @inproceedings{BUH24,
    author = {B{\"o}nninghausen, Philipp and Uetz, Rafael and Henze, Martin},
    title = {{Introducing a Comprehensive, Continuous, and Collaborative Survey of Intrusion Detection Datasets}},
    booktitle = {Proceedings of the 17th Cyber Security Experimentation and Test Workshop (CSET)},
    year = {2024},
    month = {08},
    doi = {10.1145/3675741.3675754},
    abstract = {Researchers in the highly active field of intrusion detection largely rely on public datasets for their experimental evaluations. However, the large number of existing datasets, the discovery of previously unknown flaws therein, and the frequent publication of new datasets make it hard to select suitable options and sufficiently understand their respective limitations. Hence, there is a great risk of drawing invalid conclusions from experimental results with respect to detection performance of novel methods in the real world. While there exist various surveys on intrusion detection datasets, they have deficiencies in providing researchers with a profound decision basis since they lack comprehensiveness, actionable details, and up-to-dateness. In this paper, we present Comidds, an ongoing effort to comprehensively survey intrusion detection datasets with an unprecedented level of detail, implemented as a website backed by a public GitHub repository. Comidds allows researchers to quickly identify suitable datasets depending on their requirements and provides structured and critical information on each dataset, including actual data samples and links to relevant publications. Comidds is freely accessible, regularly updated, and open to contributions.},
    }

  • E. Wagner, D. Heye, M. Serror, I. Kunze, K. Wehrle, and M. Henze, “Madtls: Fine-grained Middlebox-aware End-to-end Security for Industrial Communication,” in Proceedings of the 19th ACM ASIA Conference on Computer and Communications Security (ASIA CCS), 2024.
    [BibTeX] [Abstract] [PDF] [DOI]

    Industrial control systems increasingly rely on middlebox functionality such as intrusion detection or in-network processing. However, traditional end-to-end security protocols interfere with the necessary access to in-flight data. While recent work on middlebox-aware end-to-end security protocols for the traditional Internet promises to address the dilemma between end-to-end security guarantees and middleboxes, the current state-of-the-art lacks critical features for industrial communication. Most importantly, industrial settings require fine-grained access control for middleboxes to truly operate in a least-privilege mode. Likewise, advanced applications even require that middleboxes can inject specific messages (e.g., emergency shutdowns). Meanwhile, industrial scenarios often expose tight latency and bandwidth constraints not found in the traditional Internet. As the current state-of-the-art misses critical features, we propose Middlebox-aware DTLS (Madtls), a middlebox-aware end-to-end security protocol specifically tailored to the needs of industrial networks. Madtls provides bit-level read and write access control of middleboxes to communicated data with minimal bandwidth and processing overhead, even on constrained hardware.

    @inproceedings{WHS+24,
    author = {Wagner, Eric and Heye, David and Serror, Martin and Kunze, Ike and Wehrle, Klaus and Henze, Martin},
    title = {{Madtls: Fine-grained Middlebox-aware End-to-end Security for Industrial Communication}},
    booktitle = {Proceedings of the 19th ACM ASIA Conference on Computer and Communications Security (ASIA CCS)},
    doi = {10.1145/3634737.3637640},
    month = {07},
    year = {2024},
    abstract = {Industrial control systems increasingly rely on middlebox functionality such as intrusion detection or in-network processing. However, traditional end-to-end security protocols interfere with the necessary access to in-flight data. While recent work on middlebox-aware end-to-end security protocols for the traditional Internet promises to address the dilemma between end-to-end security guarantees and middleboxes, the current state-of-the-art lacks critical features for industrial communication. Most importantly, industrial settings require fine-grained access control for middleboxes to truly operate in a least-privilege mode. Likewise, advanced applications even require that middleboxes can inject specific messages (e.g., emergency shutdowns). Meanwhile, industrial scenarios often expose tight latency and bandwidth constraints not found in the traditional Internet. As the current state-of-the-art misses critical features, we propose Middlebox-aware DTLS (Madtls), a middlebox-aware end-to-end security protocol specifically tailored to the needs of industrial networks. Madtls provides bit-level read and write access control of middleboxes to communicated data with minimal bandwidth and processing overhead, even on constrained hardware.},
    }

  • M. Dahlmanns, F. Heidenreich, J. Lohmöller, J. Pennekamp, K. Wehrle, and M. Henze, “Unconsidered Installations: Discovering IoT Deployments in the IPv6 Internet,” in Proceedings of the 2024 IEEE/IFIP Network Operations and Management Symposium (NOMS), 2024.
    [BibTeX] [Abstract] [PDF] [DOI]

    Internet-wide studies provide extremely valuable insight into how operators manage their Internet of Things (IoT) deployments in reality and often reveal grievances, e.g., significant security issues. However, while IoT devices often use IPv6, past studies resorted to comprehensively scanning the IPv4 address space. To fully understand how the IoT and all its services and devices are operated, including IPv6-reachable deployments is essential – although scanning the entire IPv6 address space is infeasible. In this paper, we close this gap and examine how to discover IPv6-reachable IoT deployments. Using three sources of active IPv6 addresses and eleven address generators, we discovered 6658 IoT deployments. We derive that the available address sources are a good starting point for finding IoT deployments. Additionally, we show that using two address generators is sufficient to cover most found deployments. Assessing the security of the deployments, we surprisingly find similar issues as in the IPv4 Internet, although IPv6 deployments might be newer and generally more up-to-date: Only 39 % of deployments have access control in place and only 6.2 % make use of TLS, inviting attackers, e.g., to eavesdrop on sensitive data.

    @inproceedings{DHL+24,
    author = {Dahlmanns, Markus and Heidenreich, Felix and Lohm{\"o}ller, Johannes and Pennekamp, Jan and Wehrle, Klaus and Henze, Martin},
    title = {{Unconsidered Installations: Discovering IoT Deployments in the IPv6 Internet}},
    booktitle = {Proceedings of the 2024 IEEE/IFIP Network Operations and Management Symposium (NOMS)},
    year = {2024},
    month = {05},
    doi = {10.1109/NOMS59830.2024.10574963},
    abstract = {Internet-wide studies provide extremely valuable insight into how operators manage their Internet of Things (IoT) deployments in reality and often reveal grievances, e.g., significant security issues. However, while IoT devices often use IPv6, past studies resorted to comprehensively scanning the IPv4 address space. To fully understand how the IoT and all its services and devices are operated, including IPv6-reachable deployments is essential -- although scanning the entire IPv6 address space is infeasible. In this paper, we close this gap and examine how to discover IPv6-reachable IoT deployments. Using three sources of active IPv6 addresses and eleven address generators, we discovered 6658 IoT deployments. We derive that the available address sources are a good starting point for finding IoT deployments. Additionally, we show that using two address generators is sufficient to cover most found deployments. Assessing the security of the deployments, we surprisingly find similar issues as in the IPv4 Internet, although IPv6 deployments might be newer and generally more up-to-date: Only 39 % of deployments have access control in place and only 6.2 % make use of TLS, inviting attackers, e.g., to eavesdrop on sensitive data.},
    }

  • E. Wagner, M. Serror, K. Wehrle, and M. Henze, “When and How to Aggregate Message Authentication Codes on Lossy Channels?,” in Proceedings of the 22nd Conference on Applied Cryptography and Network Security (ACNS), 2024.
    [BibTeX] [Abstract] [PDF] [DOI]

    Aggregation of message authentication codes (MACs) is a proven and efficient method to preserve valuable bandwidth in resource-constrained environments: Instead of appending a long authentication tag to each message, the integrity protection of multiple messages is aggregated into a single tag. However, while such aggregation saves bandwidth, a single lost message typically means that authentication information for multiple messages cannot be verified anymore. With the significant increase of bandwidth-constrained lossy communication, as applications shift towards wireless channels, it thus becomes paramount to study the impact of packet loss on the diverse MAC aggregation schemes proposed over the past 15 years to assess when and how to aggregate message authentication. Therefore, we empirically study all relevant MAC aggregation schemes in the context of lossy channels, investigating achievable goodput improvements, the resulting verification delays, processing overhead, and resilience to denial-of-service attacks. Our analysis shows the importance of carefully choosing and configuring MAC aggregation, as selecting and correctly parameterizing the right scheme can, e.g., improve goodput by 39 % to 444 %, depending on the scenario. However, since no aggregation scheme performs best in all scenarios, we provide guidelines for network operators to select optimal schemes and parameterizations suiting specific network settings.

    @inproceedings{WSWH24,
    author = {Wagner, Eric and Serror, Martin and Wehrle, Klaus and Henze, Martin},
    title = {{When and How to Aggregate Message Authentication Codes on Lossy Channels?}},
    booktitle = {Proceedings of the 22nd Conference on Applied Cryptography and Network Security (ACNS)},
    year = {2024},
    doi = {10.1007/978-3-031-54773-7_10},
    month = {03},
    abstract = {Aggregation of message authentication codes (MACs) is a proven and efficient method to preserve valuable bandwidth in resource-constrained environments: Instead of appending a long authentication tag to each message, the integrity protection of multiple messages is aggregated into a single tag. However, while such aggregation saves bandwidth, a single lost message typically means that authentication information for multiple messages cannot be verified anymore. With the significant increase of bandwidth-constrained lossy communication, as applications shift towards wireless channels, it thus becomes paramount to study the impact of packet loss on the diverse MAC aggregation schemes proposed over the past 15 years to assess when and how to aggregate message authentication. Therefore, we empirically study all relevant MAC aggregation schemes in the context of lossy channels, investigating achievable goodput improvements, the resulting verification delays, processing overhead, and resilience to denial-of-service attacks. Our analysis shows the importance of carefully choosing and configuring MAC aggregation, as selecting and correctly parameterizing the right scheme can, e.g., improve goodput by 39 % to 444 %, depending on the scenario. However, since no aggregation scheme performs best in all scenarios, we provide guidelines for network operators to select optimal schemes and parameterizations suiting specific network settings.},
    }

  • M. Henze, M. Ortmann, T. Vogt, O. Ugus, K. Hermann, S. Nohr, Z. Lu, S. Michaelides, A. Massonet, and R. H. Schmitt, “Towards Secure 5G Infrastructures for Production Systems,” in Proceedings of the 22nd Conference on Applied Cryptography and Network Security (ACNS) – Poster Session, 2024.
    [BibTeX] [Abstract] [DOI]

    To meet the requirements of modern production, industrial communication increasingly shifts from wired fieldbus to wireless 5G communication. Besides tremendous benefits, this shift introduces severe novel risks, ranging from limited reliability over new security vulnerabilities to a lack of accountability. To address these risks, we present approaches to (i) prevent attacks through authentication and redundant communication, (ii) detect anomalies and jamming, and (iii) respond to detected attacks through device exclusion and accountability measures.

    @inproceedings{HOV+24,
    author = {Henze, Martin and Ortmann, Maximilian and Vogt, Thomas and Ugus, Osman and Hermann, Kai and Nohr, Svenja and Lu, Zeren and Michaelides, Sotiris and Massonet, Angela and Schmitt, Robert H.},
    title = {{Towards Secure 5G Infrastructures for Production Systems}},
    booktitle = {Proceedings of the 22nd Conference on Applied Cryptography and Network Security (ACNS) -- Poster Session},
    month = {03},
    year = {2024},
    doi = {10.1007/978-3-031-61489-7_14},
    abstract = {To meet the requirements of modern production, industrial communication increasingly shifts from wired fieldbus to wireless 5G communication. Besides tremendous benefits, this shift introduces severe novel risks, ranging from limited reliability over new security vulnerabilities to a lack of accountability. To address these risks, we present approaches to (i) prevent attacks through authentication and redundant communication, (ii) detect anomalies and jamming, and (iii) respond to detected attacks through device exclusion and accountability measures.},
    }

  • R. Matzutt, M. Henze, D. Müllmann, and K. Wehrle, “Illicit Blockchain Content: Its Different Shapes, Consequences, and Remedies,” in Blockchains – A Handbook on Fundamentals, Platforms and Applications, Springer, 2024.
    [BibTeX] [Abstract] [DOI]

    Augmenting public blockchains with arbitrary, nonfinancial content fuels novel applications that facilitate the interactions between mutually distrusting parties. However, new risks emerge at the same time when illegal content is added. This chapter thus provides a holistic overview of the risks of content insertion as well as proposed countermeasures. We first establish a simple framework for how content is added to the blockchain and subsequently distributed across the blockchain’s underlying peer-to-peer network. We then discuss technical as well as legal implications of this form of content distribution and give a systematic overview of basic methods and high-level services for inserting arbitrary blockchain content. Afterward, we assess to which extent these methods and services have been used in the past on the blockchains of Bitcoin Core, Bitcoin Cash, and Bitcoin SV, respectively. Based on this assessment of the current state of (unwanted) blockchain content, we discuss (a) countermeasures to mitigate its insertion, (b) how pruning blockchains relates to this issue, and (c) how strategically weakening the otherwise desired immutability of a blockchain allows for redacting objectionable content. We conclude this chapter by identifying future research directions in the domain of blockchain content insertion.

    @incollection{MHMW24,
    author = {Matzutt, Roman and Henze, Martin and M{\"u}llmann, Dirk and Wehrle, Klaus},
    title = {{Illicit Blockchain Content: Its Different Shapes, Consequences, and Remedies}},
    booktitle = {Blockchains -- A Handbook on Fundamentals, Platforms and Applications},
    publisher = {Springer},
    month = {03},
    year = {2024},
    doi = {10.1007/978-3-031-32146-7_10},
    abstract = {Augmenting public blockchains with arbitrary, nonfinancial content fuels novel applications that facilitate the interactions between mutually distrusting parties. However, new risks emerge at the same time when illegal content is added. This chapter thus provides a holistic overview of the risks of content insertion as well as proposed countermeasures. We first establish a simple framework for how content is added to the blockchain and subsequently distributed across the blockchain’s underlying peer-to-peer network. We then discuss technical as well as legal implications of this form of content distribution and give a systematic overview of basic methods and high-level services for inserting arbitrary blockchain content. Afterward, we assess to which extent these methods and services have been used in the past on the blockchains of Bitcoin Core, Bitcoin Cash, and Bitcoin SV, respectively. Based on this assessment of the current state of (unwanted) blockchain content, we discuss (a) countermeasures to mitigate its insertion, (b) how pruning blockchains relates to this issue, and (c) how strategically weakening the otherwise desired immutability of a blockchain allows for redacting objectionable content. We conclude this chapter by identifying future research directions in the domain of blockchain content insertion.},
    }

2023

  • J. Bodenhausen, C. Sorgatz, T. Vogt, K. Grafflage, S. Rötzel, M. Rademacher, and M. Henze, “Securing Wireless Communication in Critical Infrastructure: Challenges and Opportunities,” in Proceedings of the 20th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), 2023.
    [BibTeX] [Abstract] [PDF]

    Critical infrastructure constitutes the foundation of every society. While traditionally solely relying on dedicated cable-based communication, this infrastructure is rapidly transforming into highly digitized and interconnected systems which increasingly rely on wireless communication. Besides providing tremendous benefits, especially affording the easy, cheap, and flexible interconnection of a large number of assets spread over larger geographic areas, wireless communication in critical infrastructure also raises unique security challenges. Most importantly, the shift from dedicated private wired networks to heterogeneous wireless communication over public and shared networks requires significantly more involved security measures. In this paper, we identify the most relevant challenges resulting from the use of wireless communication in critical infrastructure and use those to identify a comprehensive set of promising opportunities to preserve the high security standards of critical infrastructure even when switching from wired to wireless communication.

    @inproceedings{BSV+23,
    author = {Bodenhausen, J{\"o}rn and Sorgatz, Christian and Vogt, Thomas and Grafflage, Kolja and R{\"o}tzel, Sebastian and Rademacher, Michael and Henze, Martin},
    title = {{Securing Wireless Communication in Critical Infrastructure: Challenges and Opportunities}},
    booktitle = {Proceedings of the 20th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous)},
    year = {2023},
    month = {11},
    abstract = {Critical infrastructure constitutes the foundation of every society. While traditionally solely relying on dedicated cable-based communication, this infrastructure is rapidly transforming into highly digitized and interconnected systems which increasingly rely on wireless communication. Besides providing tremendous benefits, especially affording the easy, cheap, and flexible interconnection of a large number of assets spread over larger geographic areas, wireless communication in critical infrastructure also raises unique security challenges. Most importantly, the shift from dedicated private wired networks to heterogeneous wireless communication over public and shared networks requires significantly more involved security measures. In this paper, we identify the most relevant challenges resulting from the use of wireless communication in critical infrastructure and use those to identify a comprehensive set of promising opportunities to preserve the high security standards of critical infrastructure even when switching from wired to wireless communication.},
    }

  • E. Wagner, N. Rothaug, K. Wolsing, L. Bader, K. Wehrle, and M. Henze, “Retrofitting Integrity Protection into Unused Header Fields of Legacy Industrial Protocols,” in Proceedings of the 48th IEEE Conference on Local Computer Networks (LCN), 2023.
    [BibTeX] [Abstract] [PDF]

    Industrial networks become increasingly interconnected, which opens the floodgates for cyberattacks on legacy networks designed without security in mind. Consequently, the vast landscape of legacy industrial communication protocols urgently demands a universal solution to integrate security features retroactively. However, current proposals are hardly adaptable to new scenarios and protocols, even though most industrial protocols share a common theme: Due to their progressive development, previously important legacy features became irrelevant and resulting unused protocol fields now offer a unique opportunity for retrofitting security. Our analysis of three prominent protocols shows that headers offer between 36 and 63 bits of unused space. To take advantage of this space, we designed the REtrofittable ProtEction Library (RePeL), which supports embedding authentication tags into arbitrary combinations of unused header fields. We show that RePeL incurs negligible overhead beyond the cryptographic processing, which can be adapted to hit performance targets or fulfill legal requirements.

    @inproceedings{WRW+23,
    author = {Wagner, Eric and Rothaug, Nils and Wolsing, Konrad and Bader, Lennart and Wehrle, Klaus and Henze, Martin},
    title = {{Retrofitting Integrity Protection into Unused Header Fields of Legacy Industrial Protocols}},
    booktitle = {Proceedings of the 48th IEEE Conference on Local Computer Networks (LCN)},
    month = {10},
    year = {2023},
    abstract = {Industrial networks become increasingly interconnected, which opens the floodgates for cyberattacks on legacy networks designed without security in mind. Consequently, the vast landscape of legacy industrial communication protocols urgently demands a universal solution to integrate security features retroactively. However, current proposals are hardly adaptable to new scenarios and protocols, even though most industrial protocols share a common theme: Due to their progressive development, previously important legacy features became irrelevant and resulting unused protocol fields now offer a unique opportunity for retrofitting security. Our analysis of three prominent protocols shows that headers offer between 36 and 63 bits of unused space. To take advantage of this space, we designed the REtrofittable ProtEction Library (RePeL), which supports embedding authentication tags into arbitrary combinations of unused header fields. We show that RePeL incurs negligible overhead beyond the cryptographic processing, which can be adapted to hit performance targets or fulfill legal requirements.}
    }

  • Ö. Sen, S. Glomb, M. Henze, and A. Ulbig, “Benchmark Evaluation of Anomaly-Based Intrusion Detection Systems in the Context of Smart Grids,” in Proceedings of the 2023 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), 2023.
    [BibTeX] [Abstract]

    The increasing digitization of smart grids has made addressing cybersecurity issues crucial in order to secure the power supply. Anomaly detection has emerged as a key technology for cybersecurity in smart grids, enabling the detection of unknown threats. Many research efforts have proposed various machine-learning-based approaches for anomaly detection in grid operations. However, there is a need for a reproducible and comprehensive evaluation environment to investigate and compare different approaches to anomaly detection. The assessment process is highly dependent on the specific application and requires an evaluation that considers representative datasets from the use case as well as the specific characteristics of the use case. In this work, we present an evaluation environment for anomaly detection methods in smart grids that facilitates reproducible and comprehensive evaluation of different anomaly detection methods.

    @inproceedings{SGHU23,
    author = {Sen, {\"O}mer and Glomb, Simon and Henze, Martin and Ulbig, Andreas},
    title = {{Benchmark Evaluation of Anomaly-Based Intrusion Detection Systems in the Context of Smart Grids}},
    booktitle = {Proceedings of the 2023 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe)},
    year = {2023},
    month = {10},
    abstract = {The increasing digitization of smart grids has made addressing cybersecurity issues crucial in order to secure the power supply. Anomaly detection has emerged as a key technology for cybersecurity in smart grids, enabling the detection of unknown threats. Many research efforts have proposed various machine-learning-based approaches for anomaly detection in grid operations. However, there is a need for a reproducible and comprehensive evaluation environment to investigate and compare different approaches to anomaly detection. The assessment process is highly dependent on the specific application and requires an evaluation that considers representative datasets from the use case as well as the specific characteristics of the use case. In this work, we present an evaluation environment for anomaly detection methods in smart grids that facilitates reproducible and comprehensive evaluation of different anomaly detection methods.}
    }

  • O. Lamberts, K. Wolsing, E. Wagner, J. Pennekamp, J. Bauer, K. Wehrle, and M. Henze, “SoK: Evaluations in Industrial Intrusion Detection Research,” Journal of Systems Research, vol. 3, iss. 1, 2023.
    [BibTeX] [Abstract] [PDF] [DOI]

    Industrial systems are increasingly threatened by cyberattacks with potentially disastrous consequences. To counter such attacks, industrial intrusion detection systems strive to timely uncover even the most sophisticated breaches. Due to its criticality for society, this fast-growing field attracts researchers from diverse backgrounds, resulting in 130 new detection approaches in 2021 alone. This huge momentum facilitates the exploration of diverse promising paths but likewise risks fragmenting the research landscape and burying promising progress. Consequently, it needs sound and comprehensible evaluations to mitigate this risk and catalyze efforts into sustainable scientific progress with real-world applicability. In this paper, we therefore systematically analyze the evaluation methodologies of this field to understand the current state of industrial intrusion detection research. Our analysis of 609 publications shows that the rapid growth of this research field has positive and negative consequences. While we observe an increased use of public datasets, publications still only evaluate 1.3 datasets on average, and frequently used benchmarking metrics are ambiguous. At the same time, the adoption of newly developed benchmarking metrics sees little advancement. Finally, our systematic analysis enables us to provide actionable recommendations for all actors involved and thus bring the entire research field forward.

    @article{LWW+23,
    author = {Lamberts, Olav and Wolsing, Konrad and Wagner, Eric and Pennekamp, Jan and Bauer, Jan and Wehrle, Klaus and Henze, Martin},
    title = {{SoK: Evaluations in Industrial Intrusion Detection Research}},
    journal = {Journal of Systems Research},
    year = {2023},
    volume = {3},
    number = {1},
    month = {10},
    doi = {10.5070/SR33162445},
    abstract = {Industrial systems are increasingly threatened by cyberattacks with potentially disastrous consequences. To counter such attacks, industrial intrusion detection systems strive to timely uncover even the most sophisticated breaches. Due to its criticality for society, this fast-growing field attracts researchers from diverse backgrounds, resulting in 130 new detection approaches in 2021 alone. This huge momentum facilitates the exploration of diverse promising paths but likewise risks fragmenting the research landscape and burying promising progress. Consequently, it needs sound and comprehensible evaluations to mitigate this risk and catalyze efforts into sustainable scientific progress with real-world applicability. In this paper, we therefore systematically analyze the evaluation methodologies of this field to understand the current state of industrial intrusion detection research. Our analysis of 609 publications shows that the rapid growth of this research field has positive and negative consequences. While we observe an increased use of public datasets, publications still only evaluate 1.3 datasets on average, and frequently used benchmarking metrics are ambiguous. At the same time, the adoption of newly developed benchmarking metrics sees little advancement. Finally, our systematic analysis enables us to provide actionable recommendations for all actors involved and thus bring the entire research field forward.},
    }

  • L. Bader, E. Wagner, M. Henze, and M. Serror, “METRICS: A Methodology for Evaluating and Testing the Resilience of Industrial Control Systems to Cyberattacks,” in Proceedings of the 8th Workshop on the Security of Industrial Control Systems & of Cyber-Physical Systems (CyberICPS), 2023.
    [BibTeX] [Abstract] [PDF]

    The increasing digitalization and interconnectivity of industrial control systems (ICSs) create enormous benefits, such as enhanced productivity and flexibility, but also amplify the impact of cyberattacks. Cybersecurity research thus continuously needs to adapt to new threats while proposing comprehensive security mechanisms for the ICS domain. As a prerequisite, researchers need to understand the resilience of ICSs against cyberattacks by systematically testing new security approaches without interfering with productive systems. Therefore, one possibility for such evaluations is using already available ICS testbeds and datasets. However, the heterogeneity of the industrial landscape poses great challenges to obtaining comparable and transferable results. In this paper, we propose to bridge this gap with METRICS, a methodology for systematic resilience evaluation of ICSs. METRICS complements existing ICS testbeds by enabling the configuration of measurement campaigns for comprehensive resilience evaluations. Therefore, the user specifies individual evaluation scenarios consisting of cyberattacks and countermeasures while facilitating manual and automatic interventions. Moreover, METRICS provides domain-agnostic evaluation capabilities to achieve comparable results, which user-defined domain-specific metrics can complement. We apply the methodology in a use case study with the power grid simulator Wattson, demonstrating its effectiveness in providing valuable insights for security practitioners and researchers.

    @inproceedings{BWHS23,
    author = {Bader, Lennart and Wagner, Eric and Henze, Martin and Serror, Martin},
    title = {{METRICS: A Methodology for Evaluating and Testing the Resilience of Industrial Control Systems to Cyberattacks}},
    booktitle = {Proceedings of the 8th Workshop on the Security of Industrial Control Systems \& of Cyber-Physical Systems (CyberICPS)},
    month = {09},
    year = {2023},
    abstract = {The increasing digitalization and interconnectivity of industrial control systems (ICSs) create enormous benefits, such as enhanced productivity and flexibility, but also amplify the impact of cyberattacks. Cybersecurity research thus continuously needs to adapt to new threats while proposing comprehensive security mechanisms for the ICS domain. As a prerequisite, researchers need to understand the resilience of ICSs against cyberattacks by systematically testing new security approaches without interfering with productive systems. Therefore, one possibility for such evaluations is using already available ICS testbeds and datasets. However, the heterogeneity of the industrial landscape poses great challenges to obtaining comparable and transferable results. In this paper, we propose to bridge this gap with METRICS, a methodology for systematic resilience evaluation of ICSs. METRICS complements existing ICS testbeds by enabling the configuration of measurement campaigns for comprehensive resilience evaluations. Therefore, the user specifies individual evaluation scenarios consisting of cyberattacks and countermeasures while facilitating manual and automatic interventions. Moreover, METRICS provides domain-agnostic evaluation capabilities to achieve comparable results, which user-defined domain-specific metrics can complement. We apply the methodology in a use case study with the power grid simulator Wattson, demonstrating its effectiveness in providing valuable insights for security practitioners and researchers.},
    }

  • K. Wolsing, D. Kus, E. Wagner, J. Pennekamp, K. Wehrle, and M. Henze, “One IDS is not Enough! Exploring Ensemble Learning for Industrial Intrusion Detection,” in Proceedings of the 28th European Symposium on Research in Computer Security (ESORICS), 2023.
    [BibTeX] [Abstract] [PDF]

    Industrial Intrusion Detection Systems (IIDSs) play a critical role in safeguarding Industrial Control Systems (ICSs) against targeted cyberattacks. Unsupervised anomaly detectors, capable of learning the expected behavior of physical processes, have proven effective in detecting even novel cyberattacks. While offering decent attack detection, these systems, however, still suffer from too many False-Positive Alarms (FPAs) that operators need to investigate, eventually leading to alarm fatigue. To address this issue, in this paper, we challenge the notion of relying on a single IIDS and explore the benefits of combining multiple IIDSs. To this end, we examine the concept of ensemble learning, where a collection of classifiers (IIDSs in our case) are combined to optimize attack detection and reduce FPAs. While training ensembles for supervised classifiers is relatively straightforward, retaining the unsupervised nature of IIDSs proves challenging. In that regard, novel time-aware ensemble methods that incorporate temporal correlations between alerts and transfer-learning to best utilize the scarce training data constitute viable solutions. By combining diverse IIDSs, the detection performance can be improved beyond the individual approaches with close to no FPAs, resulting in a promising path for strengthening ICS cybersecurity.

    @inproceedings{WKW+23,
    author = {Wolsing, Konrad and Kus, Dominik and Wagner, Eric and Pennekamp, Jan and Wehrle, Klaus and Henze, Martin},
    title = {{One IDS is not Enough! Exploring Ensemble Learning for Industrial Intrusion Detection}},
    booktitle = {Proceedings of the 28th European Symposium on Research in Computer Security (ESORICS)},
    year = {2023},
    month = {09},
    abstract = {Industrial Intrusion Detection Systems (IIDSs) play a critical role in safeguarding Industrial Control Systems (ICSs) against targeted cyberattacks. Unsupervised anomaly detectors, capable of learning the expected behavior of physical processes, have proven effective in detecting even novel cyberattacks. While offering decent attack detection, these systems, however, still suffer from too many False-Positive Alarms (FPAs) that operators need to investigate, eventually leading to alarm fatigue. To address this issue, in this paper, we challenge the notion of relying on a single IIDS and explore the benefits of combining multiple IIDSs. To this end, we examine the concept of ensemble learning, where a collection of classifiers (IIDSs in our case) are combined to optimize attack detection and reduce FPAs. While training ensembles for supervised classifiers is relatively straightforward, retaining the unsupervised nature of IIDSs proves challenging. In that regard, novel time-aware ensemble methods that incorporate temporal correlations between alerts and transfer-learning to best utilize the scarce training data constitute viable solutions. By combining diverse IIDSs, the detection performance can be improved beyond the individual approaches with close to no FPAs, resulting in a promising path for strengthening ICS cybersecurity.},
    }

  • Ö. Sen, B. Ivanov, M. Henze, and A. Ulbig, “Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis,” in Proceedings of the 6th International Conference on Smart Energy Systems and Technologies (SEST), 2023.
    [BibTeX] [Abstract]

    The power grid is a critical infrastructure that plays a vital role in modern society. Its availability is of utmost importance, as a loss can endanger human lives. However, with the increasing digitalization of the power grid, it also becomes vulnerable to new cyberattacks that can compromise its availability. To counter these threats, intrusion detection systems are developed and deployed to detect cyberattacks targeting the power grid. Among intrusion detection systems, anomaly detection models based on machine learning have shown potential in detecting unknown attack vectors. However, the scarcity of data for training these models remains a challenge due to confidentiality concerns. To overcome this challenge, this study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid, using attack trees to model the attacker’s sequence of steps and a game-theoretic approach to incorporate the defender’s actions. This model aims to create diverse attack data on which machine learning algorithms can be trained.

    @inproceedings{SIHU23,
    author = {Sen, {\"O}mer and Ivanov, Bozhidar and Henze, Martin and Ulbig, Andreas},
    title = {{Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis}},
    booktitle = {Proceedings of the 6th International Conference on Smart Energy Systems and Technologies (SEST)},
    month = {09},
    year = {2023},
    abstract = {The power grid is a critical infrastructure that plays a vital role in modern society. Its availability is of utmost importance, as a loss can endanger human lives. However, with the increasing digitalization of the power grid, it also becomes vulnerable to new cyberattacks that can compromise its availability. To counter these threats, intrusion detection systems are developed and deployed to detect cyberattacks targeting the power grid. Among intrusion detection systems, anomaly detection models based on machine learning have shown potential in detecting unknown attack vectors. However, the scarcity of data for training these models remains a challenge due to confidentiality concerns. To overcome this challenge, this study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid, using attack trees to model the attacker's sequence of steps and a game-theoretic approach to incorporate the defender's actions. This model aims to create diverse attack data on which machine learning algorithms can be trained.},
    }

  • L. Bader, M. Serror, O. Lamberts, Ö. Sen, D. van der Velde, I. Hacker, J. Filter, E. Padilla, and M. Henze, “Comprehensively Analyzing the Impact of Cyberattacks on Power Grids,” in Proceedings of the 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), 2023.
    [BibTeX] [Abstract] [PDF] [DOI]

    The increasing digitalization of power grids and especially the shift towards IP-based communication drastically increase the susceptibility to cyberattacks, potentially leading to blackouts and physical damage. Understanding the involved risks, the interplay of communication and physical assets, and the effects of cyberattacks are paramount for the uninterrupted operation of this critical infrastructure. However, as the impact of cyberattacks cannot be researched in real-world power grids, current efforts tend to focus on analyzing isolated aspects at small scales, often covering only either physical or communication assets. To fill this gap, we present WATTSON, a comprehensive research environment that facilitates reproducing, implementing, and analyzing cyberattacks against power grids and, in particular, their impact on both communication and physical processes. We validate WATTSON’s accuracy against a physical testbed and show its scalability to realistic power grid sizes. We then perform authentic cyberattacks, such as Industroyer, within the environment and study their impact on the power grid’s energy and communication side. Besides known vulnerabilities, our results reveal the ripple effects of susceptible communication on complex cyber-physical processes and thus lay the foundation for effective countermeasures.

    @inproceedings{BSL+23,
    author = {Bader, Lennart and Serror, Martin and Lamberts, Olav and Sen, {\"O}mer and van der Velde, Dennis and Hacker, Immanuel and Filter, Julian and Padilla, Elmar and Henze, Martin},
    title = {{Comprehensively Analyzing the Impact of Cyberattacks on Power Grids}},
    booktitle = {Proceedings of the 2023 IEEE 8th European Symposium on Security and Privacy (EuroS\&P)},
    month = {07},
    year = {2023},
    doi = {10.1109/EuroSP57164.2023.00066},
    abstract = {The increasing digitalization of power grids and especially the shift towards IP-based communication drastically increase the susceptibility to cyberattacks, potentially leading to blackouts and physical damage. Understanding the involved risks, the interplay of communication and physical assets, and the effects of cyberattacks are paramount for the uninterrupted operation of this critical infrastructure. However, as the impact of cyberattacks cannot be researched in real-world power grids, current efforts tend to focus on analyzing isolated aspects at small scales, often covering only either physical or communication assets. To fill this gap, we present WATTSON, a comprehensive research environment that facilitates reproducing, implementing, and analyzing cyberattacks against power grids and, in particular, their impact on both communication and physical processes. We validate WATTSON's accuracy against a physical testbed and show its scalability to realistic power grid sizes. We then perform authentic cyberattacks, such as Industroyer, within the environment and study their impact on the power grid's energy and communication side. Besides known vulnerabilities, our results reveal the ripple effects of susceptible communication on complex cyber-physical processes and thus lay the foundation for effective countermeasures.},
    }

  • J. Pennekamp, J. Lohmöller, E. Vlad, J. Loos, N. Rodemann, P. Sapel, I. B. Fink, S. Schmitz, C. Hopmann, M. Jarke, G. Schuh, K. Wehrle, and M. Henze, “Designing Secure and Privacy-Preserving Information Systems for Industry Benchmarking,” in Proceedings of the 35th International Conference on Advanced Information Systems Engineering (CAiSE), 2023.
    [BibTeX] [Abstract] [PDF] [DOI]

    Benchmarking is an essential tool for industrial organizations to identify potentials that allow them to improve their competitive position through operational and strategic means. However, the handling of sensitive information, in terms of (i) internal company data and (ii) the underlying algorithm to compute the benchmark, demands strict (technical) confidentiality guarantees—an aspect that existing approaches fail to address adequately. Still, advances in private computing provide us with building blocks to reliably secure even complex computations and their inputs, as present in industry benchmarks. In this paper, we thus compare two promising and fundamentally different concepts (hardware- and software-based) to realize privacy-preserving benchmarks. Thereby, we provide detailed insights into the concept-specific benefits. Our evaluation of two real-world use cases from different industries underlines that realizing and deploying secure information systems for industry benchmarking is possible with today’s building blocks from private computing.

    @inproceedings{PLV+23,
    author = {Pennekamp, Jan and Lohm{\"o}ller, Johannes and Vlad, Eduard and Loos, Joscha and Rodemann, Niklas and Sapel, Patrick and Fink, Ina Berenice and Schmitz, Seth and Hopmann, Christian and Jarke, Matthias and Schuh, G{\"u}nther and Wehrle, Klaus and Henze, Martin},
    title = {{Designing Secure and Privacy-Preserving Information Systems for Industry Benchmarking}},
    booktitle = {Proceedings of the 35th International Conference on Advanced Information Systems Engineering (CAiSE)},
    year = {2023},
    month = {06},
    doi = {10.1007/978-3-031-34560-9_29},
    abstract = {Benchmarking is an essential tool for industrial organizations to identify potentials that allow them to improve their competitive position through operational and strategic means. However, the handling of sensitive information, in terms of (i) internal company data and (ii) the underlying algorithm to compute the benchmark, demands strict (technical) confidentiality guarantees---an aspect that existing approaches fail to address adequately. Still, advances in private computing provide us with building blocks to reliably secure even complex computations and their inputs, as present in industry benchmarks. In this paper, we thus compare two promising and fundamentally different concepts (hardware- and software-based) to realize privacy-preserving benchmarks. Thereby, we provide detailed insights into the concept-specific benefits. Our evaluation of two real-world use cases from different industries underlines that realizing and deploying secure information systems for industry benchmarking is possible with today's building blocks from private computing.},
    }

  • Ö. Sen, P. Malskorn, S. Glomb, I. Hacker, M. Henze, and A. Ulbig, “An Approach To Abstract Multi-Stage Cyberattack Data Generation For ML-based IDS In Smart Grids,” in Proceedings of 2023 IEEE Belgrade PowerTech, 2023.
    [BibTeX] [Abstract] [DOI]

    Power grids are becoming more digitized, resulting in new opportunities for grid operation but also new challenges, such as new threats from the cyber-domain. To address these challenges, cybersecurity solutions are being considered in the form of preventive, detective, and reactive measures. Machine learning-based intrusion detection systems are used as part of detection efforts to detect and defend against cyberattacks. However, training and testing data are often not available or suitable for use in machine learning models for detecting multi-stage cyberattacks in smart grids. In this paper, we propose a method to generate synthetic data using a graph-based approach for training machine learning models in smart grids. We use an abstract form of multi-stage cyberattacks defined via graph formulations and simulate the propagation behavior of attacks in the network. The results showed that machine learning models trained on synthetic data can accurately

    @inproceedings{SMG+23,
    author = {Sen, {\"O}mer and Malskorn, Philipp and Glomb, Simon and Hacker, Immanuel and Henze, Martin and Ulbig, Andreas},
    title = {{An Approach To Abstract Multi-Stage Cyberattack Data Generation For ML-based IDS In Smart Grids}},
    booktitle = {Proceedings of 2023 IEEE Belgrade PowerTech},
    year = {2023},
    month = {06},
    doi = {10.1109/PowerTech55446.2023.10202747},
    abstract = {Power grids are becoming more digitized, resulting in new opportunities for grid operation but also new challenges, such as new threats from the cyber-domain. To address these challenges, cybersecurity solutions are being considered in the form of preventive, detective, and reactive measures. Machine learning-based intrusion detection systems are used as part of detection efforts to detect and defend against cyberattacks. However, training and testing data are often not available or suitable for use in machine learning models for detecting multi-stage cyberattacks in smart grids. In this paper, we propose a method to generate synthetic data using a graph-based approach for training machine learning models in smart grids. We use an abstract form of multi-stage cyberattacks defined via graph formulations and simulate the propagation behavior of attacks in the network. The results showed that machine learning models trained on synthetic data can accurately},
    }

  • Ö. Sen, N. Bleser, M. Henze, and A. Ulbig, “A Cyber-Physical Digital Twin Approach to Replicating Realistic Multi-Stage Cyberattacks on Smart Grids,” in Proceedings of the 2023 International Conference on Electricity Distribution (CIRED), 2023.
    [BibTeX] [Abstract]

    The integration of information and communication technology in distribution grids presents opportunities for active grid operation management, but also increases the need for security against power outages and cyberattacks. This paper examines the impact of cyberattacks on smart grids by replicating the power grid in a secure laboratory environment as a cyber-physical digital twin. A simulation is used to study communication infrastructures for secure operation of smart grids. The cyber-physical digital twin approach combines communication network emulation and power grid simulation in a common modular environment, and is demonstrated through laboratory tests and attack replications.

    @inproceedings{SBHU23,
    author = {Sen, {\"O}mer and Bleser, Nathalie and Henze, Martin and Ulbig, Andreas},
    title = {{A Cyber-Physical Digital Twin Approach to Replicating Realistic Multi-Stage Cyberattacks on Smart Grids}},
    booktitle = {Proceedings of the 2023 International Conference on Electricity Distribution (CIRED)},
    year = {2023},
    month = {06},
    abstract = {The integration of information and communication technology in distribution grids presents opportunities for active grid operation management, but also increases the need for security against power outages and cyberattacks. This paper examines the impact of cyberattacks on smart grids by replicating the power grid in a secure laboratory environment as a cyber-physical digital twin. A simulation is used to study communication infrastructures for secure operation of smart grids. The cyber-physical digital twin approach combines communication network emulation and power grid simulation in a common modular environment, and is demonstrated through laboratory tests and attack replications.}
    }

  • J. Pennekamp, A. Belova, T. Bergs, M. Bodenbenner, A. Bührig-Polaczek, M. Dahlmanns, I. Kunze, M. Kröger, S. Geisler, M. Henze, D. Lütticke, B. Montavon, P. Niemietz, L. Ortjohann, M. Rudack, R. H. Schmitt, U. Vroomen, K. Wehrle, and M. Zeng, “Evolving the Digital Industrial Infrastructure for Production: Steps Taken and the Road Ahead,” in Internet of Production: Fundamentals, Applications and Proceedings, C. Brecher, G. Schuh, W. van der Aalst, M. Jarke, F. T. Piller, and M. Padberg, Eds., Springer, 2023.
    [BibTeX] [Abstract] [PDF] [DOI]

    The Internet of Production (IoP) leverages concepts such as digital shadows, data lakes, and a World Wide Lab (WWL) to advance today’s production. Consequently, it requires a technical infrastructure that can support the agile deployment of these concepts and corresponding high-level applications, which, e.g., demand the processing of massive data in motion and at rest. As such, key research aspects are the support for low-latency control loops, concepts on scalable data stream processing, deployable information security, and semantically rich and efficient long-term storage. In particular, such an infrastructure cannot continue to be limited to machines and sensors, but additionally needs to encompass networked environments: production cells, edge computing, and location-independent cloud infrastructures. Finally, in light of the envisioned WWL, i.e., the interconnection of production sites, the technical infrastructure must be advanced to support secure and privacy-preserving industrial collaboration. To evolve today’s production sites and lay the infrastructural foundation for the IoP, we identify five broad streams of research: (1) adapting data and stream processing to heterogeneous data from distributed sources, (2) ensuring data interoperability between systems and production sites, (3) exchanging and sharing data with different stakeholders, (4) network security approaches addressing the risks of increasing interconnectivity, and (5) security architectures to enable secure and privacy-preserving industrial collaboration. With our research, we evolve the underlying infrastructure from isolated, sparsely networked production sites toward an architecture that supports high-level applications and sophisticated digital shadows while facilitating the transition toward a WWL.

    @incollection{PBB+23,
    author = {Pennekamp, Jan and Belova, Anastasiia and Bergs, Thomas and Bodenbenner, Matthias and B{\"u}hrig-Polaczek, Andreas and Dahlmanns, Markus and Kunze, Ike and Kr{\"o}ger, Moritz and Geisler, Sandra and Henze, Martin and L{\"u}tticke, Daniel and Montavon, Benjamin and Niemietz, Philipp and Ortjohann, Lucia and Rudack, Maximilian and Schmitt, Robert H. and Vroomen, Uwe and Wehrle, Klaus and Zeng, Michael},
    title = {{Evolving the Digital Industrial Infrastructure for Production: Steps Taken and the Road Ahead}},
    booktitle = {Internet of Production: Fundamentals, Applications and Proceedings},
    editor = {Brecher, Christian and Schuh, G{\"u}nther and van der Aalst, Wil and Jarke, Matthias and Piller, Frank T. and Padberg, Melanie},
    publisher = {Springer},
    year = {2023},
    month = {02},
    doi = {10.1007/978-3-030-98062-7_2-1},
    abstract = {The Internet of Production (IoP) leverages concepts such as digital shadows, data lakes, and a World Wide Lab (WWL) to advance today's production. Consequently, it requires a technical infrastructure that can support the agile deployment of these concepts and corresponding high-level applications, which, e.g., demand the processing of massive data in motion and at rest. As such, key research aspects are the support for low-latency control loops, concepts on scalable data stream processing, deployable information security, and semantically rich and efficient long-term storage. In particular, such an infrastructure cannot continue to be limited to machines and sensors, but additionally needs to encompass networked environments: production cells, edge computing, and location-independent cloud infrastructures. Finally, in light of the envisioned WWL, i.e., the interconnection of production sites, the technical infrastructure must be advanced to support secure and privacy-preserving industrial collaboration. To evolve today's production sites and lay the infrastructural foundation for the IoP, we identify five broad streams of research: (1) adapting data and stream processing to heterogeneous data from distributed sources, (2) ensuring data interoperability between systems and production sites, (3) exchanging and sharing data with different stakeholders, (4) network security approaches addressing the risks of increasing interconnectivity, and (5) security architectures to enable secure and privacy-preserving industrial collaboration. With our research, we evolve the underlying infrastructure from isolated, sparsely networked production sites toward an architecture that supports high-level applications and sophisticated digital shadows while facilitating the transition toward a WWL.},
    }

2022

  • Ö. Sen, D. van der Velde, K. A. Wehrmeister, I. Hacker, M. Henze, and M. Andres, “On Using Contextual Correlation to Detect Multi-stage Cyber Attacks in Smart Grids,” Sustainable Energy, Grids and Networks, vol. 32, 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    While the digitization of the distribution grids brings numerous benefits to grid operations, it also increases the risks imposed by serious cyber security threats such as coordinated, timed attacks. Addressing this new threat landscape requires an advanced security approach beyond established preventive IT security measures such as encryption, network segmentation, or access control. Here, detective capabilities and reactive countermeasures as part of incident response strategies promise to complement nicely the security-by-design approach by providing cyber security situational awareness. However, manually evaluating extensive cyber intelligence within a reasonable timeframe requires an unmanageable effort to process a large amount of cross-domain information. An automated procedure is needed to systematically process and correlate the various cyber intelligence to correctly assess the situation to reduce the manual effort and support security operations. In this paper, we present an approach that leverages cyber intelligence from multiple sources to detect multi-stage cyber attacks that threaten the smart grid. We investigate the detection quality of the presented correlation approach and discuss the results to highlight the challenges in automated methods for contextual assessment and understanding of the cyber security situation.

    @article{SVW+22,
    author = {Sen, {\"O}mer and van der Velde, Dennis and Wehrmeister, Katharina A. and Hacker, Immanuel and Henze, Martin and Andres, Michael},
    title = {{On Using Contextual Correlation to Detect Multi-stage Cyber Attacks in Smart Grids}},
    journal = {Sustainable Energy, Grids and Networks},
    volume = {32},
    month = {12},
    year = {2022},
    doi = {10.1016/j.segan.2022.100821},
    abstract = {While the digitization of the distribution grids brings numerous benefits to grid operations, it also increases the risks imposed by serious cyber security threats such as coordinated, timed attacks. Addressing this new threat landscape requires an advanced security approach beyond established preventive IT security measures such as encryption, network segmentation, or access control. Here, detective capabilities and reactive countermeasures as part of incident response strategies promise to complement nicely the security-by-design approach by providing cyber security situational awareness. However, manually evaluating extensive cyber intelligence within a reasonable timeframe requires an unmanageable effort to process a large amount of cross-domain information. An automated procedure is needed to systematically process and correlate the various cyber intelligence to correctly assess the situation to reduce the manual effort and support security operations. In this paper, we present an approach that leverages cyber intelligence from multiple sources to detect multi-stage cyber attacks that threaten the smart grid. We investigate the detection quality of the presented correlation approach and discuss the results to highlight the challenges in automated methods for contextual assessment and understanding of the cyber security situation.},
    }

  • J. Pennekamp, M. Henze, A. Zinnen, F. Lanze, K. Wehrle, and A. Panchenko, “CUMUL & Co: High-Impact Artifacts for Website Fingerprinting Research,” Cybersecurity Artifacts Competition and Impact Award at the 38th Annual Computer Security Applications Conference (ACSAC), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Anonymous communication on the Internet is about hiding the relationship between communicating parties. At NDSS ’16, we presented a new website fingerprinting approach, CUMUL, that utilizes novel features and a simple yet powerful algorithm to attack anonymization networks such as Tor. Based on pattern observation of data flows, this attack aims at identifying the content of encrypted and anonymized connections. Apart from the feature generation and the used classifier, we also provided a large dataset to the research community to study the attack at Internet scale. In this paper, we emphasize the impact of our artifacts by analyzing publications referring to our work with respect to the dataset, feature extraction method, and source code of the implementation. Based on this data, we draw conclusions about the impact of our artifacts on the research field and discuss their influence on related cybersecurity topics. Overall, from 393 unique citations, we discover more than 130 academic references that utilize our artifacts, 61 among them are highly influential (according to SemanticScholar), and at least 35 are from top-ranked security venues. This data underlines the significant relevance and impact of our work as well as of our artifacts in the community and beyond.

    @misc{PHZ+22,
    title = {{CUMUL {\&} Co: High-Impact Artifacts for Website Fingerprinting Research}},
    author = {Pennekamp, Jan and Henze, Martin and Zinnen, Andreas and Lanze, Fabian and Wehrle, Klaus and Panchenko, Andriy},
    howpublished = {Cybersecurity Artifacts Competition and Impact Award at the 38th Annual Computer Security Applications Conference (ACSAC)},
    month = {12},
    year = {2022},
    doi = {10.18154/RWTH-2022-10811},
    abstract = {Anonymous communication on the Internet is about hiding the relationship between communicating parties. At NDSS '16, we presented a new website fingerprinting approach, CUMUL, that utilizes novel features and a simple yet powerful algorithm to attack anonymization networks such as Tor. Based on pattern observation of data flows, this attack aims at identifying the content of encrypted and anonymized connections. Apart from the feature generation and the used classifier, we also provided a large dataset to the research community to study the attack at Internet scale. In this paper, we emphasize the impact of our artifacts by analyzing publications referring to our work with respect to the dataset, feature extraction method, and source code of the implementation. Based on this data, we draw conclusions about the impact of our artifacts on the research field and discuss their influence on related cybersecurity topics. Overall, from 393 unique citations, we discover more than 130 academic references that utilize our artifacts, 61 among them are highly influential (according to SemanticScholar), and at least 35 are from top-ranked security venues. This data underlines the significant relevance and impact of our work as well as of our artifacts in the community and beyond.},
    }

  • D. Kus, K. Wolsing, J. Pennekamp, E. Wagner, M. Henze, and K. Wehrle, “Poster: Ensemble Learning for Industrial Intrusion Detection,” Poster Session at the 38th Annual Computer Security Applications Conference (ACSAC), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Industrial intrusion detection promises to protect networked industrial control systems by monitoring them and raising an alarm in case of suspicious behavior. Many monolithic intrusion detection systems are proposed in literature. These detectors are often specialized and, thus, work particularly well on certain types of attacks or monitor different parts of the system, e.g., the network or the physical process. Combining multiple such systems promises to leverage their joint strengths, allowing the detection of a wider range of attacks due to their diverse specializations and reducing false positives. We study this concept’s feasibility with initial results of various methods to combine detectors.

    @misc{KWP+22b,
    author = {Kus, Dominik and Wolsing, Konrad and Pennekamp, Jan and Wagner, Eric and Henze, Martin and Wehrle, Klaus},
    title = {{Poster: Ensemble Learning for Industrial Intrusion Detection}},
    month = {12},
    year = {2022},
    howpublished = {Poster Session at the 38th Annual Computer Security Applications Conference (ACSAC)},
    doi = {10.18154/RWTH-2022-10809},
    abstract = {Industrial intrusion detection promises to protect networked industrial control systems by monitoring them and raising an alarm in case of suspicious behavior. Many monolithic intrusion detection systems are proposed in literature. These detectors are often specialized and, thus, work particularly well on certain types of attacks or monitor different parts of the system, e.g., the network or the physical process. Combining multiple such systems promises to leverage their joint strengths, allowing the detection of a wider range of attacks due to their diverse specializations and reducing false positives. We study this concept's feasibility with initial results of various methods to combine detectors.},
    }

  • M. Serror, L. Bader, M. Henze, A. Schwarze, and K. Nürnberger, “Poster: INSIDE – Enhancing Network Intrusion Detection in Power Grids with Automated Facility Monitoring,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS) – Poster Session, 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Advances in digitalization and networking of power grids have increased the risks of cyberattacks against such critical infrastructures, where the attacks often originate from within the power grid’s network. Adequate detection must hence consider both physical access violations and network anomalies to identify the attack’s origin. Therefore, we propose INSIDE, combining network intrusion detection with automated facility monitoring to swiftly detect cyberattacks on power grids based on unauthorized access. Besides providing an initial design for INSIDE, we discuss potential use cases illustrating the benefits of such a comprehensive methodology.

    @inproceedings{SBH+22,
    author = {Serror, Martin and Bader, Lennart and Henze, Martin and Schwarze, Arne and N{\"u}rnberger, Kai},
    title = {{Poster: INSIDE -- Enhancing Network Intrusion Detection in Power Grids with Automated Facility Monitoring}},
    booktitle = {Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS) -- Poster Session},
    month = {11},
    year = {2022},
    doi = {10.1145/3548606.3563500},
    abstract = {Advances in digitalization and networking of power grids have increased the risks of cyberattacks against such critical infrastructures, where the attacks often originate from within the power grid's network. Adequate detection must hence consider both physical access violations and network anomalies to identify the attack's origin. Therefore, we propose INSIDE, combining network intrusion detection with automated facility monitoring to swiftly detect cyberattacks on power grids based on unauthorized access. Besides providing an initial design for INSIDE, we discuss potential use cases illustrating the benefits of such a comprehensive methodology.},
    }

  • K. Wolsing, E. Wagner, A. Saillard, and M. Henze, “IPAL: Breaking up Silos of Protocol-dependent and Domain-specific Industrial Intrusion Detection Systems,” in Proceedings of the 25th International Symposium on Research in Attacks, Intrusions and Defenses (RAID), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    The increasing interconnection of industrial networks exposes them to an ever-growing risk of cyber attacks. To reveal such attacks early and prevent any damage, industrial intrusion detection searches for anomalies in otherwise predictable communication or process behavior. However, current efforts mostly focus on specific domains and protocols, leading to a research landscape broken up into isolated silos. Thus, existing approaches cannot be applied to other industries that would equally benefit from powerful detection. To better understand this issue, we survey 53 detection systems and find no fundamental reason for their narrow focus. Although they are often coupled to specific industrial protocols in practice, many approaches could generalize to new industrial scenarios in theory. To unlock this potential, we propose IPAL, our industrial protocol abstraction layer, to decouple intrusion detection from domain-specific industrial protocols. After proving IPAL’s correctness in a reproducibility study of related work, we showcase its unique benefits by studying the generalizability of existing approaches to new datasets and conclude that they are indeed not restricted to specific domains or protocols and can perform outside their restricted silos.

    @inproceedings{WWSH22,
    author = {Wolsing, Konrad and Wagner, Eric and Saillard, Antoine and Henze, Martin},
    title = {{IPAL: Breaking up Silos of Protocol-dependent and Domain-specific Industrial Intrusion Detection Systems}},
    booktitle = {Proceedings of the 25th International Symposium on Research in Attacks, Intrusions and Defenses (RAID)},
    month = {10},
    year = {2022},
    doi = {10.1145/3545948.3545968},
    abstract = {The increasing interconnection of industrial networks exposes them to an ever-growing risk of cyber attacks. To reveal such attacks early and prevent any damage, industrial intrusion detection searches for anomalies in otherwise predictable communication or process behavior. However, current efforts mostly focus on specific domains and protocols, leading to a research landscape broken up into isolated silos. Thus, existing approaches cannot be applied to other industries that would equally benefit from powerful detection. To better understand this issue, we survey 53 detection systems and find no fundamental reason for their narrow focus. Although they are often coupled to specific industrial protocols in practice, many approaches could generalize to new industrial scenarios in theory. To unlock this potential, we propose IPAL, our industrial protocol abstraction layer, to decouple intrusion detection from domain-specific industrial protocols. After proving IPAL's correctness in a reproducibility study of related work, we showcase its unique benefits by studying the generalizability of existing approaches to new datasets and conclude that they are indeed not restricted to specific domains or protocols and can perform outside their restricted silos.},
    }

  • K. Wolsing, L. Thiemt, C. van Sloun, E. Wagner, K. Wehrle, and M. Henze, “Can Industrial Intrusion Detection Be SIMPLE?,” in Proceedings of the 27th European Symposium on Research in Computer Security (ESORICS), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Cyberattacks against industrial control systems pose a serious risk to the safety of humans and the environment. Industrial intrusion detection systems oppose this threat by continuously monitoring industrial processes and alerting any deviations from learned normal behavior. To this end, various streams of research rely on advanced and complex approaches, i.e., artificial neural networks, thus achieving allegedly high detection rates. However, as we show in an analysis of 70 approaches from related work, their inherent complexity comes with undesired properties. For example, they exhibit incomprehensible alarms and models only specialized personnel can understand, thus limiting their broad applicability in a heterogeneous industrial domain. Consequentially, we ask whether industrial intrusion detection indeed has to be complex or can be SIMPLE instead, i.e., Sufficient to detect most attacks, Independent of hyperparameters to dial-in, Meaningful in model and alerts, Portable to other industrial domains, Local to a part of the physical process, and computationally Efficient. To answer this question, we propose our design of four SIMPLE industrial intrusion detection systems, such as simple tests for the minima and maxima of process values or the rate at which process values change. Our evaluation of these SIMPLE approaches on four state-of-the-art industrial security datasets reveals that SIMPLE approaches can perform on par with existing complex approaches from related work while simultaneously being comprehensible and easily portable to other scenarios. Thus, it is indeed justified to raise the question of whether industrial intrusion detection needs to be inherently complex.

    @inproceedings{WTS+22,
    author = {Wolsing, Konrad and Thiemt, Lea and van Sloun, Christian and Wagner, Eric and Wehrle, Klaus and Henze, Martin},
    title = {{Can Industrial Intrusion Detection Be SIMPLE?}},
    booktitle = {Proceedings of the 27th European Symposium on Research in Computer Security (ESORICS)},
    month = {09},
    year = {2022},
    doi = {10.1007/978-3-031-17143-7_28},
    abstract = {Cyberattacks against industrial control systems pose a serious risk to the safety of humans and the environment. Industrial intrusion detection systems oppose this threat by continuously monitoring industrial processes and alerting any deviations from learned normal behavior. To this end, various streams of research rely on advanced and complex approaches, i.e., artificial neural networks, thus achieving allegedly high detection rates. However, as we show in an analysis of 70 approaches from related work, their inherent complexity comes with undesired properties. For example, they exhibit incomprehensible alarms and models only specialized personnel can understand, thus limiting their broad applicability in a heterogeneous industrial domain. Consequentially, we ask whether industrial intrusion detection indeed has to be complex or can be SIMPLE instead, i.e., Sufficient to detect most attacks, Independent of hyperparameters to dial-in, Meaningful in model and alerts, Portable to other industrial domains, Local to a part of the physical process, and computationally Efficient. To answer this question, we propose our design of four SIMPLE industrial intrusion detection systems, such as simple tests for the minima and maxima of process values or the rate at which process values change. Our evaluation of these SIMPLE approaches on four state-of-the-art industrial security datasets reveals that SIMPLE approaches can perform on par with existing complex approaches from related work while simultaneously being comprehensible and easily portable to other scenarios. Thus, it is indeed justified to raise the question of whether industrial intrusion detection needs to be inherently complex.},
    }

  • K. Wolsing, A. Saillard, J. Bauer, E. Wagner, C. van Sloun, I. B. Fink, M. Schmidt, K. Wehrle, and M. Henze, “Network Attacks Against Marine Radar Systems: A Taxonomy, Simulation Environment, and Dataset,” in Proceedings of the 47th IEEE Conference on Local Computer Networks (LCN), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Shipboard marine radar systems are essential for safe navigation, helping seafarers perceive their surroundings as they provide bearing and range estimations, object detection, and tracking. Since onboard systems have become increasingly digitized, interconnecting distributed electronics, radars have been integrated into modern bridge systems. But digitization increases the risk of cyberattacks, especially as vessels cannot be considered air-gapped. Consequently, in-depth security is crucial. However, particularly radar systems are not sufficiently protected against harmful network-level adversaries. Therefore, we ask: Can seafarers believe their eyes? In this paper, we identify possible attacks on radar communication and discuss how these threaten safe vessel operation in an attack taxonomy. Furthermore, we develop a holistic simulation environment with radar, complementary nautical sensors, and prototypically implemented cyberattacks from our taxonomy. Finally, leveraging this environment, we create a comprehensive dataset (RadarPWN) with radar network attacks that provides a foundation for future security research to secure marine radar communication.

    @inproceedings{WSB+22,
    author = {Wolsing, Konrad and Saillard, Antoine and Bauer, Jan and Wagner, Eric and van Sloun, Christian and Fink, Ina Berenice and Schmidt, Mari and Wehrle, Klaus and Henze, Martin},
    title = {{Network Attacks Against Marine Radar Systems: A Taxonomy, Simulation Environment, and Dataset}},
    booktitle = {Proceedings of the 47th IEEE Conference on Local Computer Networks (LCN)},
    month = {09},
    year = {2022},
    doi = {10.1109/LCN53696.2022.9843801},
    abstract = {Shipboard marine radar systems are essential for safe navigation, helping seafarers perceive their surroundings as they provide bearing and range estimations, object detection, and tracking. Since onboard systems have become increasingly digitized, interconnecting distributed electronics, radars have been integrated into modern bridge systems. But digitization increases the risk of cyberattacks, especially as vessels cannot be considered air-gapped. Consequently, in-depth security is crucial. However, particularly radar systems are not sufficiently protected against harmful network-level adversaries. Therefore, we ask: Can seafarers believe their eyes? In this paper, we identify possible attacks on radar communication and discuss how these threaten safe vessel operation in an attack taxonomy. Furthermore, we develop a holistic simulation environment with radar, complementary nautical sensors, and prototypically implemented cyberattacks from our taxonomy. Finally, leveraging this environment, we create a comprehensive dataset (RadarPWN) with radar network attacks that provides a foundation for future security research to secure marine radar communication.},
    }

  • Ö. Sen, D. van der Velde, M. Lühman, F. Sprünken, I. Hacker, A. Ulbig, M. Andres, and M. Henze, “On Specification-based Cyber-Attack Detection in Smart Grids,” in Proceedings of the 11th DACH+ Conference on Energy Informatics, 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    The transformation of power grids into intelligent cyber-physical systems brings numerous benefits, but also significantly increases the surface for cyber-attacks, demanding appropriate countermeasures. However, the development, validation, and testing of data-driven countermeasures against cyber-attacks, such as machine learning-based detection approaches, lack important data from real-world cyber incidents. Unlike attack data from real-world cyber incidents, infrastructure knowledge and standards are accessible through expert and domain knowledge. Our proposed approach uses domain knowledge to define the behavior of a smart grid under non-attack conditions and detect attack patterns and anomalies. Using a graph-based specification formalism, we combine cross-domain knowledge that enables the generation of whitelisting rules not only for statically defined protocol fields but also for communication flows and technical operation boundaries. Finally, we evaluate our specification-based intrusion detection system against various attack scenarios and assess detection quality and performance. In particular, we investigate a data manipulation attack in a future-orientated use case of an IEC 60870-based SCADA system that controls distributed energy resources in the distribution grid. Our approach can detect severe data manipulation attacks with high accuracy in a timely and reliable manner.

    @inproceedings{SVL+22,
    author = {Sen, {\"O}mer and van der Velde, Dennis and L{\"u}hman, Maik and Spr{\"u}nken, Florian and Hacker, Immanuel and Ulbig, Andreas and Andres, Michael and Henze, Martin},
    title = {{On Specification-based Cyber-Attack Detection in Smart Grids}},
    booktitle = {Proceedings of the 11th DACH+ Conference on Energy Informatics},
    month = {09},
    year = {2022},
    doi = {10.1186/s42162-022-00206-7},
    abstract = {The transformation of power grids into intelligent cyber-physical systems brings numerous benefits, but also significantly increases the surface for cyber-attacks, demanding appropriate countermeasures. However, the development, validation, and testing of data-driven countermeasures against cyber-attacks, such as machine learning-based detection approaches, lack important data from real-world cyber incidents. Unlike attack data from real-world cyber incidents, infrastructure knowledge and standards are accessible through expert and domain knowledge. Our proposed approach uses domain knowledge to define the behavior of a smart grid under non-attack conditions and detect attack patterns and anomalies. Using a graph-based specification formalism, we combine cross-domain knowledge that enables the generation of whitelisting rules not only for statically defined protocol fields but also for communication flows and technical operation boundaries. Finally, we evaluate our specification-based intrusion detection system against various attack scenarios and assess detection quality and performance. In particular, we investigate a data manipulation attack in a future-orientated use case of an IEC 60870-based SCADA system that controls distributed energy resources in the distribution grid. Our approach can detect severe data manipulation attacks with high accuracy in a timely and reliable manner.},
    }

  • M. Henze, R. Matzutt, J. Hiller, E. Mühmer, J. H. Ziegeldorf, J. van der Giet, and K. Wehrle, “Complying with Data Handling Requirements in Cloud Storage Systems,” IEEE Transactions on Cloud Computing, vol. 10, iss. 3, 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    In past years, cloud storage systems saw an enormous rise in usage. However, despite their popularity and importance as underlying infrastructure for more complex cloud services, today’s cloud storage systems do not account for compliance with regulatory, organizational, or contractual data handling requirements by design. Since legislation increasingly responds to rising data protection and privacy concerns, complying with data handling requirements becomes a crucial property for cloud storage systems. We present PRADA, a practical approach to account for compliance with data handling requirements in key-value based cloud storage systems. To achieve this goal, PRADA introduces a transparent data handling layer, which empowers clients to request specific data handling requirements and enables operators of cloud storage systems to comply with them. We implement PRADA on top of the distributed database Cassandra and show in our evaluation that complying with data handling requirements in cloud storage systems is practical in real-world cloud deployments as used for microblogging, data sharing in the Internet of Things, and distributed email storage.

    @article{HMH+20,
    author = {Henze, Martin and Matzutt, Roman and Hiller, Jens and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and van der Giet, Johannes and Wehrle, Klaus},
    title = {{Complying with Data Handling Requirements in Cloud Storage Systems}},
    journal = {IEEE Transactions on Cloud Computing},
    volume = {10},
    number = {3},
    month = {09},
    year = {2022},
    doi = {10.1109/TCC.2020.3000336},
    abstract = {In past years, cloud storage systems saw an enormous rise in usage. However, despite their popularity and importance as underlying infrastructure for more complex cloud services, today's cloud storage systems do not account for compliance with regulatory, organizational, or contractual data handling requirements by design. Since legislation increasingly responds to rising data protection and privacy concerns, complying with data handling requirements becomes a crucial property for cloud storage systems. We present PRADA, a practical approach to account for compliance with data handling requirements in key-value based cloud storage systems. To achieve this goal, PRADA introduces a transparent data handling layer, which empowers clients to request specific data handling requirements and enables operators of cloud storage systems to comply with them. We implement PRADA on top of the distributed database Cassandra and show in our evaluation that complying with data handling requirements in cloud storage systems is practical in real-world cloud deployments as used for microblogging, data sharing in the Internet of Things, and distributed email storage.},
    }

  • S. Zemanek, I. Hacker, K. Wolsing, E. Wagner, M. Henze, and M. Serror, “PowerDuck: A GOOSE Data Set of Cyberattacks in Substations,” in Proceedings of the 15th Workshop on Cyber Security Experimentation and Test (CSET), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Power grids worldwide are increasingly victims of cyberattacks, where attackers can cause immense damage to critical infrastructure. The growing digitalization and networking in power grids combined with insufficient protection against cyberattacks further exacerbate this trend. Hence, security engineers and researchers must counter these new risks by continuously improving security measures. Data sets of real network traffic during cyberattacks play a decisive role in analyzing and understanding such attacks. Therefore, this paper presents PowerDuck, a publicly available security data set containing network traces of GOOSE communication in a physical substation testbed. The data set includes recordings of various scenarios with and without the presence of attacks. Furthermore, all network packets originating from the attacker are clearly labeled to facilitate their identification. We thus envision PowerDuck improving and complementing existing data sets of substations, which are often generated synthetically, thus enhancing the security of power grids.

    @inproceedings{ZHW+22,
    author = {Zemanek, Sven and Hacker, Immanuel and Wolsing, Konrad and Wagner, Eric and Henze, Martin and Serror, Martin},
    title = {{PowerDuck: A GOOSE Data Set of Cyberattacks in Substations}},
    booktitle = {Proceedings of the 15th Workshop on Cyber Security Experimentation and Test (CSET)},
    month = {08},
    year = {2022},
    doi = {10.1145/3546096.3546102},
    abstract = {Power grids worldwide are increasingly victims of cyberattacks, where attackers can cause immense damage to critical infrastructure. The growing digitalization and networking in power grids combined with insufficient protection against cyberattacks further exacerbate this trend. Hence, security engineers and researchers must counter these new risks by continuously improving security measures. Data sets of real network traffic during cyberattacks play a decisive role in analyzing and understanding such attacks. Therefore, this paper presents PowerDuck, a publicly available security data set containing network traces of GOOSE communication in a physical substation testbed. The data set includes recordings of various scenarios with and without the presence of attacks. Furthermore, all network packets originating from the attacker are clearly labeled to facilitate their identification. We thus envision PowerDuck improving and complementing existing data sets of substations, which are often generated synthetically, thus enhancing the security of power grids.},
    }

  • M. Dahlmanns, J. Lohmöller, J. Pennekamp, J. Bodenhausen, K. Wehrle, and M. Henze, “Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things,” in Proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ASIA CCS), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both, retrofitted existing protocols and newly developed secure alternatives, are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2 {\%} vs. 0.4 {\%}), the overall adoption of TLS is comparably low (6.5 {\%} of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42 {\%} of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.

    @inproceedings{DLP+22,
    author = {Dahlmanns, Markus and Lohm{\"o}ller, Johannes and Pennekamp, Jan and Bodenhausen, J{\"o}rn and Wehrle, Klaus and Henze, Martin},
    title = {{Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things}},
    booktitle = {Proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ASIA CCS)},
    month = {05},
    year = {2022},
    doi = {10.1145/3488932.3497762},
    abstract = {The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both, retrofitted existing protocols and newly developed secure alternatives, are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2 {\%} vs. 0.4 {\%}), the overall adoption of TLS is comparably low (6.5 {\%} of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42 {\%} of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.},
    }

  • D. Kus, E. Wagner, J. Pennekamp, K. Wolsing, I. B. Fink, M. Dahlmanns, K. Wehrle, and M. Henze, “A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection,” in Proceedings of the 8th ACM Cyber-Physical System Security Workshop (CPSS), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99 {\%}. However, these approaches are typically trained not only on benign traffic but also on attacks and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2 {\%} and 14.7 {\%} for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.

    @inproceedings{KWP+22,
    author = {Kus, Dominik and Wagner, Eric and Pennekamp, Jan and Wolsing, Konrad and Fink, Ina Berenice and Dahlmanns, Markus and Wehrle, Klaus and Henze, Martin},
    title = {{A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection}},
    booktitle = {Proceedings of the 8th ACM Cyber-Physical System Security Workshop (CPSS)},
    month = {05},
    year = {2022},
    doi = {10.1145/3494107.3522773},
    abstract = {Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99 {\%}. However, these approaches are typically trained not only on benign traffic but also on attacks and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2 {\%} and 14.7 {\%} for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.},
    }

  • E. Wagner, J. Bauer, and M. Henze, “Take a Bite of the Reality Sandwich: Revisiting the Security of Progressive Message Authentication Codes,” in Proceedings of the 15th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Message authentication guarantees the integrity of messages exchanged over untrusted channels. However, to achieve this goal, message authentication considerably expands packet sizes, which is especially problematic in constrained wireless environments. To address this issue, progressive message authentication provides initially reduced integrity protection that is often sufficient to process messages upon reception. This reduced security is then successively improved with subsequent messages to uphold the strong guarantees of traditional integrity protection. However, contrary to previous claims, we show in this paper that existing progressive message authentication schemes are highly susceptible to packet loss induced by poor channel conditions or jamming attacks. Thus, we consider it imperative to rethink how authentication tags depend on the successful reception of surrounding packets. To this end, we propose R2-D2, which uses randomized dependencies with parameterized security guarantees to increase the resilience of progressive authentication against packet loss. To deploy our approach to resource-constrained devices, we introduce SP-MAC, which implements R2-D2 using efficient XOR operations. Our evaluation shows that SP-MAC is resilient to sophisticated network-level attacks and operates as resource-conscious and fast as existing, yet insecure, progressive message authentication schemes.

    @inproceedings{WBH22,
    author = {Wagner, Eric and Bauer, Jan and Henze, Martin},
    title = {{Take a Bite of the Reality Sandwich: Revisiting the Security of Progressive Message Authentication Codes}},
    booktitle = {Proceedings of the 15th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec)},
    month = {05},
    year = {2022},
    doi = {10.1145/3507657.3528539},
    abstract = {Message authentication guarantees the integrity of messages exchanged over untrusted channels. However, to achieve this goal, message authentication considerably expands packet sizes, which is especially problematic in constrained wireless environments. To address this issue, progressive message authentication provides initially reduced integrity protection that is often sufficient to process messages upon reception. This reduced security is then successively improved with subsequent messages to uphold the strong guarantees of traditional integrity protection. However, contrary to previous claims, we show in this paper that existing progressive message authentication schemes are highly susceptible to packet loss induced by poor channel conditions or jamming attacks. Thus, we consider it imperative to rethink how authentication tags depend on the successful reception of surrounding packets. To this end, we propose R2-D2, which uses randomized dependencies with parameterized security guarantees to increase the resilience of progressive authentication against packet loss. To deploy our approach to resource-constrained devices, we introduce SP-MAC, which implements R2-D2 using efficient XOR operations. Our evaluation shows that SP-MAC is resilient to sophisticated network-level attacks and operates as resource-conscious and fast as existing, yet insecure, progressive message authentication schemes.},
    }

  • E. Wagner, M. Serror, K. Wehrle, and M. Henze, “BP-MAC: Fast Authentication for Short Messages,” in Proceedings of the 15th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Resource-constrained devices increasingly rely on wireless communication for the reliable and low-latency transmission of short messages. However, especially the implementation of adequate integrity protection of time-critical messages places a significant burden on these devices. We address this issue by proposing BP-MAC, a fast and memory-efficient approach for computing message authentication codes based on the well-established Carter-Wegman construction. Our key idea is to offload resource-intensive computations to idle phases and thus save valuable time in latency-critical phases, i.e., when new data awaits processing. Therefore, BP-MAC leverages a universal hash function designed for the bitwise preprocessing of integrity protection to later only require a few XOR operations during the latency-critical phase. Our evaluation on embedded hardware shows that BP-MAC outperforms the state-of-the-art in terms of latency and memory overhead, notably for small messages, as required to adequately protect resource-constrained devices with stringent security and latency requirements.

    @inproceedings{WSWH22,
    author = {Wagner, Eric and Serror, Martin and Wehrle, Klaus and Henze, Martin},
    title = {{BP-MAC: Fast Authentication for Short Messages}},
    booktitle = {Proceedings of the 15th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec)},
    month = {05},
    year = {2022},
    doi = {10.1145/3507657.3528554},
    abstract = {Resource-constrained devices increasingly rely on wireless communication for the reliable and low-latency transmission of short messages. However, especially the implementation of adequate integrity protection of time-critical messages places a significant burden on these devices. We address this issue by proposing BP-MAC, a fast and memory-efficient approach for computing message authentication codes based on the well-established Carter-Wegman construction. Our key idea is to offload resource-intensive computations to idle phases and thus save valuable time in latency-critical phases, i.e., when new data awaits processing. Therefore, BP-MAC leverages a universal hash function designed for the bitwise preprocessing of integrity protection to later only require a few XOR operations during the latency-critical phase. Our evaluation on embedded hardware shows that BP-MAC outperforms the state-of-the-art in terms of latency and memory overhead, notably for small messages, as required to adequately protect resource-constrained devices with stringent security and latency requirements.},
    }
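    The key idea of BP-MAC, precomputing keyed per-bit contributions during idle phases so that only XOR operations remain once a message arrives, can be sketched generically. The following Python sketch illustrates only this bitwise-preprocessing principle of Carter-Wegman-style MACs; the SHA-256-based derivation, the table layout, and all names are illustrative assumptions, not the construction from the paper.

    ```python
    import hashlib

    def keyed_contribution(key: bytes, bit_index: int) -> int:
        # Pseudorandom 64-bit contribution for "message bit i is set".
        # SHA-256 is used here purely for illustration, not the paper's hash.
        h = hashlib.sha256(key + bit_index.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:8], "big")

    def precompute_table(key: bytes, msg_bits: int) -> list[int]:
        # Idle-phase work: one contribution per message bit position.
        return [keyed_contribution(key, i) for i in range(msg_bits)]

    def fast_tag(table: list[int], msg: bytes, otp: int) -> int:
        # Latency-critical phase: only XORs over the set bits, finally
        # masked with a one-time pad (otp) that must be fresh per message.
        acc = 0
        for i in range(len(msg) * 8):
            if (msg[i // 8] >> (7 - i % 8)) & 1:
                acc ^= table[i]
        return acc ^ otp
    ```

    Because the tag is an XOR of per-bit contributions, the unmasked hash is linear in the message, which is why Carter-Wegman constructions require a fresh one-time pad for every message.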

  • E. Wagner, R. Matzutt, J. Pennekamp, L. Bader, I. Bajelidze, K. Wehrle, and M. Henze, “Scalable and Privacy-Focused Company-Centric Supply Chain Management,” in Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), 2022.
    [BibTeX] [Abstract] [PDF] [DOI]

    Blockchain technology promises to overcome trust and privacy concerns inherent to centralized information sharing. However, current decentralized supply chain management systems either do not meet privacy and scalability requirements or require a trustworthy consortium, which is challenging for increasingly dynamic supply chains with constantly changing participants. In this paper, we propose CCChain, a scalable and privacy-aware supply chain management system that stores all information locally to give companies complete sovereignty over who accesses their data. Still, tamper protection of all data through a permissionless blockchain enables on-demand tracking and tracing of products as well as reliable information sharing while affording the detection of data inconsistencies. Our evaluation confirms that CCChain offers superior scalability in comparison to alternatives while also enabling near real-time tracking and tracing for many, less complex products.

    @inproceedings{WMP+22,
    author = {Wagner, Eric and Matzutt, Roman and Pennekamp, Jan and Bader, Lennart and Bajelidze, Irakli and Wehrle, Klaus and Henze, Martin},
    title = {{Scalable and Privacy-Focused Company-Centric Supply Chain Management}},
    booktitle = {Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC)},
    month = {05},
    year = {2022},
    doi = {10.1109/ICBC54727.2022.9805503},
    abstract = {Blockchain technology promises to overcome trust and privacy concerns inherent to centralized information sharing. However, current decentralized supply chain management systems either do not meet privacy and scalability requirements or require a trustworthy consortium, which is challenging for increasingly dynamic supply chains with constantly changing participants. In this paper, we propose CCChain, a scalable and privacy-aware supply chain management system that stores all information locally to give companies complete sovereignty over who accesses their data. Still, tamper protection of all data through a permissionless blockchain enables on-demand tracking and tracing of products as well as reliable information sharing while affording the detection of data inconsistencies. Our evaluation confirms that CCChain offers superior scalability in comparison to alternatives while also enabling near real-time tracking and tracing for many, less complex products.},
    }

2021

  • R. Uetz, C. Hemminghaus, L. Hackländer, P. Schlipper, and M. Henze, “Reproducible and Adaptable Log Data Generation for Sound Cybersecurity Experiments,” in Proceedings of the 37th Annual Computer Security Applications Conference (ACSAC), 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    Artifacts such as log data and network traffic are fundamental for cybersecurity research, e.g., in the area of intrusion detection. Yet, most research is based on artifacts that are not available to others or cannot be adapted to one's own purposes, thus making it difficult to reproduce and build on existing work. In this paper, we identify the challenges of artifact generation with the goal of conducting sound experiments that are valid, controlled, and reproducible. We argue that testbeds for artifact generation have to be designed specifically with reproducibility and adaptability in mind. To achieve this goal, we present SOCBED, our proof-of-concept implementation and the first testbed with a focus on generating realistic log data for cybersecurity experiments in a reproducible and adaptable manner. SOCBED enables researchers to reproduce testbed instances on commodity computers, adapt them to their own requirements, and verify their correct functionality. We evaluate SOCBED with an exemplary, practical experiment on detecting a multi-step intrusion of an enterprise network and show that the resulting experiment is indeed valid, controlled, and reproducible. Both SOCBED and the log dataset underlying our evaluation are freely available.

    @inproceedings{UHH+21,
    author = {Uetz, Rafael and Hemminghaus, Christian and Hackl{\"a}nder, Louis and Schlipper, Philipp and Henze, Martin},
    title = {{Reproducible and Adaptable Log Data Generation for Sound Cybersecurity Experiments}},
    booktitle = {Proceedings of the 37th Annual Computer Security Applications Conference (ACSAC)},
    month = {12},
    year = {2021},
    doi = {10.1145/3485832.3488020},
    abstract = {Artifacts such as log data and network traffic are fundamental for cybersecurity research, e.g., in the area of intrusion detection. Yet, most research is based on artifacts that are not available to others or cannot be adapted to one's own purposes, thus making it difficult to reproduce and build on existing work. In this paper, we identify the challenges of artifact generation with the goal of conducting sound experiments that are valid, controlled, and reproducible. We argue that testbeds for artifact generation have to be designed specifically with reproducibility and adaptability in mind. To achieve this goal, we present SOCBED, our proof-of-concept implementation and the first testbed with a focus on generating realistic log data for cybersecurity experiments in a reproducible and adaptable manner. SOCBED enables researchers to reproduce testbed instances on commodity computers, adapt them to their own requirements, and verify their correct functionality. We evaluate SOCBED with an exemplary, practical experiment on detecting a multi-step intrusion of an enterprise network and show that the resulting experiment is indeed valid, controlled, and reproducible. Both SOCBED and the log dataset underlying our evaluation are freely available.},
    }

  • Ö. Sen, D. van der Velde, P. Linnartz, I. Hacker, M. Henze, M. Andres, and A. Ulbig, “Investigating Man-in-the-Middle-based False Data Injection in a Smart Grid Laboratory Environment,” in Proceedings of 2021 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe), 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    With the increasing use of information and communication technology in electrical power grids, the security of energy supply is increasingly threatened by cyber-attacks. Traditional cyber-security measures, such as firewalls or intrusion detection/prevention systems, can be used as mitigation and prevention measures, but their effective use requires a deep understanding of the potential threat landscape and complex attack processes in energy information systems. Given the complexity and lack of detailed knowledge of coordinated, timed attacks in smart grid applications, we need information and insight into realistic attack scenarios in an appropriate and practical setting. In this paper, we present a man-in-the-middle-based attack scenario that intercepts process communication between control systems and field devices, employs false data injection techniques, and performs data corruption such as sending false commands to field devices. We demonstrate the applicability of the presented attack scenario in a physical smart grid laboratory environment and analyze the generated data under normal and attack conditions to extract domain-specific knowledge for detection mechanisms.

    @inproceedings{SVL+21,
    author = {Sen, {\"O}mer and van der Velde, Dennis and Linnartz, Philipp and Hacker, Immanuel and Henze, Martin and Andres, Michael and Ulbig, Andreas},
    title = {{Investigating Man-in-the-Middle-based False Data Injection in a Smart Grid Laboratory Environment}},
    booktitle = {Proceedings of 2021 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe)},
    month = {10},
    year = {2021},
    abstract = {With the increasing use of information and communication technology in electrical power grids, the security of energy supply is increasingly threatened by cyber-attacks. Traditional cyber-security measures, such as firewalls or intrusion detection/prevention systems, can be used as mitigation and prevention measures, but their effective use requires a deep understanding of the potential threat landscape and complex attack processes in energy information systems. Given the complexity and lack of detailed knowledge of coordinated, timed attacks in smart grid applications, we need information and insight into realistic attack scenarios in an appropriate and practical setting. In this paper, we present a man-in-the-middle-based attack scenario that intercepts process communication between control systems and field devices, employs false data injection techniques, and performs data corruption such as sending false commands to field devices. We demonstrate the applicability of the presented attack scenario in a physical smart grid laboratory environment and analyze the generated data under normal and attack conditions to extract domain-specific knowledge for detection mechanisms.},
    doi = {10.1109/ISGTEurope52324.2021.9640002},
    }

  • M. Rademacher, H. Linka, T. Horstmann, and M. Henze, “Path Loss in Urban LoRa Networks: A Large-Scale Measurement Study,” in Proceedings of the 2021 IEEE 94th Vehicular Technology Conference (VTC2021-Fall), 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    Urban LoRa networks promise to provide a cost-efficient and scalable communication backbone for smart cities. One core challenge in rolling out and operating these networks is radio network planning, i.e., precise predictions about possible new locations and their impact on network coverage. Path loss models aid in this task, but evaluating and comparing different models requires a sufficiently large set of high-quality received packet power samples. In this paper, we report on a corresponding large-scale measurement study covering an urban area of 200 km² over a period of 230 days using sensors deployed on garbage trucks, resulting in more than 112 thousand high-quality samples for received packet power. Using this data, we compare eleven previously proposed path loss models and additionally provide new coefficients for the Log-distance model. Our results reveal that the Log-distance model and other well-known empirical models such as Okumura or Winner+ provide reasonable estimations in an urban environment, and terrain-based models such as ITM or ITWOM have no advantages. In addition, we derive estimations for the needed sample size in similar measurement campaigns. To stimulate further research in this direction, we make all our data publicly available.

    @inproceedings{RLHH21,
    author = {Rademacher, Michael and Linka, Hendrik and Horstmann, Thorsten and Henze, Martin},
    title = {{Path Loss in Urban LoRa Networks: A Large-Scale Measurement Study}},
    booktitle = {Proceedings of the 2021 IEEE 94th Vehicular Technology Conference (VTC2021-Fall)},
    month = {09},
    year = {2021},
    doi = {10.1109/VTC2021-Fall52928.2021.9625531},
    abstract = {Urban LoRa networks promise to provide a cost-efficient and scalable communication backbone for smart cities. One core challenge in rolling out and operating these networks is radio network planning, i.e., precise predictions about possible new locations and their impact on network coverage. Path loss models aid in this task, but evaluating and comparing different models requires a sufficiently large set of high-quality received packet power samples. In this paper, we report on a corresponding large-scale measurement study covering an urban area of 200 km$^2$ over a period of 230 days using sensors deployed on garbage trucks, resulting in more than 112 thousand high-quality samples for received packet power. Using this data, we compare eleven previously proposed path loss models and additionally provide new coefficients for the Log-distance model. Our results reveal that the Log-distance model and other well-known empirical models such as Okumura or Winner+ provide reasonable estimations in an urban environment, and terrain-based models such as ITM or ITWOM have no advantages. In addition, we derive estimations for the needed sample size in similar measurement campaigns. To stimulate further research in this direction, we make all our data publicly available.},
    }
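    The Log-distance model evaluated above has the standard closed form PL(d) = PL(d₀) + 10·n·log₁₀(d/d₀). A minimal sketch, with placeholder coefficients (the fitted values for urban LoRa are reported in the paper and not reproduced here):

    ```python
    import math

    def log_distance_path_loss(d_m: float, pl0_db: float = 40.0,
                               n: float = 3.0, d0_m: float = 1.0) -> float:
        """Log-distance path loss in dB at distance d_m meters.

        pl0_db (loss at reference distance d0_m) and the path loss
        exponent n are illustrative placeholders, not the coefficients
        fitted in the measurement study.
        """
        return pl0_db + 10.0 * n * math.log10(d_m / d0_m)
    ```

    Fitting the model to measurements then amounts to estimating pl0_db and n, e.g., by least-squares regression of received power samples over log-distance.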

  • Ö. Sen, D. van der Velde, S. N. Peters, and M. Henze, “An Approach of Replicating Multi-Staged Cyber-Attacks and Countermeasures in a Smart Grid Co-Simulation Environment,” in Proceedings of the 26th International Conference on Electricity Distribution (CIRED), 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    While the digitization of power distribution grids brings many benefits, it also introduces new vulnerabilities for cyber-attacks. To maintain secure operations in the emerging threat landscape, detecting and implementing countermeasures against cyber-attacks are paramount. However, due to the lack of publicly available attack data against Smart Grids (SGs) for countermeasure development, simulation-based data generation approaches offer the potential to provide the needed data foundation. Therefore, our proposed approach provides flexible and scalable replication of multi-staged cyber-attacks in an SG Co-Simulation Environment (COSE). The COSE consists of an energy grid simulator, simulators for Operational Technology (OT) devices, and a network emulator for realistic IT process networks. Focusing on defensive and offensive use cases in COSE, our simulated attacker can perform network scans, find vulnerabilities, exploit them, gain administrative privileges, and execute malicious commands on OT devices. As an exemplary countermeasure, we present a built-in Intrusion Detection System (IDS) that analyzes generated network traffic using anomaly detection with Machine Learning (ML) approaches. In this work, we provide an overview of the SG COSE, present a multi-stage attack model with the potential to disrupt grid operations, and show exemplary performance evaluations of the IDS in specific scenarios.

    @inproceedings{SVPH21,
    author = {Sen, {\"O}mer and van der Velde, Dennis and Peters, Sebastian N. and Henze, Martin},
    title = {{An Approach of Replicating Multi-Staged Cyber-Attacks and Countermeasures in a Smart Grid Co-Simulation Environment}},
    booktitle = {Proceedings of the 26th International Conference on Electricity Distribution (CIRED)},
    month = {09},
    year = {2021},
    doi = {10.1049/icp.2021.1632},
    abstract = {While the digitization of power distribution grids brings many benefits, it also introduces new vulnerabilities for cyber-attacks. To maintain secure operations in the emerging threat landscape, detecting and implementing countermeasures against cyber-attacks are paramount. However, due to the lack of publicly available attack data against Smart Grids (SGs) for countermeasure development, simulation-based data generation approaches offer the potential to provide the needed data foundation. Therefore, our proposed approach provides flexible and scalable replication of multi-staged cyber-attacks in an SG Co-Simulation Environment (COSE). The COSE consists of an energy grid simulator, simulators for Operational Technology (OT) devices, and a network emulator for realistic IT process networks. Focusing on defensive and offensive use cases in COSE, our simulated attacker can perform network scans, find vulnerabilities, exploit them, gain administrative privileges, and execute malicious commands on OT devices. As an exemplary countermeasure, we present a built-in Intrusion Detection System (IDS) that analyzes generated network traffic using anomaly detection with Machine Learning (ML) approaches. In this work, we provide an overview of the SG COSE, present a multi-stage attack model with the potential to disrupt grid operations, and show exemplary performance evaluations of the IDS in specific scenarios.},
    }

  • T. Krause, R. Ernst, B. Klaer, I. Hacker, and M. Henze, “Cybersecurity in Power Grids: Challenges and Opportunities,” Sensors, vol. 21, iss. 18, 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    Increasing volatilities within power transmission and distribution force power grid operators to amplify their use of communication infrastructure to monitor and control their grid. The resulting increase in communication creates a larger attack surface for malicious actors. Indeed, cyberattacks on power grids have already succeeded in causing temporary, large-scale blackouts in the recent past. In this paper, we analyze the communication infrastructure of power grids to derive resulting fundamental challenges of power grids with respect to cybersecurity. Based on these challenges, we identify a broad set of resulting attack vectors and attack scenarios that threaten the security of power grids. To address these challenges, we propose to rely on a defense-in-depth strategy, which encompasses measures for (i) device and application security, (ii) network security, (iii) physical security, as well as (iv) policies, procedures, and awareness. For each of these categories, we distill and discuss a comprehensive set of state-of-the-art approaches as well as identify further opportunities to strengthen cybersecurity in interconnected power grids.

    @article{KEK+21,
    author = {Krause, Tim and Ernst, Raphael and Klaer, Benedikt and Hacker, Immanuel and Henze, Martin},
    title = {{Cybersecurity in Power Grids: Challenges and Opportunities}},
    journal = {Sensors},
    volume = {21},
    number = {18},
    month = {09},
    year = {2021},
    doi = {10.3390/s21186225},
    abstract = {Increasing volatilities within power transmission and distribution force power grid operators to amplify their use of communication infrastructure to monitor and control their grid. The resulting increase in communication creates a larger attack surface for malicious actors. Indeed, cyberattacks on power grids have already succeeded in causing temporary, large-scale blackouts in the recent past. In this paper, we analyze the communication infrastructure of power grids to derive resulting fundamental challenges of power grids with respect to cybersecurity. Based on these challenges, we identify a broad set of resulting attack vectors and attack scenarios that threaten the security of power grids. To address these challenges, we propose to rely on a defense-in-depth strategy, which encompasses measures for (i) device and application security, (ii) network security, (iii) physical security, as well as (iv) policies, procedures, and awareness. For each of these categories, we distill and discuss a comprehensive set of state-of-the-art approaches as well as identify further opportunities to strengthen cybersecurity in interconnected power grids.},
    }

  • Ö. Sen, D. van der Velde, K. A. Wehrmeister, I. Hacker, M. Henze, and M. Andres, “Towards an Approach to Contextual Detection of Multi-Stage Cyber Attacks in Smart Grids,” in Proceedings of the 4th International Conference on Smart Energy Systems and Technologies (SEST), 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    Electric power grids are at risk of being compromised by high-impact cyber-security threats such as coordinated, timed attacks. Navigating this new threat landscape requires a deep understanding of the potential risks and complex attack processes in energy information systems, which in turn demands an unmanageable manual effort to timely process a large amount of cross-domain information. To provide an adequate basis to contextually assess and understand the situation of smart grids in case of coordinated cyber-attacks, we need a systematic and coherent approach to identify cyber incidents. In this paper, we present an approach that collects and correlates cross-domain cyber threat information to detect multi-stage cyber-attacks in energy information systems. We investigate the applicability and performance of the presented correlation approach and discuss the results to highlight challenges in domain-specific detection mechanisms.

    @inproceedings{SVW+21,
    author = {Sen, {\"O}mer and van der Velde, Dennis and Wehrmeister, Katharina A. and Hacker, Immanuel and Henze, Martin and Andres, Michael},
    title = {{Towards an Approach to Contextual Detection of Multi-Stage Cyber Attacks in Smart Grids}},
    booktitle = {Proceedings of the 4th International Conference on Smart Energy Systems and Technologies (SEST)},
    month = {09},
    year = {2021},
    doi = {10.1109/SEST50973.2021.9543359},
    abstract = {Electric power grids are at risk of being compromised by high-impact cyber-security threats such as coordinated, timed attacks. Navigating this new threat landscape requires a deep understanding of the potential risks and complex attack processes in energy information systems, which in turn demands an unmanageable manual effort to timely process a large amount of cross-domain information. To provide an adequate basis to contextually assess and understand the situation of smart grids in case of coordinated cyber-attacks, we need a systematic and coherent approach to identify cyber incidents. In this paper, we present an approach that collects and correlates cross-domain cyber threat information to detect multi-stage cyber-attacks in energy information systems. We investigate the applicability and performance of the presented correlation approach and discuss the results to highlight challenges in domain-specific detection mechanisms.},
    }

  • R. Matzutt, B. Kalde, J. Pennekamp, A. Drichel, M. Henze, and K. Wehrle, “CoinPrune: Shrinking Bitcoin’s Blockchain Retrospectively,” IEEE Transactions on Network and Service Management, vol. 18, iss. 3, 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin’s set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot’s correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.

    @article{MKP+21,
    author = {Matzutt, Roman and Kalde, Benedikt and Pennekamp, Jan and Drichel, Arthur and Henze, Martin and Wehrle, Klaus},
    title = {{CoinPrune: Shrinking Bitcoin's Blockchain Retrospectively}},
    journal = {IEEE Transactions on Network and Service Management},
    volume = {18},
    number = {3},
    month = {09},
    year = {2021},
    doi = {10.1109/TNSM.2021.3073270},
    abstract = {Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin's set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot's correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.},
    }

  • M. Serror, S. Hack, M. Henze, M. Schuba, and K. Wehrle, “Challenges and Opportunities in Securing the Industrial Internet of Things,” IEEE Transactions on Industrial Informatics, vol. 17, iss. 5, 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    Given the tremendous success of the Internet of Things in interconnecting consumer devices, we observe a natural trend to likewise interconnect devices in industrial settings, referred to as Industrial Internet of Things or Industry 4.0. While this coupling of industrial components provides many benefits, it also introduces serious security challenges. Although sharing many similarities with the consumer Internet of Things, securing the Industrial Internet of Things introduces its own challenges but also opportunities, mainly resulting from a longer lifetime of components and a larger scale of networks. In this paper, we identify the unique security goals and challenges of the Industrial Internet of Things, which, unlike consumer deployments, mainly follow from safety and productivity requirements. To address these security goals and challenges, we provide a comprehensive survey of research efforts to secure the Industrial Internet of Things, discuss their applicability, and analyze their security benefits.

    @article{SHH+20,
    author = {Serror, Martin and Hack, Sacha and Henze, Martin and Schuba, Marko and Wehrle, Klaus},
    title = {{Challenges and Opportunities in Securing the Industrial Internet of Things}},
    journal = {IEEE Transactions on Industrial Informatics},
    volume = {17},
    number = {5},
    month = {05},
    year = {2021},
    doi = {10.1109/TII.2020.3023507},
    abstract = {Given the tremendous success of the Internet of Things in interconnecting consumer devices, we observe a natural trend to likewise interconnect devices in industrial settings, referred to as Industrial Internet of Things or Industry 4.0. While this coupling of industrial components provides many benefits, it also introduces serious security challenges. Although sharing many similarities with the consumer Internet of Things, securing the Industrial Internet of Things introduces its own challenges but also opportunities, mainly resulting from a longer lifetime of components and a larger scale of networks. In this paper, we identify the unique security goals and challenges of the Industrial Internet of Things, which, unlike consumer deployments, mainly follow from safety and productivity requirements. To address these security goals and challenges, we provide a comprehensive survey of research efforts to secure the Industrial Internet of Things, discuss their applicability, and analyze their security benefits.},
    }

  • M. Dahlmanns, J. Pennekamp, I. B. Fink, B. Schoolmann, K. Wehrle, and M. Henze, “Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems,” in Proceedings of the 2021 ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS), 2021.
    [BibTeX] [Abstract] [PDF] [DOI]

    The ongoing digitization of industrial manufacturing leads to a decisive change in industrial communication paradigms. Moving from traditional one-to-one to many-to-many communication, publish/subscribe systems promise a more dynamic and efficient exchange of data. However, the resulting significantly more complex communication relationships render traditional end-to-end security futile for sufficiently protecting the sensitive and safety-critical data transmitted in industrial systems. Most notably, the central message brokers inherent in publish/subscribe systems introduce a designated weak spot for security as they can access all communication messages. To address this issue, we propose ENTRUST, a novel solution for key server-based end-to-end security in publish/subscribe systems. ENTRUST transparently realizes confidentiality, integrity, and authentication for publish/subscribe systems without any modification of the underlying protocol. We exemplarily implement ENTRUST on top of MQTT, the de-facto standard for machine-to-machine communication, showing that ENTRUST can integrate seamlessly into existing publish/subscribe systems.

    @inproceedings{DPF+21,
    author = {Dahlmanns, Markus and Pennekamp, Jan and Fink, Ina Berenice and Schoolmann, Bernd and Wehrle, Klaus and Henze, Martin},
    title = {{Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems}},
    booktitle = {Proceedings of the 2021 ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS)},
    year = {2021},
    month = {04},
    doi = {10.1145/3445969.3450423},
    abstract = {The ongoing digitization of industrial manufacturing leads to a decisive change in industrial communication paradigms. Moving from traditional one-to-one to many-to-many communication, publish/subscribe systems promise a more dynamic and efficient exchange of data. However, the resulting significantly more complex communication relationships render traditional end-to-end security futile for sufficiently protecting the sensitive and safety-critical data transmitted in industrial systems. Most notably, the central message brokers inherent in publish/subscribe systems introduce a designated weak spot for security as they can access all communication messages. To address this issue, we propose ENTRUST, a novel solution for key server-based end-to-end security in publish/subscribe systems. ENTRUST transparently realizes confidentiality, integrity, and authentication for publish/subscribe systems without any modification of the underlying protocol. We exemplarily implement ENTRUST on top of MQTT, the de-facto standard for machine-to-machine communication, showing that ENTRUST can integrate seamlessly into existing publish/subscribe systems.},
    }

2020

  • J. Pennekamp, P. Sapel, I. B. Fink, S. Wagner, S. Reuter, C. Hopmann, K. Wehrle, and M. Henze, “Revisiting the Privacy Needs of Real-World Applicable Company Benchmarking,” in Proceedings of the 8th Workshop on Encrypted Computing & Applied Homomorphic Cryptography (WAHC), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Benchmarking the performance of companies is essential to identify improvement potentials in various industries. Due to a competitive environment, this process imposes strong privacy needs, as leaked business secrets can have devastating effects on participating companies. Consequently, related work proposes to protect sensitive input data of companies using secure multi-party computation or homomorphic encryption. However, related work so far does not consider that also the benchmarking algorithm, used in today’s applied real-world scenarios to compute all relevant statistics, itself contains significant intellectual property, and thus needs to be protected. Addressing this issue, we present PCB – a practical design for Privacy-preserving Company Benchmarking that utilizes homomorphic encryption and a privacy proxy – which is specifically tailored for realistic real-world applications in which we protect companies’ sensitive input data and the valuable algorithms used to compute underlying key performance indicators. We evaluate PCB’s performance using synthetic measurements and showcase its applicability alongside an actual company benchmarking performed in the domain of injection molding, covering 48 distinct key performance indicators calculated out of hundreds of different input values. By protecting the privacy of all participants, we enable them to fully profit from the benefits of company benchmarking.

    @inproceedings{PSF+20,
    author = {Pennekamp, Jan and Sapel, Patrick and Fink, Ina Berenice and Wagner, Simon and Reuter, Sebastian and Hopmann, Christian and Wehrle, Klaus and Henze, Martin},
    title = {{Revisiting the Privacy Needs of Real-World Applicable Company Benchmarking}},
    booktitle = {Proceedings of the 8th Workshop on Encrypted Computing {\&} Applied Homomorphic Cryptography (WAHC)},
    month = {12},
    year = {2020},
    doi = {10.25835/0072999},
    abstract = {Benchmarking the performance of companies is essential to identify improvement potentials in various industries. Due to a competitive environment, this process imposes strong privacy needs, as leaked business secrets can have devastating effects on participating companies. Consequently, related work proposes to protect sensitive input data of companies using secure multi-party computation or homomorphic encryption. However, related work so far does not consider that also the benchmarking algorithm, used in today's applied real-world scenarios to compute all relevant statistics, itself contains significant intellectual property, and thus needs to be protected. Addressing this issue, we present PCB -- a practical design for Privacy-preserving Company Benchmarking that utilizes homomorphic encryption and a privacy proxy -- which is specifically tailored for realistic real-world applications in which we protect companies' sensitive input data and the valuable algorithms used to compute underlying key performance indicators. We evaluate PCB's performance using synthetic measurements and showcase its applicability alongside an actual company benchmarking performed in the domain of injection molding, covering 48 distinct key performance indicators calculated out of hundreds of different input values. By protecting the privacy of all participants, we enable them to fully profit from the benefits of company benchmarking.},
    }

  • J. Pennekamp, E. Buchholz, M. Dahlmanns, I. Kunze, S. Braun, E. Wagner, M. Brockmann, K. Wehrle, and M. Henze, “Collaboration is not Evil: A Systematic Look at Security Research for Industrial Use,” in Proceedings of the 2020 ACSAC Workshop on Learning from Authoritative Security Experiment Results (LASER), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Following the recent Internet of Things-induced trends on digitization in general, industrial applications will further evolve as well. With a focus on the domains of manufacturing and production, the Internet of Production pursues the vision of a digitized, globally interconnected, yet secure environment by establishing a distributed knowledge base. Background. As part of our collaborative research of advancing the scope of industrial applications through cybersecurity and privacy, we identified a set of common challenges and pitfalls that surface in such applied interdisciplinary collaborations. Aim. Our goal with this paper is to support researchers in the emerging field of cybersecurity in industrial settings by formalizing our experiences as reference for other research efforts, in industry and academia alike. Method. Based on our experience, we derived a process cycle of performing such interdisciplinary research, from the initial idea to the eventual dissemination and paper writing. This presented methodology strives to successfully bootstrap further research and to encourage further work in this emerging area. Results. Apart from our newly proposed process cycle, we report on our experiences and conduct a case study applying this methodology, raising awareness for challenges in cybersecurity research for industrial applications. We further detail the interplay between our process cycle and the data lifecycle in applied research data management. Finally, we augment our discussion with an industrial as well as an academic view on this research area and highlight that both areas still have to overcome significant challenges to sustainably and securely advance industrial applications. Conclusions. With our proposed process cycle for interdisciplinary research in the intersection of cybersecurity and industrial application, we provide a foundation for further research. We look forward to promising research initiatives, projects, and directions that emerge based on our methodological work.

    @inproceedings{PBD+20,
    author = {Pennekamp, Jan and Buchholz, Erik and Dahlmanns, Markus and Kunze, Ike and Braun, Stefan and Wagner, Eric and Brockmann, Matthias and Wehrle, Klaus and Henze, Martin},
    title = {{Collaboration is not Evil: A Systematic Look at Security Research for Industrial Use}},
    booktitle = {Proceedings of the 2020 ACSAC Workshop on Learning from Authoritative Security Experiment Results (LASER)},
    year = {2020},
    month = {12},
    doi = {10.14722/laser-acsac.2020.23088},
    abstract = {Following the recent Internet of Things-induced trends on digitization in general, industrial applications will further evolve as well. With a focus on the domains of manufacturing and production, the Internet of Production pursues the vision of a digitized, globally interconnected, yet secure environment by establishing a distributed knowledge base.
    Background. As part of our collaborative research of advancing the scope of industrial applications through cybersecurity and privacy, we identified a set of common challenges and pitfalls that surface in such applied interdisciplinary collaborations.
    Aim. Our goal with this paper is to support researchers in the emerging field of cybersecurity in industrial settings by formalizing our experiences as reference for other research efforts, in industry and academia alike.
    Method. Based on our experience, we derived a process cycle of performing such interdisciplinary research, from the initial idea to the eventual dissemination and paper writing. This presented methodology strives to successfully bootstrap further research and to encourage further work in this emerging area.
    Results. Apart from our newly proposed process cycle, we report on our experiences and conduct a case study applying this methodology, raising awareness for challenges in cybersecurity research for industrial applications. We further detail the interplay between our process cycle and the data lifecycle in applied research data management. Finally, we augment our discussion with an industrial as well as an academic view on this research area and highlight that both areas still have to overcome significant challenges to sustainably and securely advance industrial applications.
    Conclusions. With our proposed process cycle for interdisciplinary research in the intersection of cybersecurity and industrial application, we provide a foundation for further research. We look forward to promising research initiatives, projects, and directions that emerge based on our methodological work.},
    }

  • M. Henze, L. Bader, J. Filter, O. Lamberts, S. Ofner, and D. van der Velde, “Poster: Cybersecurity Research and Training for Power Distribution Grids – A Blueprint,” in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS) – Poster Session, 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Mitigating cybersecurity threats in power distribution grids requires a testbed for cybersecurity, e.g., to evaluate the (physical) impact of cyberattacks, generate datasets, test and validate security approaches, as well as train technical personnel. In this paper, we present a blueprint for such a testbed that relies on network emulation and power flow computation to couple real network applications with a simulated power grid. We discuss the benefits of our approach alongside preliminary results and various use cases for cybersecurity research and training for power distribution grids.

    @inproceedings{HBF+20,
    author = {Henze, Martin and Bader, Lennart and Filter, Julian and Lamberts, Olav and Ofner, Simon and van der Velde, Dennis},
    title = {{Poster: Cybersecurity Research and Training for Power Distribution Grids -- A Blueprint}},
    booktitle = {Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS) - Poster Session},
    month = {11},
    year = {2020},
    doi = {10.1145/3372297.3420016},
    abstract = {Mitigating cybersecurity threats in power distribution grids requires a testbed for cybersecurity, e.g., to evaluate the (physical) impact of cyberattacks, generate datasets, test and validate security approaches, as well as train technical personnel. In this paper, we present a blueprint for such a testbed that relies on network emulation and power flow computation to couple real network applications with a simulated power grid. We discuss the benefits of our approach alongside preliminary results and various use cases for cybersecurity research and training for power distribution grids.},
    }

  • K. Wolsing, E. Wagner, and M. Henze, “Poster: Facilitating Protocol-independent Industrial Intrusion Detection Systems,” in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS) – Poster Session, 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Cyber-physical systems are increasingly threatened by sophisticated attackers, also attacking the physical aspect of systems. Supplementing protective measures, industrial intrusion detection systems promise to detect such attacks. However, due to industrial protocol diversity and lack of standard interfaces, great efforts are required to adapt these technologies to a large number of different protocols. To address this issue, we identify existing universally applicable intrusion detection approaches and propose a transcription for industrial protocols to realize protocol-independent semantic intrusion detection on top of different industrial protocols.

    @inproceedings{WWH20,
    author = {Wolsing, Konrad and Wagner, Eric and Henze, Martin},
    title = {{Poster: Facilitating Protocol-independent Industrial Intrusion Detection Systems}},
    booktitle = {Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS) - Poster Session},
    month = {11},
    year = {2020},
    doi = {10.1145/3372297.3420019},
    abstract = {Cyber-physical systems are increasingly threatened by sophisticated attackers, also attacking the physical aspect of systems. Supplementing protective measures, industrial intrusion detection systems promise to detect such attacks. However, due to industrial protocol diversity and lack of standard interfaces, great efforts are required to adapt these technologies to a large number of different protocols. To address this issue, we identify existing universally applicable intrusion detection approaches and propose a transcription for industrial protocols to realize protocol-independent semantic intrusion detection on top of different industrial protocols.},
    }

  • M. Dahlmanns, J. Lohmöller, I. B. Fink, J. Pennekamp, K. Wehrle, and M. Henze, “Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments,” in Proceedings of the Internet Measurement Conference (IMC), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Due to increasing digitalization, formerly isolated industrial networks, e.g., for factory and process automation, move closer and closer to the Internet, mandating secure communication. However, securely setting up OPC UA, the prime candidate for secure industrial communication, is challenging due to a large variety of insecure options. To study whether Internet-facing OPC UA appliances are configured securely, we actively scan the IPv4 address space for publicly reachable OPC UA systems and assess the security of their configurations. We observe problematic security configurations such as missing access control (on 24% of hosts), disabled security functionality (24%), or use of deprecated cryptographic primitives (25%) on in total 92% of the reachable deployments. Furthermore, we discover several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, in this paper, we highlight commonly found security misconfigurations and underline the importance of appropriate configuration for security-featuring protocols.

    @inproceedings{DLF+20,
    author = {Dahlmanns, Markus and Lohm{\"o}ller, Johannes and Fink, Ina Berenice and Pennekamp, Jan and Wehrle, Klaus and Henze, Martin},
    title = {{Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments}},
    booktitle = {Proceedings of the Internet Measurement Conference (IMC)},
    month = {10},
    year = {2020},
    doi = {10.1145/3419394.3423666},
    abstract = {Due to increasing digitalization, formerly isolated industrial networks, e.g., for factory and process automation, move closer and closer to the Internet, mandating secure communication. However, securely setting up OPC UA, the prime candidate for secure industrial communication, is challenging due to a large variety of insecure options. To study whether Internet-facing OPC UA appliances are configured securely, we actively scan the IPv4 address space for publicly reachable OPC UA systems and assess the security of their configurations. We observe problematic security configurations such as missing access control (on 24% of hosts), disabled security functionality (24%), or use of deprecated cryptographic primitives (25%) on in total 92% of the reachable deployments. Furthermore, we discover several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, in this paper, we highlight commonly found security misconfigurations and underline the importance of appropriate configuration for security-featuring protocols.},
    }

  • B. Klaer, Ö. Sen, D. van der Velde, I. Hacker, M. Andres, and M. Henze, “Graph-based Model of Smart Grid Architectures,” in Proceedings of the 3rd International Conference on Smart Energy Systems and Technologies (SEST), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    The rising use of information and communication technology in smart grids likewise increases the risk of failures that endanger the security of power supply, e.g., due to errors in the communication configuration, faulty control algorithms, or cyber-attacks. Co-simulations can be used to investigate such effects, but require precise modeling of the energy, communication, and information domain within an integrated smart grid infrastructure model. Given the complexity and lack of detailed publicly available communication network models for smart grid scenarios, there is a need for an automated and systematic approach to creating such coupled models. In this paper, we present an approach to automatically generate smart grid infrastructure models based on an arbitrary electrical distribution grid model using a generic architectural template. We demonstrate the applicability and unique features of our approach alongside examples concerning network planning, co-simulation setup, and specification of domain-specific intrusion detection systems.

    @inproceedings{KSV+20,
    author = {Klaer, Benedikt and Sen, {\"O}mer and van der Velde, Dennis and Hacker, Immanuel and Andres, Michael and Henze, Martin},
    title = {{Graph-based Model of Smart Grid Architectures}},
    booktitle = {Proceedings of the 3rd International Conference on Smart Energy Systems and Technologies (SEST)},
    month = {09},
    year = {2020},
    doi = {10.1109/SEST48500.2020.9203113},
    abstract = {The rising use of information and communication technology in smart grids likewise increases the risk of failures that endanger the security of power supply, e.g., due to errors in the communication configuration, faulty control algorithms, or cyber-attacks. Co-simulations can be used to investigate such effects, but require precise modeling of the energy, communication, and information domain within an integrated smart grid infrastructure model. Given the complexity and lack of detailed publicly available communication network models for smart grid scenarios, there is a need for an automated and systematic approach to creating such coupled models. In this paper, we present an approach to automatically generate smart grid infrastructure models based on an arbitrary electrical distribution grid model using a generic architectural template. We demonstrate the applicability and unique features of our approach alongside examples concerning network planning, co-simulation setup, and specification of domain-specific intrusion detection systems.},
    }

  • D. van der Velde, M. Henze, P. Kathmann, E. Wassermann, M. Andres, D. Bracht, R. Ernst, G. Hallak, B. Klaer, P. Linnartz, B. Meyer, S. Ofner, T. Pletzer, and R. Sethmann, “Methods for Actors in the Electric Power System to Prevent, Detect and React to ICT Attacks and Failures,” in Proceedings of the 6th IEEE International Energy Conference (ENERGYCon), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    The fundamental changes in power supply and increasing decentralization require more active grid operation and an increased integration of ICT at all power system actors. This trend raises complexity and increasingly leads to interactions between primary grid operation and ICT as well as different power system actors. For example, virtual power plants control various assets in the distribution grid via ICT to jointly market existing flexibilities. Failures of ICT or targeted attacks can thus have serious effects on security of supply and system stability. This paper presents a holistic approach to providing methods specifically for actors in the power system for prevention, detection, and reaction to ICT attacks and failures. The focus of our measures are solutions for ICT monitoring, systems for the detection of ICT attacks and intrusions in the process network, and the provision of actionable guidelines as well as a practice environment for the response to potential ICT security incidents.

    @inproceedings{VHK+20,
    author = {van der Velde, Dennis and Henze, Martin and Kathmann, Philipp and Wassermann, Erik and Andres, Michael and Bracht, Detert and Ernst, Raphael and Hallak, George and Klaer, Benedikt and Linnartz, Philipp and Meyer, Benjamin and Ofner, Simon and Pletzer, Tobias and Sethmann, Richard},
    title = {{Methods for Actors in the Electric Power System to Prevent, Detect and React to ICT Attacks and Failures}},
    booktitle = {Proceedings of the 6th IEEE International Energy Conference (ENERGYCon)},
    month = {09},
    year = {2020},
    doi = {10.1109/ENERGYCon48941.2020.9236523},
    abstract = {The fundamental changes in power supply and increasing decentralization require more active grid operation and an increased integration of ICT at all power system actors. This trend raises complexity and increasingly leads to interactions between primary grid operation and ICT as well as different power system actors. For example, virtual power plants control various assets in the distribution grid via ICT to jointly market existing flexibilities. Failures of ICT or targeted attacks can thus have serious effects on security of supply and system stability. This paper presents a holistic approach to providing methods specifically for actors in the power system for prevention, detection, and reaction to ICT attacks and failures. The focus of our measures are solutions for ICT monitoring, systems for the detection of ICT attacks and intrusions in the process network, and the provision of actionable guidelines as well as a practice environment for the response to potential ICT security incidents.},
    }

  • M. Henze, “The Quest for Secure and Privacy-preserving Cloud-based Industrial Cooperation,” in Proceedings of the 6th International Workshop on Security and Privacy in the Cloud (SPC), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Industrial cooperation promises to leverage the huge amounts of data generated by and collected in industrial deployments to realize valuable improvements such as increases in product quality and profit margins. Cloud computing with its adjustable resources is a prime candidate to serve as the technical foundation for industrial cooperation. However, cloud computing further exaggerates existing security and privacy concerns of industrial companies, leading them to refrain from participating in cloud-based industrial cooperation. To overcome these concerns and thus allow companies to benefit from its advantages, we identify and discuss different aspects of secure and privacy-preserving cloud-based industrial cooperation, ranging from securing industrial devices and networks to secure storage and processing of industrial data in the cloud. By discussing already usable and emerging technical approaches as well as identifying open research challenges, we contribute to realizing the vision of secure and privacy-preserving industrial cooperation.

    @inproceedings{Hen20,
    author = {Henze, Martin},
    title = {{The Quest for Secure and Privacy-preserving Cloud-based Industrial Cooperation}},
    booktitle = {Proceedings of the 6th International Workshop on Security and Privacy in the Cloud (SPC)},
    month = {07},
    year = {2020},
    doi = {10.1109/CNS48642.2020.9162199},
    abstract = {Industrial cooperation promises to leverage the huge amounts of data generated by and collected in industrial deployments to realize valuable improvements such as increases in product quality and profit margins. Cloud computing with its adjustable resources is a prime candidate to serve as the technical foundation for industrial cooperation. However, cloud computing further exaggerates existing security and privacy concerns of industrial companies, leading them to refrain from participating in cloud-based industrial cooperation. To overcome these concerns and thus allow companies to benefit from its advantages, we identify and discuss different aspects of secure and privacy-preserving cloud-based industrial cooperation, ranging from securing industrial devices and networks to secure storage and processing of industrial data in the cloud. By discussing already usable and emerging technical approaches as well as identifying open research challenges, we contribute to realizing the vision of secure and privacy-preserving industrial cooperation.},
    }

  • J. Pennekamp, L. Bader, R. Matzutt, P. Niemietz, D. Trauth, M. Henze, T. Bergs, and K. Wehrle, “Private Multi-Hop Accountability for Supply Chains,” in Proceedings of the Workshop on Blockchain for IoT and Cyber-Physical Systems (BIoTCPS), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    Today’s supply chains are becoming increasingly flexible in nature. While adaptability is vastly increased, these more dynamic associations necessitate more extensive data sharing among different stakeholders while simultaneously overturning previously established levels of trust. Hence, manufacturers’ demand to track goods and to investigate root causes of issues across their supply chains becomes more challenging to satisfy within these now untrusted environments. Complementarily, suppliers need to keep any data irrelevant to such routine checks secret to remain competitive. To bridge the needs of contractors and suppliers in increasingly flexible supply chains, we thus propose to establish a privacy-preserving and distributed multi-hop accountability log among the involved stakeholders based on Attribute-based Encryption and backed by a blockchain. Our large-scale feasibility study is motivated by a real-world manufacturing process, i.e., a fine blanking line, and reveals only modest costs for multi-hop tracing and tracking of goods.

    @inproceedings{PBM+20,
    author = {Pennekamp, Jan and Bader, Lennart and Matzutt, Roman and Niemietz, Philipp and Trauth, Daniel and Henze, Martin and Bergs, Thomas and Wehrle, Klaus},
    title = {{Private Multi-Hop Accountability for Supply Chains}},
    booktitle = {Proceedings of the Workshop on Blockchain for IoT and Cyber-Physical Systems (BIoTCPS)},
    month = {06},
    year = {2020},
    doi = {10.1109/ICCWorkshops49005.2020.9145100},
    abstract = {Today's supply chains are becoming increasingly flexible in nature. While adaptability is vastly increased, these more dynamic associations necessitate more extensive data sharing among different stakeholders while simultaneously overturning previously established levels of trust. Hence, manufacturers' demand to track goods and to investigate root causes of issues across their supply chains becomes more challenging to satisfy within these now untrusted environments. Complementarily, suppliers need to keep any data irrelevant to such routine checks secret to remain competitive. To bridge the needs of contractors and suppliers in increasingly flexible supply chains, we thus propose to establish a privacy-preserving and distributed multi-hop accountability log among the involved stakeholders based on Attribute-based Encryption and backed by a blockchain. Our large-scale feasibility study is motivated by a real-world manufacturing process, i.e., a fine blanking line, and reveals only modest costs for multi-hop tracing and tracking of goods.},
    }

  • R. Matzutt, B. Kalde, J. Pennekamp, A. Drichel, M. Henze, and K. Wehrle, “How to Securely Prune Bitcoin’s Blockchain,” in Proceedings of the 19th IFIP Networking Conference (NETWORKING), 2020.
    [BibTeX] [Abstract] [PDF]

    Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting corresponding approaches. In this work, we revise this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust into the snapshots’ correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune reduces the storage requirements of Bitcoin already by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even more savings for less powerful devices.

    @inproceedings{MKP+20,
    author = {Matzutt, Roman and Kalde, Benedikt and Pennekamp, Jan and Drichel, Arthur and Henze, Martin and Wehrle, Klaus},
    title = {{How to Securely Prune Bitcoin's Blockchain}},
    booktitle = {Proceedings of the 19th IFIP Networking Conference (NETWORKING)},
    month = {06},
    year = {2020},
    abstract = {Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting corresponding approaches. In this work, we revise this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust into the snapshots' correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune reduces the storage requirements of Bitcoin already by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even more savings for less powerful devices.},
    }

  • L. Roepert, M. Dahlmanns, I. B. Fink, J. Pennekamp, and M. Henze, “Assessing the Security of OPC UA Deployments,” in Proceedings of the ITG Workshop on IT Security (ITSec), 2020.
    [BibTeX] [Abstract] [PDF] [DOI]

    To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.

    @inproceedings{RDF+20,
    author = {Roepert, Linus and Dahlmanns, Markus and Fink, Ina Berenice and Pennekamp, Jan and Henze, Martin},
    title = {{Assessing the Security of OPC UA Deployments}},
    booktitle = {Proceedings of the ITG Workshop on IT Security (ITSec)},
    month = {04},
    year = {2020},
    doi = {10.15496/publikation-41813},
    abstract = {To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.},
    }

2019

  • J. Pennekamp, M. Henze, S. Schmidt, P. Niemietz, M. Fey, D. Trauth, T. Bergs, C. Brecher, and K. Wehrle, “Dataflow Challenges in an Internet of Production: A Security & Privacy Perspective,” in Proceedings of the 5th ACM Workshop on Cyber-Physical Systems Security and PrivaCy (CPS-SPC), 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    The Internet of Production (IoP) envisions the interconnection of previously isolated CPS in the area of manufacturing across institutional boundaries to realize benefits such as increased profit margins and product quality as well as reduced product development costs and time to market. This interconnection of CPS will lead to a plethora of new dataflows, especially between (partially) distrusting entities. In this paper, we identify and illustrate these envisioned inter-organizational dataflows and the participating entities alongside two real-world use cases from the production domain: a fine blanking line and a connected job shop. Our analysis allows us to identify distinct security and privacy demands and challenges for these new dataflows. As a foundation to address the resulting requirements, we provide a survey of promising technical building blocks to secure inter-organizational dataflows in an IoP and propose next steps for future research. Consequently, we move an important step forward to overcome security and privacy concerns as an obstacle for realizing the promised potentials in an Internet of Production.

    @inproceedings{PHS+19,
    author = {Pennekamp, Jan and Henze, Martin and Schmidt, Simo and Niemietz, Philipp and Fey, Marcel and Trauth, Daniel and Bergs, Thomas and Brecher, Christian and Wehrle, Klaus},
    title = {{Dataflow Challenges in an Internet of Production: A Security & Privacy Perspective}},
    booktitle = {Proceedings of the 5th ACM Workshop on Cyber-Physical Systems Security and PrivaCy (CPS-SPC)},
    month = {11},
    year = {2019},
    doi = {10.1145/3338499.3357357},
    abstract = {The Internet of Production (IoP) envisions the interconnection of previously isolated CPS in the area of manufacturing across institutional boundaries to realize benefits such as increased profit margins and product quality as well as reduced product development costs and time to market. This interconnection of CPS will lead to a plethora of new dataflows, especially between (partially) distrusting entities. In this paper, we identify and illustrate these envisioned inter-organizational dataflows and the participating entities alongside two real-world use cases from the production domain: a fine blanking line and a connected job shop. Our analysis allows us to identify distinct security and privacy demands and challenges for these new dataflows. As a foundation to address the resulting requirements, we provide a survey of promising technical building blocks to secure inter-organizational dataflows in an IoP and propose next steps for future research. Consequently, we move an important step forward to overcome security and privacy concerns as an obstacle for realizing the promised potentials in an Internet of Production.}
    }

  • J. Pennekamp, J. Hiller, S. Reuter, W. De la Cadena, A. Mitseva, M. Henze, T. Engel, K. Wehrle, and A. Panchenko, “Multipathing Traffic to Reduce Entry Node Exposure in Onion Routing,” in Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP) – Poster Session, 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    Users of an onion routing network, such as Tor, depend on its anonymity properties. However, especially malicious entry nodes, which know the client’s identity, can also observe the whole communication on their link to the client and, thus, conduct several de-anonymization attacks. To limit this exposure and to impede corresponding attacks, we propose to multipath traffic between the client and the middle node to reduce the information an attacker can obtain at a single vantage point. To facilitate the deployment, only clients and selected middle nodes need to implement our approach, which works transparently for the remaining legacy nodes. Furthermore, we let clients control the splitting strategy to prevent any external manipulation.

    @inproceedings{PHR+19,
    author = {Pennekamp, Jan and Hiller, Jens and Reuter, Sebastian and De la Cadena, Wladimir and Mitseva, Asya and Henze, Martin and Engel, Thomas and Wehrle, Klaus and Panchenko, Andriy},
    title = {{Multipathing Traffic to Reduce Entry Node Exposure in Onion Routing}},
    booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP) - Poster Session},
    month = {10},
    year = {2019},
    doi = {10.1109/ICNP.2019.8888029},
    abstract = {Users of an onion routing network, such as Tor, depend on its anonymity properties. However, especially malicious entry nodes, which know the client's identity, can also observe the whole communication on their link to the client and, thus, conduct several de-anonymization attacks. To limit this exposure and to impede corresponding attacks, we propose to multipath traffic between the client and the middle node to reduce the information an attacker can obtain at a single vantage point. To facilitate the deployment, only clients and selected middle nodes need to implement our approach, which works transparently for the remaining legacy nodes. Furthermore, we let clients control the splitting strategy to prevent any external manipulation.}
    }

  • J. Hiller, J. Pennekamp, M. Dahlmanns, M. Henze, A. Panchenko, and K. Wehrle, “Tailoring Onion Routing to the Internet of Things: Security and Privacy in Untrusted Environments,” in Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP), 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    An increasing number of IoT scenarios involve mobile, resource-constrained IoT devices that rely on untrusted networks for Internet connectivity. In such environments, attackers can derive sensitive private information of IoT device owners, e.g., daily routines or secret supply chain procedures, when sniffing on IoT communication and linking IoT devices and owner. Furthermore, untrusted networks do not provide IoT devices with any protection against attacks from the Internet. Anonymous communication using onion routing provides a well-proven mechanism to keep the relationship between communication partners secret and (optionally) protect against network attacks. However, the application of onion routing is challenged by protocol incompatibilities and demanding cryptographic processing on constrained IoT devices, rendering its use infeasible. To close this gap, we tailor onion routing to the IoT by bridging protocol incompatibilities and offloading expensive cryptographic processing to a router or web server of the IoT device owner. Thus, we realize resource-conserving access control and end-to-end security for IoT devices. To prove applicability, we deploy onion routing for the IoT within the well-established Tor network enabling IoT devices to leverage its resources to achieve the same grade of anonymity as readily available to traditional devices.

    @inproceedings{HPD+19,
    author = {Hiller, Jens and Pennekamp, Jan and Dahlmanns, Markus and Henze, Martin and Panchenko, Andriy and Wehrle, Klaus},
    title = {{Tailoring Onion Routing to the Internet of Things: Security and Privacy in Untrusted Environments}},
    booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP)},
    month = {10},
    year = {2019},
    doi = {10.1109/ICNP.2019.8888033},
    abstract = {An increasing number of IoT scenarios involve mobile, resource-constrained IoT devices that rely on untrusted networks for Internet connectivity. In such environments, attackers can derive sensitive private information of IoT device owners, e.g., daily routines or secret supply chain procedures, when sniffing on IoT communication and linking IoT devices and owner. Furthermore, untrusted networks do not provide IoT devices with any protection against attacks from the Internet. Anonymous communication using onion routing provides a well-proven mechanism to keep the relationship between communication partners secret and (optionally) protect against network attacks. However, the application of onion routing is challenged by protocol incompatibilities and demanding cryptographic processing on constrained IoT devices, rendering its use infeasible. To close this gap, we tailor onion routing to the IoT by bridging protocol incompatibilities and offloading expensive cryptographic processing to a router or web server of the IoT device owner. Thus, we realize resource-conserving access control and end-to-end security for IoT devices. To prove applicability, we deploy onion routing for the IoT within the well-established Tor network enabling IoT devices to leverage its resources to achieve the same grade of anonymity as readily available to traditional devices.}
    }

  • J. Hiller, M. Henze, T. Zimmermann, O. Hohlfeld, and K. Wehrle, “The Case for Session Sharing: Relieving Clients from TLS Handshake Overheads,” in Proceedings of the IEEE LCN Symposium on Emerging Topics in Networking, 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    In recent years, the amount of traffic protected with Transport Layer Security (TLS) has significantly increased and new protocols such as HTTP/2 and QUIC further foster this emerging trend. However, protecting traffic with TLS has significant impacts on network entities. While the restrictions for middleboxes have been extensively studied, addressing the impact of TLS on clients and servers has been mostly neglected so far. Especially mobile clients in emerging 5G and IoT deployments suffer from significantly increased latency, traffic, and energy overheads when protecting traffic with TLS. In this paper, we address this emerging topic by thoroughly analyzing the impact of TLS on clients and servers and derive opportunities for significantly decreasing latency of TLS communication and downsizing TLS management traffic, thereby also reducing TLS-induced server load. We propose a protocol compatible redesign of TLS session management to use these opportunities and showcase their potential based on mobile device traffic and mobile web-browsing traces. These show promising potentials for latency improvements by up to 25.8% and energy savings of up to 26.3%.

    @inproceedings{HHZ+19,
    author = {Hiller, Jens and Henze, Martin and Zimmermann, Torsten and Hohlfeld, Oliver and Wehrle, Klaus},
    title = {{The Case for Session Sharing: Relieving Clients from TLS Handshake Overheads}},
    booktitle = {Proceedings of the IEEE LCN Symposium on Emerging Topics in Networking},
    month = {10},
    year = {2019},
    doi = {10.1109/LCNSymposium47956.2019.9000667},
    abstract = {In recent years, the amount of traffic protected with Transport Layer Security (TLS) has significantly increased and new protocols such as HTTP/2 and QUIC further foster this emerging trend. However, protecting traffic with TLS has significant impacts on network entities. While the restrictions for middleboxes have been extensively studied, addressing the impact of TLS on clients and servers has been mostly neglected so far. Especially mobile clients in emerging 5G and IoT deployments suffer from significantly increased latency, traffic, and energy overheads when protecting traffic with TLS. In this paper, we address this emerging topic by thoroughly analyzing the impact of TLS on clients and servers and derive opportunities for significantly decreasing latency of TLS communication and downsizing TLS management traffic, thereby also reducing TLS-induced server load. We propose a protocol compatible redesign of TLS session management to use these opportunities and showcase their potential based on mobile device traffic and mobile web-browsing traces. These show promising potentials for latency improvements by up to 25.8% and energy savings of up to 26.3%.}
    }

  • J. Pennekamp, M. Henze, O. Hohlfeld, and A. Panchenko, “Hi Doppelgänger: Towards Detecting Manipulation in News Comments,” in Companion Proceedings of the 2019 World Wide Web Conference, Fourth Workshop on Computational Methods in Online Misbehavior (CyberSafety), 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    Public opinion manipulation is a serious threat to society, potentially influencing elections and the political situation even in established democracies. The prevalence of online media and the opportunity for users to express opinions in comments magnifies the problem. Governments, organizations, and companies can exploit this situation for biasing opinions. Typically, they deploy a large number of pseudonyms to create an impression of a crowd that supports specific opinions. Side channel information (such as IP addresses or identities of browsers) often allows a reliable detection of pseudonyms managed by a single person. However, while spoofing and anonymizing data that links these accounts is simple, a linking without is very challenging. In this paper, we evaluate whether stylometric features allow a detection of such doppelgängers within comment sections on news articles. To this end, we adapt a state-of-the-art doppelgängers detector to work on small texts (such as comments) and apply it on three popular news sites in two languages. Our results reveal that detecting potential doppelgängers based on linguistics is a promising approach even when no reliable side channel information is available. Preliminary results following an application in the wild show indications for doppelgängers in real world data sets.

    @inproceedings{PHHP19,
    author = {Pennekamp, Jan and Henze, Martin and Hohlfeld, Oliver and Panchenko, Andriy},
    title = {{Hi Doppelg{\"a}nger: Towards Detecting Manipulation in News Comments}},
    booktitle = {Companion Proceedings of the 2019 World Wide Web Conference, Fourth Workshop on Computational Methods in Online Misbehavior (CyberSafety)},
    month = {05},
    year = {2019},
    doi = {10.1145/3308560.3316496},
    abstract = {Public opinion manipulation is a serious threat to society, potentially influencing elections and the political situation even in established democracies. The prevalence of online media and the opportunity for users to express opinions in comments magnifies the problem. Governments, organizations, and companies can exploit this situation for biasing opinions. Typically, they deploy a large number of pseudonyms to create an impression of a crowd that supports specific opinions. Side channel information (such as IP addresses or identities of browsers) often allows a reliable detection of pseudonyms managed by a single person. However, while spoofing and anonymizing data that links these accounts is simple, a linking without is very challenging.
    In this paper, we evaluate whether stylometric features allow a detection of such doppelgängers within comment sections on news articles. To this end, we adapt a state-of-the-art doppelgängers detector to work on small texts (such as comments) and apply it on three popular news sites in two languages. Our results reveal that detecting potential doppelgängers based on linguistics is a promising approach even when no reliable side channel information is available. Preliminary results following an application in the wild show indications for doppelgängers in real world data sets.}
    }

  • J. Pennekamp, R. Glebke, M. Henze, T. Meisen, C. Quix, R. Hai, L. Gleim, P. Niemietz, M. Rudack, S. Knape, A. Epple, D. Trauth, U. Vroomen, T. Bergs, C. Brecher, A. Bührig-Polaczek, M. Jarke, and K. Wehrle, “Towards an Infrastructure Enabling the Internet of Production,” in Proceedings of the 2nd IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    New levels of cross-domain collaboration between manufacturing companies throughout the supply chain are anticipated to bring benefits to both suppliers and consumers of products. Enabling a fine-grained sharing and analysis of data among different stakeholders in an automated manner, such a vision of an Internet of Production (IoP) introduces demanding challenges to the communication, storage, and computation infrastructure in production environments. In this work, we present three example cases that would benefit from an IoP (a fine blanking line, a high pressure die casting process, and a connected job shop) and derive requirements that cannot be met by today’s infrastructure. In particular, we identify three orthogonal research objectives: (i) real-time control of tightly integrated production processes to offer seamless low-latency analysis and execution, (ii) storing and processing heterogeneous production data to support scalable data stream processing and storage, and (iii) secure privacy-aware collaboration in production to provide a basis for secure industrial collaboration. Based on a discussion of state-of-the-art approaches for these three objectives, we create a blueprint for an infrastructure acting as an enabler for an IoP.

    @inproceedings{PGH+19,
    author = {Pennekamp, Jan and Glebke, Ren{\'e} and Henze, Martin and Meisen, Tobias and Quix, Christoph and Hai, Rihan and Gleim, Lars and Niemietz, Philipp and Rudack, Maximilian and Knape, Simon and Epple, Alexander and Trauth, Daniel and Vroomen, Uwe and Bergs, Thomas and Brecher, Christian and B{\"u}hrig-Polaczek, Andreas and Jarke, Matthias and Wehrle, Klaus},
    title = {{Towards an Infrastructure Enabling the Internet of Production}},
    booktitle = {Proceedings of the 2nd IEEE International Conference on Industrial Cyber-Physical Systems (ICPS)},
    month = {05},
    year = {2019},
    doi = {10.1109/ICPHYS.2019.8780276},
    abstract = {New levels of cross-domain collaboration between manufacturing companies throughout the supply chain are anticipated to bring benefits to both suppliers and consumers of products. Enabling a fine-grained sharing and analysis of data among different stakeholders in an automated manner, such a vision of an Internet of Production (IoP) introduces demanding challenges to the communication, storage, and computation infrastructure in production environments. In this work, we present three example cases that would benefit from an IoP (a fine blanking line, a high pressure die casting process, and a connected job shop) and derive requirements that cannot be met by today's infrastructure. In particular, we identify three orthogonal research objectives: (i) real-time control of tightly integrated production processes to offer seamless low-latency analysis and execution, (ii) storing and processing heterogeneous production data to support scalable data stream processing and storage, and (iii) secure privacy-aware collaboration in production to provide a basis for secure industrial collaboration. Based on a discussion of state-of-the-art approaches for these three objectives, we create a blueprint for an infrastructure acting as an enabler for an IoP.}
    }

  • R. Glebke, M. Henze, K. Wehrle, P. Niemietz, D. Trauth, P. Mattfeld, and T. Bergs, “A Case for Integrated Data Processing in Large-Scale Cyber-Physical Systems,” in Proceedings of the 52nd Hawaii International Conference on System Sciences (HICSS), 2019.
    [BibTeX] [Abstract] [PDF] [DOI]

    Large-scale cyber-physical systems such as manufacturing lines generate vast amounts of data to guarantee precise control of their machinery. Visions such as the Industrial Internet of Things aim at making this data available also to computation systems outside the lines to increase productivity and product quality. However, rising amounts and complexities of data and control decisions push existing infrastructure for data transmission, storage, and processing to its limits. In this paper, we exemplarily study a fine blanking line which can produce up to 6.2 Gbit/s worth of data to showcase the extreme requirements found in modern manufacturing. We consequently propose integrated data processing which keeps inherently local and small-scale tasks close to the processes while at the same time centralizing tasks relying on more complex decision procedures and remote data sources. Our approach thus allows for both maintaining control of field-level processes and leveraging the benefits of “big data” applications.

    @inproceedings{GHW+19,
    author = {Glebke, Ren{\'e} and Henze, Martin and Wehrle, Klaus and Niemietz, Philipp and Trauth, Daniel and Mattfeld, Patrick and Bergs, Thomas},
    title = {{A Case for Integrated Data Processing in Large-Scale Cyber-Physical Systems}},
    booktitle = {Proceedings of the 52nd Hawaii International Conference on System Sciences (HICSS)},
    month = {01},
    year = {2019},
    doi = {10.24251/HICSS.2019.871},
    abstract = {Large-scale cyber-physical systems such as manufacturing lines generate vast amounts of data to guarantee precise control of their machinery. Visions such as the Industrial Internet of Things aim at making this data available also to computation systems outside the lines to increase productivity and product quality. However, rising amounts and complexities of data and control decisions push existing infrastructure for data transmission, storage, and processing to its limits. In this paper, we exemplarily study a fine blanking line which can produce up to 6.2 Gbit/s worth of data to showcase the extreme requirements found in modern manufacturing. We consequently propose integrated data processing which keeps inherently local and small-scale tasks close to the processes while at the same time centralizing tasks relying on more complex decision procedures and remote data sources. Our approach thus allows for both maintaining control of field-level processes and leveraging the benefits of "big data" applications.},
    }

2018

  • M. Henze, “Accounting for Privacy in the Cloud Computing Landscape,” PhD Thesis, RWTH Aachen University, Aachen, Germany, 2018.
    [BibTeX] [Abstract] [PDF]

    Cloud computing enables service operators to efficiently and flexibly utilize resources offered by third party providers instead of having to maintain their own infrastructure. As such, cloud computing offers many advantages over the traditional service delivery model, e.g., failure safety, scalability, cost savings, and a high ease of use. Not only service operators, but also their users benefit from these advantages. As a result, cloud computing has revolutionized service delivery and we observe a tremendous trend for moving services to the cloud. However, this trend of outsourcing services and data to the cloud is limited by serious privacy challenges as evidenced by recent security breaches and privacy incidents such as the global surveillance disclosures. These privacy challenges stem from the technical complexity and missing transparency of cloud computing, opaque legislation with respect to the jurisdiction that applies to users’ data, the inherent centrality of the cloud computing market, and missing control of users over the handling of their data. Overcoming these privacy challenges is key to enable corporate and private users to fully embrace the advantages of cloud computing and hence secure the success of the cloud computing paradigm. Indeed, we observe that cloud providers already account for selected privacy requirements, e.g., by opening special data centers in countries with strict data protection and privacy legislation. Likewise, researchers propose technical approaches to enforce certain privacy requirements either from the client side, e.g., using encryption, or from the service side, e.g., based on trusted hardware. Despite these ongoing efforts, the necessary technical means to fully account for privacy in the cloud computing landscape are still missing. 
In this dissertation, we approach the pressing problem of privacy in cloud computing from a different direction: Instead of focusing on single actors, we are convinced that overcoming the inherent privacy challenges of cloud computing requires cooperation between the various actors in the cloud computing landscape, i.e., users, service providers, and infrastructure providers. All these different actors have clear incentives to care for privacy and, with the contributions presented in this dissertation, we provide technical approaches that enable each of them to account for privacy. As our first contribution to support users in exercising their privacy, we raise awareness for their exposure to cloud services in the context of email services as well as smartphone apps and enable them to anonymously compare their cloud usage to their peers. With privacy requirements-aware cloud infrastructure as our second contribution, we realize user-specified per-data item privacy policies and enable infrastructure providers to adhere to them. We furthermore support service providers in building privacy-preserving cloud services for the Internet of Things in the context of our third contribution by enabling the transparent processing of protected data and by introducing a distributed architecture to secure the control over devices and networks. Finally, with our fourth contribution, we propose a decentralized cloud infrastructure that enables users who strongly distrust cloud providers to completely shift certain services away from the cloud by cooperating with other users. The contributions of this dissertation highlight that it is both promising and feasible to apply cooperation of different actors to strengthen users’ privacy and consequently enable more corporate and private users to benefit from cloud computing.

    @phdthesis{Hen18,
    author = {Henze, Martin},
    title = {{Accounting for Privacy in the Cloud Computing Landscape}},
    school = {RWTH Aachen University},
    address = {Aachen, Germany},
    month = {12},
    year = {2018},
    abstract = {Cloud computing enables service operators to efficiently and flexibly utilize resources offered by third party providers instead of having to maintain their own infrastructure. As such, cloud computing offers many advantages over the traditional service delivery model, e.g., failure safety, scalability, cost savings, and a high ease of use. Not only service operators, but also their users benefit from these advantages. As a result, cloud computing has revolutionized service delivery and we observe a tremendous trend for moving services to the cloud. However, this trend of outsourcing services and data to the cloud is limited by serious privacy challenges as evidenced by recent security breaches and privacy incidents such as the global surveillance disclosures. These privacy challenges stem from the technical complexity and missing transparency of cloud computing, opaque legislation with respect to the jurisdiction that applies to users' data, the inherent centrality of the cloud computing market, and missing control of users over the handling of their data.
    Overcoming these privacy challenges is key to enable corporate and private users to fully embrace the advantages of cloud computing and hence secure the success of the cloud computing paradigm. Indeed, we observe that cloud providers already account for selected privacy requirements, e.g., by opening special data centers in countries with strict data protection and privacy legislation. Likewise, researchers propose technical approaches to enforce certain privacy requirements either from the client side, e.g., using encryption, or from the service side, e.g., based on trusted hardware. Despite these ongoing efforts, the necessary technical means to fully account for privacy in the cloud computing landscape are still missing.
    In this dissertation, we approach the pressing problem of privacy in cloud computing from a different direction: Instead of focusing on single actors, we are convinced that overcoming the inherent privacy challenges of cloud computing requires cooperation between the various actors in the cloud computing landscape, i.e., users, service providers, and infrastructure providers. All these different actors have clear incentives to care for privacy and, with the contributions presented in this dissertation, we provide technical approaches that enable each of them to account for privacy.
    As our first contribution to support users in exercising their privacy, we raise awareness for their exposure to cloud services in the context of email services as well as smartphone apps and enable them to anonymously compare their cloud usage to their peers. With privacy requirements-aware cloud infrastructure as our second contribution, we realize user-specified per-data item privacy policies and enable infrastructure providers to adhere to them. We furthermore support service providers in building privacy-preserving cloud services for the Internet of Things in the context of our third contribution by enabling the transparent processing of protected data and by introducing a distributed architecture to secure the control over devices and networks. Finally, with our fourth contribution, we propose a decentralized cloud infrastructure that enables users who strongly distrust cloud providers to completely shift certain services away from the cloud by cooperating with other users.
    The contributions of this dissertation highlight that it is both promising and feasible to apply cooperation of different actors to strengthen users' privacy and consequently enable more corporate and private users to benefit from cloud computing.},
    }

  • J. Hiller, M. Henze, M. Serror, E. Wagner, J. N. Richter, and K. Wehrle, “Secure Low Latency Communication for Constrained Industrial IoT Scenarios,” in Proceedings of the 43rd IEEE Conference on Local Computer Networks (LCN), 2018.
    [BibTeX] [Abstract] [PDF] [DOI]

    The emerging Internet of Things (IoT) promises value-added services for private and business applications. However, especially the industrial IoT often faces tough communication latency boundaries, e.g., to react to production errors, realize human-robot interaction, or counter fluctuations in smart grids. Simultaneously, devices must apply security measures such as encryption and integrity protection to guard business secrets and prevent sabotage. As security processing requires significant time, the goals of secure communication and low latency contradict each other. Especially on constrained IoT devices, which are equipped with cheap, low-power processors, the overhead for security processing aggregates to a primary source of latency. We show that antedated encryption and data authentication with templates enable IoT devices to meet both security and low latency requirements. These mechanisms offload significant security processing to a preprocessing phase and thus decrease latency during actual transmission by up to 75.9 %. Thereby, they work for well-established security-proven standard ciphers.

    @inproceedings{HHS+18,
    author = {Hiller, Jens and Henze, Martin and Serror, Martin and Wagner, Eric and Richter, Jan Niklas and Wehrle, Klaus},
    title = {{Secure Low Latency Communication for Constrained Industrial IoT Scenarios}},
    booktitle = {Proceedings of the 43rd IEEE Conference on Local Computer Networks (LCN)},
    month = {10},
    year = {2018},
    doi = {10.1109/LCN.2018.8638027},
    abstract = {The emerging Internet of Things (IoT) promises value-added services for private and business applications. However, especially the industrial IoT often faces tough communication latency boundaries, e.g., to react to production errors, realize human-robot interaction, or counter fluctuations in smart grids. Simultaneously, devices must apply security measures such as encryption and integrity protection to guard business secrets and prevent sabotage. As security processing requires significant time, the goals of secure communication and low latency contradict each other. Especially on constrained IoT devices, which are equipped with cheap, low-power processors, the overhead for security processing aggregates to a primary source of latency. We show that antedated encryption and data authentication with templates enable IoT devices to meet both security and low latency requirements. These mechanisms offload significant security processing to a preprocessing phase and thus decrease latency during actual transmission by up to 75.9 %. Thereby, they work for well-established security-proven standard ciphers.},
    }

  • M. Serror, M. Henze, S. Hack, M. Schuba, and K. Wehrle, “Towards In-Network Security for Smart Homes,” in Proceedings of the 2nd International Workshop on Security and Forensics of IoT (IoT-SECFOR), 2018.
    [BibTeX] [Abstract] [PDF] [DOI]

    The proliferation of the Internet of Things (IoT) in the context of smart homes entails new security risks threatening the privacy and safety of end users. In this paper, we explore the design space of in-network security for smart home networks, which automatically complements existing security mechanisms with a rule-based approach, i.e., every IoT device provides a specification of the required communication to fulfill the desired services. In our approach, the home router as the central network component then enforces these communication rules with traffic filtering and anomaly detection to dynamically react to threats. We show that in-network security can be easily integrated into smart home networks based on existing approaches and thus provides additional protection for heterogeneous IoT devices and protocols. Furthermore, in-network security relieves users of difficult home network configurations, since it automatically adapts to the connected devices and services.

    @inproceedings{SHH+18,
    author = {Serror, Martin and Henze, Martin and Hack, Sacha and Schuba, Marko and Wehrle, Klaus},
    title = {{Towards In-Network Security for Smart Homes}},
    booktitle = {Proceedings of the 2nd International Workshop on Security and Forensics of IoT (IoT-SECFOR)},
    month = {08},
    year = {2018},
    doi = {10.1145/3230833.3232802},
    abstract = {The proliferation of the Internet of Things (IoT) in the context of smart homes entails new security risks threatening the privacy and safety of end users. In this paper, we explore the design space of in-network security for smart home networks, which automatically complements existing security mechanisms with a rule-based approach, i.e., every IoT device provides a specification of the required communication to fulfill the desired services. In our approach, the home router as the central network component then enforces these communication rules with traffic filtering and anomaly detection to dynamically react to threats. We show that in-network security can be easily integrated into smart home networks based on existing approaches and thus provides additional protection for heterogeneous IoT devices and protocols. Furthermore, in-network security relieves users of difficult home network configurations, since it automatically adapts to the connected devices and services.},
    }

  • R. Matzutt, M. Henze, J. H. Ziegeldorf, J. Hiller, and K. Wehrle, “Thwarting Unwanted Blockchain Content Insertion,” in 2018 IEEE Workshop on Blockchain Technologies and Applications (BTA), 2018.
    [BibTeX] [Abstract] [PDF] [DOI]

    Since the introduction of Bitcoin in 2008, blockchain systems have seen an enormous increase in adoption. By providing a persistent, distributed, and append-only ledger, blockchains enable numerous applications such as distributed consensus, robustness against equivocation, and smart contracts. However, recent studies show that blockchain systems such as Bitcoin can be (mis)used to store arbitrary content. This has already been used to store arguably objectionable content on Bitcoin’s blockchain. Already single instances of clearly objectionable or even illegal content can put the whole system at risk by making its node operators culpable. To overcome this imminent risk, we survey and discuss the design space of countermeasures against the insertion of such objectionable content. Our analysis shows a wide spectrum of potential countermeasures, which are often combinable for increased efficiency. First, we investigate special-purpose content detectors as an ad hoc mitigation. As they turn out to be easily evadable, we also investigate content-agnostic countermeasures. We find that mandatory minimum fees as well as mitigation of transaction manipulability via identifier commitments significantly raise the bar for inserting harmful content into a blockchain.

    @inproceedings{MHZ+18,
    author = {Matzutt, Roman and Henze, Martin and Ziegeldorf, Jan Henrik and Hiller, Jens and Wehrle, Klaus},
    title = {{Thwarting Unwanted Blockchain Content Insertion}},
    booktitle = {2018 IEEE Workshop on Blockchain Technologies and Applications (BTA)},
    month = {04},
    year = {2018},
    doi = {10.1109/IC2E.2018.00070},
    abstract = {Since the introduction of Bitcoin in 2008, blockchain systems have seen an enormous increase in adoption. By providing a persistent, distributed, and append-only ledger, blockchains enable numerous applications such as distributed consensus, robustness against equivocation, and smart contracts. However, recent studies show that blockchain systems such as Bitcoin can be (mis)used to store arbitrary content. This has already been used to store arguably objectionable content on Bitcoin's blockchain. Already single instances of clearly objectionable or even illegal content can put the whole system at risk by making its node operators culpable. To overcome this imminent risk, we survey and discuss the design space of countermeasures against the insertion of such objectionable content. Our analysis shows a wide spectrum of potential countermeasures, which are often combinable for increased efficiency. First, we investigate special-purpose content detectors as an ad hoc mitigation. As they turn out to be easily evadable, we also investigate content-agnostic countermeasures. We find that mandatory minimum fees as well as mitigation of transaction manipulability via identifier commitments significantly raise the bar for inserting harmful content into a blockchain.},
    }

  • J. H. Ziegeldorf, R. Matzutt, M. Henze, F. Grossmann, and K. Wehrle, “Secure and Anonymous Decentralized Bitcoin Mixing,” Future Generation Computer Systems (FGCS), vol. 80, 2018.
    [BibTeX] [Abstract] [PDF] [DOI]

    The decentralized digital currency Bitcoin presents an anonymous alternative to the centralized banking system and indeed enjoys widespread and increasing adoption. Recent works, however, show how users can be reidentified and their payments linked based on Bitcoin’s most central element, the blockchain, a public ledger of all transactions. Thus, many regard Bitcoin’s central promise of financial privacy as broken. In this paper, we propose CoinParty, an efficient decentralized mixing service that allows users to reestablish their financial privacy in Bitcoin and related cryptocurrencies. CoinParty, through a novel combination of decryption mixnets with threshold signatures, takes a unique place in the design space of mixing services, combining the advantages of previously proposed centralized and decentralized mixing services in one system. Our prototype implementation of CoinParty scales to large numbers of users and achieves anonymity sets by orders of magnitude higher than related work as we quantify by analyzing transactions in the actual Bitcoin blockchain. CoinParty can easily be deployed by any individual group of users, i.e., independent of any third parties, or provided as a commercial or voluntary service, e.g., as a community service by privacy-aware organizations.

    @article{ZMH+16,
    author = {Ziegeldorf, Jan Henrik and Matzutt, Roman and Henze, Martin and Grossmann, Fred and Wehrle, Klaus},
    journal = {Future Generation Computer Systems (FGCS)},
    title = {{Secure and Anonymous Decentralized Bitcoin Mixing}},
    volume = {80},
    month = {03},
    year = {2018},
    doi = {10.1016/j.future.2016.05.018},
    abstract = {The decentralized digital currency Bitcoin presents an anonymous alternative to the centralized banking system and indeed enjoys widespread and increasing adoption. Recent works, however, show how users can be reidentified and their payments linked based on Bitcoin's most central element, the blockchain, a public ledger of all transactions. Thus, many regard Bitcoin's central promise of financial privacy as broken.
    In this paper, we propose CoinParty, an efficient decentralized mixing service that allows users to reestablish their financial privacy in Bitcoin and related cryptocurrencies. CoinParty, through a novel combination of decryption mixnets with threshold signatures, takes a unique place in the design space of mixing services, combining the advantages of previously proposed centralized and decentralized mixing services in one system. Our prototype implementation of CoinParty scales to large numbers of users and achieves anonymity sets by orders of magnitude higher than related work as we quantify by analyzing transactions in the actual Bitcoin blockchain. CoinParty can easily be deployed by any individual group of users, i.e., independent of any third parties, or provided as a commercial or voluntary service, e.g., as a community service by privacy-aware organizations.},
    }

  • R. Matzutt, J. Hiller, M. Henze, J. H. Ziegeldorf, D. Müllmann, O. Hohlfeld, and K. Wehrle, “A Quantitative Analysis of the Impact of Arbitrary Blockchain Content on Bitcoin,” in Proceedings of the 22nd International Conference on Financial Cryptography and Data Security (FC), 2018.
    [BibTeX] [Abstract] [PDF] [DOI]

    Blockchains primarily enable credible accounting of digital events, e.g., money transfers in cryptocurrencies. However, beyond this original purpose, blockchains also irrevocably record arbitrary data, ranging from short messages to pictures. This does not come without risk for users as each participant has to locally replicate the complete blockchain, particularly including potentially harmful content. We provide the first systematic analysis of the benefits and threats of arbitrary blockchain content. Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal. Based on these insights, we conduct a thorough quantitative and qualitative analysis of unintended content on Bitcoin’s blockchain. Although most data originates from benign extensions to Bitcoin’s protocol, our analysis reveals more than 1600 files on the blockchain, over 99 % of which are texts or images. Among these files there is clearly objectionable content such as links to child pornography, which is distributed to all Bitcoin participants. With our analysis, we thus highlight the importance for future blockchain designs to address the possibility of unintended data insertion and protect blockchain users accordingly.

    @inproceedings{MHH+18,
    author = {Matzutt, Roman and Hiller, Jens and Henze, Martin and Ziegeldorf, Jan Henrik and M{\"u}llmann, Dirk and Hohlfeld, Oliver and Wehrle, Klaus},
    title = {{A Quantitative Analysis of the Impact of Arbitrary Blockchain Content on Bitcoin}},
    booktitle = {Proceedings of the 22nd International Conference on Financial Cryptography and Data Security (FC)},
    month = {02},
    year = {2018},
    doi = {10.1007/978-3-662-58387-6_23},
    abstract = {Blockchains primarily enable credible accounting of digital events, e.g., money transfers in cryptocurrencies. However, beyond this original purpose, blockchains also irrevocably record arbitrary data, ranging from short messages to pictures. This does not come without risk for users as each participant has to locally replicate the complete blockchain, particularly including potentially harmful content. We provide the first systematic analysis of the benefits and threats of arbitrary blockchain content. Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal. Based on these insights, we conduct a thorough quantitative and qualitative analysis of unintended content on Bitcoin's blockchain. Although most data originates from benign extensions to Bitcoin's protocol, our analysis reveals more than 1600 files on the blockchain, over 99 % of which are texts or images. Among these files there is clearly objectionable content such as links to child pornography, which is distributed to all Bitcoin participants. With our analysis, we thus highlight the importance for future blockchain designs to address the possibility of unintended data insertion and protect blockchain users accordingly.},
    }

2017

  • J. Pennekamp, M. Henze, and K. Wehrle, “A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead,” Pervasive and Mobile Computing, vol. 42, 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes evermore important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users’ privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications’ permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.

    @article{PHW17,
    author = {Pennekamp, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead}},
    journal = {Pervasive and Mobile Computing},
    volume = {42},
    month = {12},
    year = {2017},
    doi = {10.1016/j.pmcj.2017.09.005},
    abstract = {With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes evermore important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users' privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications' permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.},
    }

  • M. Henze, R. Inaba, I. B. Fink, and J. H. Ziegeldorf, “Privacy-preserving Comparison of Cloud Exposure Induced by Mobile Apps,” in Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous) – Poster Session, 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    The increasing utilization of cloud services by mobile apps on smartphones leads to serious privacy concerns. While users can quantify the cloud usage of their apps, they often cannot relate to involved privacy risks. In this paper, we apply comparison-based privacy, a behavioral nudge, to the cloud usage of mobile apps. This enables users to compare their personal app-induced cloud exposure to that of their peers to discover potential privacy risks from deviation from normal usage behavior. Since cloud usage statistics are sensitive, we protect them with k-anonymity and differential privacy.

    @inproceedings{HIFZ17,
    author = {Henze, Martin and Inaba, Ritsuma and Fink, Ina Berenice and Ziegeldorf, Jan Henrik},
    title = {{Privacy-preserving Comparison of Cloud Exposure Induced by Mobile Apps}},
    booktitle = {Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous) - Poster Session},
    month = {11},
    year = {2017},
    doi = {10.1145/3144457.3144511},
    abstract = {The increasing utilization of cloud services by mobile apps on smartphones leads to serious privacy concerns. While users can quantify the cloud usage of their apps, they often cannot relate to involved privacy risks. In this paper, we apply comparison-based privacy, a behavioral nudge, to the cloud usage of mobile apps. This enables users to compare their personal app-induced cloud exposure to that of their peers to discover potential privacy risks from deviation from normal usage behavior. Since cloud usage statistics are sensitive, we protect them with k-anonymity and differential privacy.},
    }

  • M. Henze, J. Pennekamp, D. Hellmanns, E. Mühmer, J. H. Ziegeldorf, A. Drichel, and K. Wehrle, “CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps,” in Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users’ contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.

    @inproceedings{HPH+17,
    author = {Henze, Martin and Pennekamp, Jan and Hellmanns, David and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and Drichel, Arthur and Wehrle, Klaus},
    title = {{CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps}},
    booktitle = {Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous)},
    month = {11},
    year = {2017},
    doi = {10.1145/3144457.3144471},
    abstract = {Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users' contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.},
    }

  • M. Henze, J. Hiller, R. Hummen, R. Matzutt, K. Wehrle, and J. H. Ziegeldorf, “Network Security and Privacy for Cyber-Physical Systems,” in Security and Privacy in Cyber-Physical Systems: Foundations, Principles, and Applications, H. Song, G. A. Fink, and S. Jeschke, Eds., Wiley-IEEE Press, 2017.
    [BibTeX] [Abstract] [DOI]

    Cyber-physical systems (CPSs) are expected to collect, process, and exchange data that regularly contain sensitive information. CPSs may, for example, involve a person in the privacy of her home or convey business secrets in production plants. Hence, confidentiality, integrity, and authenticity are of utmost importance for secure and privacy-preserving CPSs. In this chapter, we present and discuss emerging security and privacy issues in CPSs and highlight challenges as well as opportunities for building and operating these systems in a secure and privacy-preserving manner. We focus on issues that are unique to CPSs, for example, resulting from the resource constraints of the involved devices and networks, the limited configurability of these devices, and the expected ubiquity of the data collection of CPSs. The covered issues impact the security and privacy of CPSs from local networks to Cloud-based environments.

    @incollection{HHH+17,
    author = {Henze, Martin and Hiller, Jens and Hummen, Ren{\'e} and Matzutt, Roman and Wehrle, Klaus and Ziegeldorf, Jan Henrik},
    title = {{Network Security and Privacy for Cyber-Physical Systems}},
    booktitle = {Security and Privacy in Cyber-Physical Systems: Foundations, Principles, and Applications},
    editor = {Song, Houbing and Fink, Glenn A. and Jeschke, Sabina},
    month = {11},
    year = {2017},
    publisher = {Wiley-IEEE Press},
    doi = {10.1002/9781119226079.ch2},
    abstract = {Cyber-physical systems (CPSs) are expected to collect, process, and exchange data that regularly contain sensitive information. CPSs may, for example, involve a person in the privacy of her home or convey business secrets in production plants. Hence, confidentiality, integrity, and authenticity are of utmost importance for secure and privacy-preserving CPSs. In this chapter, we present and discuss emerging security and privacy issues in CPSs and highlight challenges as well as opportunities for building and operating these systems in a secure and privacy-preserving manner. We focus on issues that are unique to CPSs, for example, resulting from the resource constraints of the involved devices and networks, the limited configurability of these devices, and the expected ubiquity of the data collection of CPSs. The covered issues impact the security and privacy of CPSs from local networks to Cloud-based environments.},
    }

  • A. Panchenko, A. Mitseva, M. Henze, F. Lanze, K. Wehrle, and T. Engel, “Analysis of Fingerprinting Techniques for Tor Hidden Services,” in Proceedings of the 16th Workshop on Privacy in the Electronic Society (WPES), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing traffic patterns such as packet sizes, their order, and direction. Although it has been shown that no existing fingerprinting method scales in Tor when applied in realistic settings, the case of Tor hidden (onion) services has not yet been considered in such scenarios. Recent works claim the feasibility of the attack in the context of hidden services using limited datasets. In this work, we propose a novel two-phase approach for fingerprinting hidden services that does not rely on malicious Tor nodes. In our attack, the adversary merely needs to be on the link between the client and the first anonymization node. In the first phase, we detect a connection to a hidden service. Once a hidden service communication is detected, we determine the visited hidden service (phase two) within the hidden service universe. To estimate the scalability of our and other existing methods, we constructed the most extensive and realistic dataset of existing hidden services. Using this dataset, we show the feasibility of phase one of the attack and establish that phase two does not scale using existing classifiers. We present a comprehensive comparison of the performance and limits of the state-of-the-art website fingerprinting attacks with respect to Tor hidden services.

    @inproceedings{PMH+17,
    author = {Panchenko, Andriy and Mitseva, Asya and Henze, Martin and Lanze, Fabian and Wehrle, Klaus and Engel, Thomas},
    title = {{Analysis of Fingerprinting Techniques for Tor Hidden Services}},
    booktitle = {Proceedings of the 16th Workshop on Privacy in the Electronic Society (WPES)},
    month = {10},
    year = {2017},
    doi = {10.1145/3139550.3139564},
    abstract = {The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing traffic patterns such as packet sizes, their order, and direction. Although it has been shown that no existing fingerprinting method scales in Tor when applied in realistic settings, the case of Tor hidden (onion) services has not yet been considered in such scenarios. Recent works claim the feasibility of the attack in the context of hidden services using limited datasets.
    In this work, we propose a novel two-phase approach for fingerprinting hidden services that does not rely on malicious Tor nodes. In our attack, the adversary merely needs to be on the link between the client and the first anonymization node. In the first phase, we detect a connection to a hidden service. Once a hidden service communication is detected, we determine the visited hidden service (phase two) within the hidden service universe. To estimate the scalability of our and other existing methods, we constructed the most extensive and realistic dataset of existing hidden services. Using this dataset, we show the feasibility of phase one of the attack and establish that phase two does not scale using existing classifiers. We present a comprehensive comparison of the performance and limits of the state-of-the-art website fingerprinting attacks with respect to Tor hidden services.},
    }

  • M. Henze, B. Wolters, R. Matzutt, T. Zimmermann, and K. Wehrle, “Distributed Configuration, Authorization and Management in the Cloud-based Internet of Things,” in 2017 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Network-based deployments within the Internet of Things increasingly rely on the cloud-controlled federation of individual networks to configure, authorize, and manage devices across network borders. While this approach allows the convenient and reliable interconnection of networks, it raises severe security and safety concerns. These concerns range from a curious cloud provider accessing confidential data to a malicious cloud provider being able to physically control safety-critical devices. To overcome these concerns, we present D-CAM, which enables secure and distributed configuration, authorization, and management across network borders in the cloud-based Internet of Things. With D-CAM, we constrain the cloud to act as highly available and scalable storage for control messages. Consequently, we achieve reliable network control across network borders and strong security guarantees. Our evaluation confirms that D-CAM adds only a modest overhead and can scale to large networks.

    @inproceedings{HWM+17,
    author = {Henze, Martin and Wolters, Benedikt and Matzutt, Roman and Zimmermann, Torsten and Wehrle, Klaus},
    title = {{Distributed Configuration, Authorization and Management in the Cloud-based Internet of Things}},
    booktitle = {2017 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)},
    month = {08},
    year = {2017},
    doi = {10.1109/Trustcom/BigDataSE/ICESS.2017.236},
    abstract = {Network-based deployments within the Internet of Things increasingly rely on the cloud-controlled federation of individual networks to configure, authorize, and manage devices across network borders. While this approach allows the convenient and reliable interconnection of networks, it raises severe security and safety concerns. These concerns range from a curious cloud provider accessing confidential data to a malicious cloud provider being able to physically control safety-critical devices. To overcome these concerns, we present D-CAM, which enables secure and distributed configuration, authorization, and management across network borders in the cloud-based Internet of Things. With D-CAM, we constrain the cloud to act as highly available and scalable storage for control messages. Consequently, we achieve reliable network control across network borders and strong security guarantees. Our evaluation confirms that D-CAM adds only a modest overhead and can scale to large networks.},
    }

  • J. H. Ziegeldorf, J. Pennekamp, D. Hellmanns, F. Schwinger, I. Kunze, M. Henze, J. Hiller, R. Matzutt, and K. Wehrle, “BLOOM: BLoom filter based oblivious outsourced matchings,” BMC Medical Genomics, vol. 10, iss. Suppl 2, 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose Fhe-Bloom and Phe-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. Fhe-Bloom is fully secure in the semi-honest model while Phe-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while Phe-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, Fhe-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, Phe-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.

    @article{ZPH+17,
    author = {Ziegeldorf, Jan Henrik and Pennekamp, Jan and Hellmanns, David and Schwinger, Felix and Kunze, Ike and Henze, Martin and Hiller, Jens and Matzutt, Roman and Wehrle, Klaus},
    title = {{BLOOM: BLoom filter based oblivious outsourced matchings}},
    journal = {BMC Medical Genomics},
    volume = {10},
    number = {Suppl 2},
    month = {07},
    year = {2017},
    doi = {10.1186/s12920-017-0277-y},
    abstract = {Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations.
    We propose Fhe-Bloom and Phe-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. Fhe-Bloom is fully secure in the semi-honest model while Phe-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance.
    We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while Phe-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries.
    Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, Fhe-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, Phe-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.},
    }

  • M. Henze, M. P. Sanford, and O. Hohlfeld, “Veiled in Clouds? Assessing the Prevalence of Cloud Computing in the Email Landscape,” in 2017 Network Traffic Measurement and Analysis Conference (TMA), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    The ongoing adoption of cloud-based email services – mainly run by few operators – transforms the largely decentralized email infrastructure into a more centralized one. Yet, little empirical knowledge on this transition and its implications exists. To address this gap, we assess the prevalence and exposure of Internet users to cloud-based email in a measurement study. In a first step, we study the email infrastructure and detect SMTP servers running in the cloud by analyzing all 154M .com/.net/.org domains for cloud usage. Informed by this infrastructure assessment, we then study the prevalence of cloud-based SMTP services among actual email exchanges. Here, we analyze 31M exchanged emails, ranging from public email archives to the personal emails of 20 users. Our results show that as of today, 13% to 25% of received emails utilize cloud services and 30% to 70% of this cloud usage is invisible for users.

    @inproceedings{HSH17,
    author = {Henze, Martin and Sanford, Mary Peyton and Hohlfeld, Oliver},
    title = {{Veiled in Clouds? Assessing the Prevalence of Cloud Computing in the Email Landscape}},
    booktitle = {2017 Network Traffic Measurement and Analysis Conference (TMA)},
    month = {06},
    year = {2017},
    doi = {10.23919/TMA.2017.8002910},
    abstract = {The ongoing adoption of cloud-based email services - mainly run by few operators - transforms the largely decentralized email infrastructure into a more centralized one. Yet, little empirical knowledge on this transition and its implications exists. To address this gap, we assess the prevalence and exposure of Internet users to cloud-based email in a measurement study. In a first step, we study the email infrastructure and detect SMTP servers running in the cloud by analyzing all 154M .com/.net/.org domains for cloud usage. Informed by this infrastructure assessment, we then study the prevalence of cloud-based SMTP services among actual email exchanges. Here, we analyze 31M exchanged emails, ranging from public email archives to the personal emails of 20 users. Our results show that as of today, 13% to 25% of received emails utilize cloud services and 30% to 70% of this cloud usage is invisible for users.},
    }

  • M. Henze, R. Matzutt, J. Hiller, E. Mühmer, J. H. Ziegeldorf, J. van der Giet, and K. Wehrle, “Practical Data Compliance for Cloud Storage,” in 2017 IEEE International Conference on Cloud Engineering (IC2E), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Despite their increasing proliferation and technical variety, existing cloud storage technologies by design lack support for enforcing compliance with regulatory, organizational, or contractual data handling requirements. However, with legislation responding to rising privacy concerns, this becomes a crucial technical capability for cloud storage systems. In this paper, we introduce PRADA, a practical approach to enforce data compliance in key-value based cloud storage systems. To this end, PRADA introduces a transparent data handling layer which enables clients to specify data handling requirements and provides operators with the technical means to adhere to them. The evaluation of our prototype shows that the modest overheads for supporting data handling requirements in cloud storage systems are practical for real-world deployments.

    @inproceedings{HMH+17,
    author = {Henze, Martin and Matzutt, Roman and Hiller, Jens and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and van der Giet, Johannes and Wehrle, Klaus},
    title = {{Practical Data Compliance for Cloud Storage}},
    booktitle = {2017 IEEE International Conference on Cloud Engineering (IC2E)},
    month = {04},
    year = {2017},
    doi = {10.1109/IC2E.2017.32},
    abstract = {Despite their increasing proliferation and technical variety, existing cloud storage technologies by design lack support for enforcing compliance with regulatory, organizational, or contractual data handling requirements. However, with legislation responding to rising privacy concerns, this becomes a crucial technical capability for cloud storage systems. In this paper, we introduce PRADA, a practical approach to enforce data compliance in key-value based cloud storage systems. To this end, PRADA introduces a transparent data handling layer which enables clients to specify data handling requirements and provides operators with the technical means to adhere to them. The evaluation of our prototype shows that the modest overheads for supporting data handling requirements in cloud storage systems are practical for real-world deployments.},
    }

  • J. H. Ziegeldorf, J. Metzke, J. Rüth, M. Henze, and K. Wehrle, “Privacy-Preserving HMM Forward Computation,” in The 7th ACM Conference on Data and Application Security and Privacy (CODASPY), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    In many areas such as bioinformatics, pattern recognition, and signal processing, Hidden Markov Models (HMMs) have become an indispensable statistical tool. A fundamental building block for these applications is the Forward algorithm which computes the likelihood to observe a given sequence of emissions for a given HMM. The classical Forward algorithm requires that one party holds both the model and observation sequences. However, we observe for many emerging applications and services that the models and observation sequences are held by different parties who are not able to share their information due to applicable data protection legislation or due to concerns over intellectual property and privacy. This renders the application of HMMs infeasible. In this paper, we show how to resolve this evident conflict of interests using secure two-party computation. Concretely, we propose Priward which enables two mutually untrusting parties to compute the Forward algorithm securely, i.e., without requiring either party to share her sensitive inputs with the other or any third party. The evaluation of our implementation of Priward shows that our solution is efficient, accurate, and outperforms related works by a factor of 4 to 126. To highlight the applicability of our approach in real-world deployments, we combine Priward with the widely used HMMER biosequence analysis framework and show how to analyze real genome sequences in a privacy-preserving manner.

    @inproceedings{ZMR+17,
    author = {Ziegeldorf, Jan Henrik and Metzke, Jan and R{\"u}th, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{Privacy-Preserving HMM Forward Computation}},
    booktitle = {The 7th ACM Conference on Data and Application Security and Privacy (CODASPY)},
    month = {03},
    year = {2017},
    doi = {10.1145/3029806.3029816},
    abstract = {In many areas such as bioinformatics, pattern recognition, and signal processing, Hidden Markov Models (HMMs) have become an indispensable statistical tool. A fundamental building block for these applications is the Forward algorithm which computes the likelihood to observe a given sequence of emissions for a given HMM. The classical Forward algorithm requires that one party holds both the model and observation sequences. However, we observe for many emerging applications and services that the models and observation sequences are held by different parties who are not able to share their information due to applicable data protection legislation or due to concerns over intellectual property and privacy. This renders the application of HMMs infeasible. In this paper, we show how to resolve this evident conflict of interests using secure two-party computation. Concretely, we propose Priward which enables two mutually untrusting parties to compute the Forward algorithm securely, i.e., without requiring either party to share her sensitive inputs with the other or any third party. The evaluation of our implementation of Priward shows that our solution is efficient, accurate, and outperforms related works by a factor of 4 to 126. To highlight the applicability of our approach in real-world deployments, we combine Priward with the widely used HMMER biosequence analysis framework and show how to analyze real genome sequences in a privacy-preserving manner.},
    }

  • J. H. Ziegeldorf, M. Henze, J. Bavendiek, and K. Wehrle, “TraceMixer: Privacy-Preserving Crowd-Sensing sans Trusted Third Party,” in 2017 Wireless On-demand Network Systems and Services Conference (WONS), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Crowd-sensing promises cheap and easy large scale data collection by tapping into the sensing and processing capabilities of smart phone users. However, the vast amount of fine-grained location data collected raises serious privacy concerns among potential contributors. In this paper, we argue that crowd-sensing has unique requirements w.r.t. privacy and data utility which renders existing protection mechanisms infeasible. We hence propose TraceMixer, a novel location privacy protection mechanism tailored to the special requirements in crowd-sensing. TraceMixer builds upon the well-studied concept of mix zones to provide trajectory privacy while achieving high spatial accuracy. First in this line of research, TraceMixer applies secure two-party computation technologies to realize a trustless architecture that does not require participants to share locations with anyone in clear. We evaluate TraceMixer on real-world datasets to show the feasibility of our approach in terms of privacy, utility, and performance. Finally, we demonstrate the applicability of TraceMixer in a real-world crowd-sensing campaign.

    @inproceedings{ZHBW17,
    author = {Ziegeldorf, Jan Henrik and Henze, Martin and Bavendiek, Jens and Wehrle, Klaus},
    title = {{TraceMixer: Privacy-Preserving Crowd-Sensing sans Trusted Third Party}},
    booktitle = {2017 Wireless On-demand Network Systems and Services Conference (WONS)},
    month = {02},
    year = {2017},
    doi = {10.1109/WONS.2017.7888771},
    abstract = {Crowd-sensing promises cheap and easy large scale data collection by tapping into the sensing and processing capabilities of smart phone users. However, the vast amount of fine-grained location data collected raises serious privacy concerns among potential contributors. In this paper, we argue that crowd-sensing has unique requirements w.r.t. privacy and data utility which renders existing protection mechanisms infeasible. We hence propose TraceMixer, a novel location privacy protection mechanism tailored to the special requirements in crowd-sensing. TraceMixer builds upon the well-studied concept of mix zones to provide trajectory privacy while achieving high spatial accuracy. First in this line of research, TraceMixer applies secure two-party computation technologies to realize a trustless architecture that does not require participants to share locations with anyone in clear. We evaluate TraceMixer on real-world datasets to show the feasibility of our approach in terms of privacy, utility, and performance. Finally, we demonstrate the applicability of TraceMixer in a real-world crowd-sensing campaign.},
    }

2016

  • M. Henze, D. Kerpen, J. Hiller, M. Eggert, D. Hellmanns, E. Mühmer, O. Renuli, H. Maier, C. Stüble, R. Häußling, and K. Wehrle, “Towards Transparent Information on Individual Cloud Service Usage,” in 2016 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Cloud computing allows developers of mobile apps to overcome limited computing, storage, and power resources of modern smartphones. Besides these huge advantages, the hidden utilization of cloud services by mobile apps leads to severe privacy concerns. To overcome these concerns and allow users and companies to properly assess the risks of hidden cloud usage, it is necessary to provide transparency over the cloud services utilized by smartphone apps. In this paper, we present our ongoing work on TRINICS to provide transparent information on individual cloud service usage. To this end, we analyze network traffic of smartphone apps with the goal to detect and uncover cloud usage. We present the resulting statistics on cloud usage to the user and put these numbers into context through anonymous comparison with users’ peer groups (i.e., users with similar sociodemographic background and interests). By doing so, we enable users to make an informed decision on suitable means for sufficient self data protection for their future use of apps and cloud services.

    @inproceedings{HKH+16,
    author = {Henze, Martin and Kerpen, Daniel and Hiller, Jens and Eggert, Michael and Hellmanns, David and M{\"u}hmer, Erik and Renuli, Oussama and Maier, Henning and St{\"u}ble, Christian and H{\"a}u{\ss}ling, Roger and Wehrle, Klaus},
    title = {{Towards Transparent Information on Individual Cloud Service Usage}},
    booktitle = {2016 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)},
    month = {12},
    year = {2016},
    doi = {10.1109/CloudCom.2016.0064},
    abstract = {Cloud computing allows developers of mobile apps to overcome limited computing, storage, and power resources of modern smartphones. Besides these huge advantages, the hidden utilization of cloud services by mobile apps leads to severe privacy concerns. To overcome these concerns and allow users and companies to properly assess the risks of hidden cloud usage, it is necessary to provide transparency over the cloud services utilized by smartphone apps. In this paper, we present our ongoing work on TRINICS to provide transparent information on individual cloud service usage. To this end, we analyze network traffic of smartphone apps with the goal to detect and uncover cloud usage. We present the resulting statistics on cloud usage to the user and put these numbers into context through anonymous comparison with users' peer groups (i.e., users with similar sociodemographic background and interests). By doing so, we enable users to make an informed decision on suitable means for sufficient self data protection for their future use of apps and cloud services.},
    }

  • M. Henze, J. Hiller, S. Schmerling, J. H. Ziegeldorf, and K. Wehrle, “CPPL: Compact Privacy Policy Language,” in Proceedings of the 15th Workshop on Privacy in the Electronic Society (WPES), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Recent technology shifts such as cloud computing, the Internet of Things, and big data lead to a significant transfer of sensitive data out of trusted edge networks. To counter resulting privacy concerns, we must ensure that this sensitive data is not inadvertently forwarded to third-parties, used for unintended purposes, or handled and stored in violation of legal requirements. Related work proposes to solve this challenge by annotating data with privacy policies before data leaves the control sphere of its owner. However, we find that existing privacy policy languages are either not flexible enough or require excessive processing, storage, or bandwidth resources which prevents their widespread deployment. To fill this gap, we propose CPPL, a Compact Privacy Policy Language which compresses privacy policies by taking advantage of flexibly specifiable domain knowledge. Our evaluation shows that CPPL reduces policy sizes by two orders of magnitude compared to related work and can check several thousand of policies per second. This allows for individual per-data item policies in the context of cloud computing, the Internet of Things, and big data.

    @inproceedings{HHS+16,
    author = {Henze, Martin and Hiller, Jens and Schmerling, Sascha and Ziegeldorf, Jan Henrik and Wehrle, Klaus},
    title = {{CPPL: Compact Privacy Policy Language}},
    booktitle = {Proceedings of the 15th Workshop on Privacy in the Electronic Society (WPES)},
    month = {10},
    year = {2016},
    doi = {10.1145/2994620.2994627},
    abstract = {Recent technology shifts such as cloud computing, the Internet of Things, and big data lead to a significant transfer of sensitive data out of trusted edge networks. To counter resulting privacy concerns, we must ensure that this sensitive data is not inadvertently forwarded to third-parties, used for unintended purposes, or handled and stored in violation of legal requirements. Related work proposes to solve this challenge by annotating data with privacy policies before data leaves the control sphere of its owner. However, we find that existing privacy policy languages are either not flexible enough or require excessive processing, storage, or bandwidth resources which prevents their widespread deployment. To fill this gap, we propose CPPL, a Compact Privacy Policy Language which compresses privacy policies by taking advantage of flexibly specifiable domain knowledge. Our evaluation shows that CPPL reduces policy sizes by two orders of magnitude compared to related work and can check several thousand of policies per second. This allows for individual per-data item policies in the context of cloud computing, the Internet of Things, and big data.},
    }

  • A. Mitseva, A. Panchenko, F. Lanze, M. Henze, K. Wehrle, and T. Engel, “POSTER: Fingerprinting Tor Hidden Services,” in Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS) – Poster Session, 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing patterns from the communication such as packet sizes, their order, and direction. Although recent study has shown that no existing fingerprinting method scales in Tor when applied in realistic settings, this does not consider the case of Tor hidden services. In this work, we propose a two-phase fingerprinting approach applied in the scope of Tor hidden services and explore its scalability. We show that the success of the only previously proposed fingerprinting attack against hidden services strongly depends on the Tor version used; i.e., it may be applicable to less than 1.5% of connections to hidden services due to its requirement for control of the first anonymization node. In contrast, in our method, the attacker needs merely to be somewhere on the link between the client and the first anonymization node and the attack can be mounted for any connection to a hidden service.

    @inproceedings{MPL+16,
    author = {Mitseva, Asya and Panchenko, Andriy and Lanze, Fabian and Henze, Martin and Wehrle, Klaus and Engel, Thomas},
    title = {{POSTER: Fingerprinting Tor Hidden Services}},
    booktitle = {Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS) - Poster Session},
    month = {10},
    year = {2016},
    doi = {10.1145/2976749.2989054},
    abstract = {The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing patterns from the communication such as packet sizes, their order, and direction. Although recent study has shown that no existing fingerprinting method scales in Tor when applied in realistic settings, this does not consider the case of Tor hidden services. In this work, we propose a two-phase fingerprinting approach applied in the scope of Tor hidden services and explore its scalability. We show that the success of the only previously proposed fingerprinting attack against hidden services strongly depends on the Tor version used; i.e., it may be applicable to less than 1.5% of connections to hidden services due to its requirement for control of the first anonymization node. In contrast, in our method, the attacker needs merely to be somewhere on the link between the client and the first anonymization node and the attack can be mounted for any connection to a hidden service.},
    }

  • R. Matzutt, O. Hohlfeld, M. Henze, R. Rawiel, J. H. Ziegeldorf, and K. Wehrle, “POSTER: I Don’t Want That Content! On the Risks of Exploiting Bitcoin’s Blockchain as a Content Store,” in Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS) – Poster Session, 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Bitcoin has revolutionized digital currencies and its underlying blockchain has been successfully applied to other domains. To be verifiable by every participating peer, the blockchain maintains every transaction in a persistent, distributed, and tamper-proof log that every participant needs to replicate locally. While this constitutes the central innovation of blockchain technology and is thus a desired property, it can also be abused in ways that are harmful to the overall system. We show for Bitcoin that blockchains potentially provide multiple ways to store (malicious and illegal) content that, once stored, cannot be removed and is replicated by every participating user. We study the evolution of content storage in Bitcoin’s blockchain, classify the stored content, and highlight implications of allowing the storage of arbitrary data in globally replicated blockchains.

    @inproceedings{MHH+16,
    author = {Matzutt, Roman and Hohlfeld, Oliver and Henze, Martin and Rawiel, Robin and Ziegeldorf, Jan Henrik and Wehrle, Klaus},
    title = {{POSTER: I Don't Want That Content! On the Risks of Exploiting Bitcoin's Blockchain as a Content Store}},
    booktitle = {Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS) - Poster Session},
    month = {10},
    year = {2016},
    doi = {10.1145/2976749.2989059},
    abstract = {Bitcoin has revolutionized digital currencies and its underlying blockchain has been successfully applied to other domains. To be verifiable by every participating peer, the blockchain maintains every transaction in a persistent, distributed, and tamper-proof log that every participant needs to replicate locally. While this constitutes the central innovation of blockchain technology and is thus a desired property, it can also be abused in ways that are harmful to the overall system. We show for Bitcoin that blockchains potentially provide multiple ways to store (malicious and illegal) content that, once stored, cannot be removed and is replicated by every participating user. We study the evolution of content storage in Bitcoin's blockchain, classify the stored content, and highlight implications of allowing the storage of arbitrary data in globally replicated blockchains.},
    }

  • M. Henze, J. Hiller, O. Hohlfeld, and K. Wehrle, “Moving Privacy-Sensitive Services from Public Clouds to Decentralized Private Clouds,” in 2016 IEEE International Conference on Cloud Engineering (IC2E) Workshops, 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Today’s public cloud services suffer from fundamental privacy issues, e.g., as demonstrated by the global surveillance disclosures. The lack of privacy in cloud computing stems from its inherent centrality. State-of-the-art approaches that increase privacy for cloud services either operate cloud-like services on users’ devices or encrypt data prior to upload to the cloud. However, these techniques jeopardize advantages of the cloud such as elasticity of processing resources. In contrast, we propose decentralized private clouds to allow users to protect their privacy and still benefit from the advantages of cloud computing. Our approach utilizes idle resources of friends and family to realize a trusted, decentralized system in which cloud services can be operated securely and in a privacy-preserving manner. We discuss our approach and substantiate its feasibility with initial experiments.

    @inproceedings{HHHW16,
    author = {Henze, Martin and Hiller, Jens and Hohlfeld, Oliver and Wehrle, Klaus},
    title = {{Moving Privacy-Sensitive Services from Public Clouds to Decentralized Private Clouds}},
    booktitle = {2016 IEEE International Conference on Cloud Engineering (IC2E) Workshops},
    month = {04},
    year = {2016},
    doi = {10.1109/IC2EW.2016.24},
    abstract = {Today's public cloud services suffer from fundamental privacy issues, e.g., as demonstrated by the global surveillance disclosures. The lack of privacy in cloud computing stems from its inherent centrality. State-of-the-art approaches that increase privacy for cloud services either operate cloud-like services on users' devices or encrypt data prior to upload to the cloud. However, these techniques jeopardize advantages of the cloud such as elasticity of processing resources. In contrast, we propose decentralized private clouds to allow users to protect their privacy and still benefit from the advantages of cloud computing. Our approach utilizes idle resources of friends and family to realize a trusted, decentralized system in which cloud services can be operated securely and in a privacy-preserving manner. We discuss our approach and substantiate its feasibility with initial experiments.},
    }

  • M. Henze, L. Hermerschmidt, D. Kerpen, R. Häußling, B. Rumpe, and K. Wehrle, “A Comprehensive Approach to Privacy in the Cloud-based Internet of Things,” Future Generation Computer Systems (FGCS), vol. 56, 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    In the near future, the Internet of Things is expected to penetrate all aspects of the physical world, including homes and urban spaces. In order to handle the massive amount of data that becomes collectible and to offer services on top of this data, the most convincing solution is the federation of the Internet of Things and cloud computing. Yet, the wide adoption of this promising vision, especially for application areas such as pervasive health care, assisted living, and smart cities, is hindered by severe privacy concerns of the individual users. Hence, user acceptance is a critical factor to turn this vision into reality. To address this critical factor and thus realize the cloud-based Internet of Things for a variety of different application areas, we present our comprehensive approach to privacy in this envisioned setting. We allow an individual user to enforce all her privacy requirements before any sensitive data is uploaded to the cloud, enable developers of cloud services to integrate privacy functionality already into the development process of cloud services, and offer users a transparent and adaptable interface for configuring their privacy requirements.

    @article{HHK+15,
    author = {Henze, Martin and Hermerschmidt, Lars and Kerpen, Daniel and H{\"a}u{\ss}ling, Roger and Rumpe, Bernhard and Wehrle, Klaus},
    journal = {Future Generation Computer Systems (FGCS)},
    volume = {56},
    title = {{A Comprehensive Approach to Privacy in the Cloud-based Internet of Things}},
    month = {03},
    year = {2016},
    doi = {10.1016/j.future.2015.09.016},
    abstract = {In the near future, the Internet of Things is expected to penetrate all aspects of the physical world, including homes and urban spaces. In order to handle the massive amount of data that becomes collectible and to offer services on top of this data, the most convincing solution is the federation of the Internet of Things and cloud computing. Yet, the wide adoption of this promising vision, especially for application areas such as pervasive health care, assisted living, and smart cities, is hindered by severe privacy concerns of the individual users. Hence, user acceptance is a critical factor to turn this vision into reality.
    To address this critical factor and thus realize the cloud-based Internet of Things for a variety of different application areas, we present our comprehensive approach to privacy in this envisioned setting. We allow an individual user to enforce all her privacy requirements before any sensitive data is uploaded to the cloud, enable developers of cloud services to integrate privacy functionality already into the development process of cloud services, and offer users a transparent and adaptable interface for configuring their privacy requirements.},
    }

  • A. Panchenko, F. Lanze, A. Zinnen, M. Henze, J. Pennekamp, K. Wehrle, and T. Engel, “Website Fingerprinting at Internet Scale,” in 23rd Annual Network and Distributed System Security Symposium (NDSS), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper – one of the weakest adversaries in the attacker model of anonymization networks such as Tor. In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being dramatically more computationally efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is orders of magnitude more computationally efficient and superior in terms of detection accuracy, we show for the first time that no existing method – including our own – scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.

    @inproceedings{PLZ+16,
    author = {Panchenko, Andriy and Lanze, Fabian and Zinnen, Andreas and Henze, Martin and Pennekamp, Jan and Wehrle, Klaus and Engel, Thomas},
    title = {{Website Fingerprinting at Internet Scale}},
    booktitle = {23rd Annual Network and Distributed System Security Symposium (NDSS)},
    month = {02},
    year = {2016},
    doi = {10.14722/ndss.2016.23477},
    abstract = {The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper - one of the weakest adversaries in the attacker model of anonymization networks such as Tor.
    In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being dramatically more computationally efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is orders of magnitude more computationally efficient and superior in terms of detection accuracy, we show for the first time that no existing method - including our own - scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.},
    }

2015

  • J. H. Ziegeldorf, J. Hiller, M. Henze, H. Wirtz, and K. Wehrle, “Bandwidth-optimized Secure Two-Party Computation of Minima,” in The 14th International Conference on Cryptology and Network Security (CANS), 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Secure Two-Party Computation (STC) allows two mutually untrusting parties to securely evaluate a function on their private inputs. While tremendous progress has been made towards reducing processing overheads, STC still incurs significant communication overhead that is in fact prohibitive when no high-speed network connection is available, e.g., when applications are run over a cellular network. In this paper, we consider the fundamental problem of securely computing a minimum and its argument, which is a basic building block in a wide range of applications that have been proposed as STCs, e.g., Nearest Neighbor Search, Auctions, and Biometric Matchings. We first comprehensively analyze and compare the communication overhead of implementations of the three major STC concepts, i.e., Yao’s Garbled Circuits, the Goldreich-Micali-Wigderson protocol, and Homomorphic Encryption. We then propose an algorithm for securely computing minima in the semi-honest model that, compared to the current state of the art, reduces communication overheads by 18% to 98%. Lower communication overheads result in faster runtimes in constrained networks and lower direct costs for users.

    @inproceedings{ZHH+15,
    author = {Ziegeldorf, Jan Henrik and Hiller, Jens and Henze, Martin and Wirtz, Hanno and Wehrle, Klaus},
    title = {{Bandwidth-optimized Secure Two-Party Computation of Minima}},
    booktitle = {The 14th International Conference on Cryptology and Network Security (CANS)},
    month = {12},
    year = {2015},
    doi = {10.1007/978-3-319-26823-1_14},
    abstract = {Secure Two-Party Computation (STC) allows two mutually untrusting parties to securely evaluate a function on their private inputs. While tremendous progress has been made towards reducing processing overheads, STC still incurs significant communication overhead that is in fact prohibitive when no high-speed network connection is available, e.g., when applications are run over a cellular network. In this paper, we consider the fundamental problem of securely computing a minimum and its argument, which is a basic building block in a wide range of applications that have been proposed as STCs, e.g., Nearest Neighbor Search, Auctions, and Biometric Matchings. We first comprehensively analyze and compare the communication overhead of implementations of the three major STC concepts, i.e., Yao's Garbled Circuits, the Goldreich-Micali-Wigderson protocol, and Homomorphic Encryption. We then propose an algorithm for securely computing minima in the semi-honest model that, compared to the current state of the art, reduces communication overheads by 18% to 98%. Lower communication overheads result in faster runtimes in constrained networks and lower direct costs for users.},
    }

  • J. H. Ziegeldorf, M. Henze, R. Hummen, and K. Wehrle, “Comparison-based Privacy: Nudging Privacy in Social Media,” in The 10th International Workshop on Data Privacy Management (DPM), 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Social media continues to lead imprudent users into oversharing, exposing them to various privacy threats. Recent research thus focuses on nudging the user in the ‘right’ direction. In this paper, we propose Comparison-based Privacy (CbP), a design paradigm for privacy nudges that overcomes the limitations and challenges of existing approaches. CbP is based on the observation that comparison is a natural human behavior. With CbP, we transfer this observation to decision-making processes in the digital world by enabling the user to compare herself along privacy-relevant metrics to user-selected comparison groups. In doing so, our approach provides a framework for the integration of existing nudges under a self-adaptive, user-centric norm of privacy. Thus, we expect CbP not only to provide technical improvements, but also to increase user acceptance of privacy nudges. We also show how CbP can be implemented and present preliminary results.

    @inproceedings{ZHHW15,
    author = {Ziegeldorf, Jan Henrik and Henze, Martin and Hummen, Ren{\'e} and Wehrle, Klaus},
    title = {{Comparison-based Privacy: Nudging Privacy in Social Media}},
    booktitle = {The 10th International Workshop on Data Privacy Management (DPM)},
    month = {09},
    year = {2015},
    doi = {10.1007/978-3-319-29883-2_15},
    abstract = {Social media continues to lead imprudent users into oversharing, exposing them to various privacy threats. Recent research thus focuses on nudging the user in the 'right' direction. In this paper, we propose Comparison-based Privacy (CbP), a design paradigm for privacy nudges that overcomes the limitations and challenges of existing approaches. CbP is based on the observation that comparison is a natural human behavior. With CbP, we transfer this observation to decision-making processes in the digital world by enabling the user to compare herself along privacy-relevant metrics to user-selected comparison groups. In doing so, our approach provides a framework for the integration of existing nudges under a self-adaptive, user-centric norm of privacy. Thus, we expect CbP not only to provide technical improvements, but also to increase user acceptance of privacy nudges. We also show how CbP can be implemented and present preliminary results.},
    }

  • J. H. Ziegeldorf, J. Metzke, M. Henze, and K. Wehrle, “Choose Wisely: A Comparison of Secure Two-Party Computation Frameworks,” in 2015 IEEE Security and Privacy Workshops, 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Secure Two-Party Computation (STC), despite being a powerful tool for privacy engineers, is rarely used in practice for two reasons: i) STCs incur significant overheads and ii) developing efficient STCs requires expert knowledge. Recent works propose a variety of frameworks that address these problems. However, the varying assumptions, scenarios, and benchmarks in these works render results incomparable. It is thus hard, if not impossible, for an inexperienced developer of STCs to choose the best framework for her task. In this paper, we present a thorough quantitative performance analysis of recent STC frameworks. Our results reveal significant performance differences and we identify potential for optimizations as well as new research directions for STC. Complemented by a qualitative discussion of the frameworks’ usability, our results provide privacy engineers with a dependable basis for deciding on the right STC framework for their application.

    @inproceedings{ZMHW15,
    author = {Ziegeldorf, Jan Henrik and Metzke, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{Choose Wisely: A Comparison of Secure Two-Party Computation Frameworks}},
    booktitle = {2015 IEEE Security and Privacy Workshops},
    month = {05},
    year = {2015},
    doi = {10.1109/SPW.2015.9},
    abstract = {Secure Two-Party Computation (STC), despite being a powerful tool for privacy engineers, is rarely used in practice for two reasons: i) STCs incur significant overheads and ii) developing efficient STCs requires expert knowledge. Recent works propose a variety of frameworks that address these problems. However, the varying assumptions, scenarios, and benchmarks in these works render results incomparable. It is thus hard, if not impossible, for an inexperienced developer of STCs to choose the best framework for her task. In this paper, we present a thorough quantitative performance analysis of recent STC frameworks. Our results reveal significant performance differences and we identify potential for optimizations as well as new research directions for STC. Complemented by a qualitative discussion of the frameworks' usability, our results provide privacy engineers with a dependable basis for deciding on the right STC framework for their application.},
    }

  • J. H. Ziegeldorf, F. Grossmann, M. Henze, N. Inden, and K. Wehrle, “CoinParty: Secure Multi-Party Mixing of Bitcoins,” in The Fifth ACM Conference on Data and Application Security and Privacy (CODASPY), 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin’s promise of anonymity is broken as recent work shows how Bitcoin’s blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user’s Bitcoins with other users’ coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By the application of threshold signatures to Bitcoin mixing, CoinParty achieves anonymity orders of magnitude higher than related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain, and is the first among related approaches to provide plausible deniability.

    @inproceedings{ZGH+15,
    author = {Ziegeldorf, Jan Henrik and Grossmann, Fred and Henze, Martin and Inden, Nicolas and Wehrle, Klaus},
    title = {{CoinParty: Secure Multi-Party Mixing of Bitcoins}},
    booktitle = {The Fifth ACM Conference on Data and Application Security and Privacy (CODASPY)},
    month = {03},
    year = {2015},
    doi = {10.1145/2699026.2699100},
    abstract = {Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin's promise of anonymity is broken as recent work shows how Bitcoin's blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user's Bitcoins with other users' coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By the application of threshold signatures to Bitcoin mixing, CoinParty achieves anonymity orders of magnitude higher than related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain, and is the first among related approaches to provide plausible deniability.},
    }

2014

  • M. Henze, R. Hummen, R. Matzutt, and K. Wehrle, “A Trust Point-based Security Architecture for Sensor Data in the Cloud,” in Trusted Cloud Computing, H. Krcmar, R. Reussner, and B. Rumpe, Eds., Springer, 2014.
    [BibTeX] [Abstract] [DOI]

    The SensorCloud project aims at enabling the use of elastic, on-demand resources of today’s Cloud offers for the storage and processing of sensed information about the physical world. Recent privacy concerns regarding the Cloud computing paradigm, however, constitute an adoption barrier that must be overcome to leverage the full potential of the envisioned scenario. To this end, a key goal of the SensorCloud project is to develop a security architecture that offers full access control to the data owner when outsourcing her sensed information to the Cloud. The central idea of this security architecture is the introduction of the trust point, a security-enhanced gateway at the border of the information sensing network. Based on a security analysis of the SensorCloud scenario, this chapter presents the design and implementation of the main components of our proposed security architecture. Our evaluation results confirm the feasibility of our proposed architecture with respect to the elastic, on-demand resources of today’s commodity Cloud offers.

    @incollection{HHMW14,
    author = {Henze, Martin and Hummen, Ren{\'e} and Matzutt, Roman and Wehrle, Klaus},
    title = {{A Trust Point-based Security Architecture for Sensor Data in the Cloud}},
    booktitle = {Trusted Cloud Computing},
    editor = {Krcmar, Helmut and Reussner, Ralf and Rumpe, Bernhard},
    month = {12},
    year = {2014},
    publisher = {Springer},
    doi = {10.1007/978-3-319-12718-7_6},
    abstract = {The SensorCloud project aims at enabling the use of elastic, on-demand resources of today's Cloud offers for the storage and processing of sensed information about the physical world. Recent privacy concerns regarding the Cloud computing paradigm, however, constitute an adoption barrier that must be overcome to leverage the full potential of the envisioned scenario. To this end, a key goal of the SensorCloud project is to develop a security architecture that offers full access control to the data owner when outsourcing her sensed information to the Cloud. The central idea of this security architecture is the introduction of the trust point, a security-enhanced gateway at the border of the information sensing network. Based on a security analysis of the SensorCloud scenario, this chapter presents the design and implementation of the main components of our proposed security architecture. Our evaluation results confirm the feasibility of our proposed architecture with respect to the elastic, on-demand resources of today's commodity Cloud offers.},
    }

  • M. Eggert, R. Häußling, M. Henze, L. Hermerschmidt, R. Hummen, D. Kerpen, A. Navarro Pérez, B. Rumpe, D. Thißen, and K. Wehrle, “SensorCloud: Towards the Interdisciplinary Development of a Trustworthy Platform for Globally Interconnected Sensors and Actuators,” in Trusted Cloud Computing, H. Krcmar, R. Reussner, and B. Rumpe, Eds., Springer, 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Although Cloud Computing promises to lower IT costs and increase users’ productivity in everyday life, the unattractive aspect of this new technology is that the user no longer owns all the devices which process personal data. To lower scepticism, the project SensorCloud investigates techniques to understand and compensate for these adoption barriers in a scenario consisting of cloud applications that utilize sensors and actuators placed in private places. This work provides an interdisciplinary overview of the social and technical core research challenges for the trustworthy integration of sensor and actuator devices with the Cloud Computing paradigm. Most importantly, these challenges include i) ease of development, ii) security and privacy, and iii) social dimensions of a cloud-based system which integrates into private life. When these challenges are tackled in the development of future cloud systems, the attractiveness of new use cases in a sensor-enabled world will be considerably increased for users who currently do not trust the Cloud.

    @incollection{EHH+14,
    author = {Eggert, Michael and H{\"a}u{\ss}ling, Roger and Henze, Martin and Hermerschmidt, Lars and Hummen, Ren{\'e} and Kerpen, Daniel and Navarro P{\'e}rez, Antonio and Rumpe, Bernhard and Thi{\ss}en, Dirk and Wehrle, Klaus},
    title = {{SensorCloud: Towards the Interdisciplinary Development of a Trustworthy Platform for Globally Interconnected Sensors and Actuators}},
    booktitle = {Trusted Cloud Computing},
    editor = {Krcmar, Helmut and Reussner, Ralf and Rumpe, Bernhard},
    month = {12},
    year = {2014},
    publisher = {Springer},
    doi = {10.1007/978-3-319-12718-7_13},
    abstract = {Although Cloud Computing promises to lower IT costs and increase users' productivity in everyday life, the unattractive aspect of this new technology is that the user no longer owns all the devices which process personal data. To lower scepticism, the project SensorCloud investigates techniques to understand and compensate for these adoption barriers in a scenario consisting of cloud applications that utilize sensors and actuators placed in private places. This work provides an interdisciplinary overview of the social and technical core research challenges for the trustworthy integration of sensor and actuator devices with the Cloud Computing paradigm. Most importantly, these challenges include i) ease of development, ii) security and privacy, and iii) social dimensions of a cloud-based system which integrates into private life. When these challenges are tackled in the development of future cloud systems, the attractiveness of new use cases in a sensor-enabled world will be considerably increased for users who currently do not trust the Cloud.},
    }

  • M. Henze, S. Bereda, R. Hummen, and K. Wehrle, “SCSlib: Transparently Accessing Protected Sensor Data in the Cloud,” in The 6th International Symposium on Applications of Ad hoc and Sensor Networks (AASNET), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    As sensor networks are increasingly deployed in real-world scenarios such as home and industrial automation, there is a similarly growing demand for analyzing, consolidating, and storing the data collected by these networks. The dynamic, on-demand resources offered by today’s cloud computing environments promise to satisfy this demand. However, prevalent security concerns still hinder the integration of sensor networks and cloud computing. In this paper, we show how recent progress in standardization can provide the basis for protecting data from diverse sensor devices when outsourcing data processing and storage to the cloud. To this end, we present our Sensor Cloud Security Library (SCSlib) that enables cloud service developers to transparently access cryptographically protected sensor data in the cloud. SCSlib specifically allows domain specialists who are not security experts to build secure cloud services. Our evaluation proves the feasibility and applicability of SCSlib for commodity cloud computing environments.

    @inproceedings{HBHW14,
    author = {Henze, Martin and Bereda, Sebastian and Hummen, Ren{\'e} and Wehrle, Klaus},
    title = {{SCSlib: Transparently Accessing Protected Sensor Data in the Cloud}},
    booktitle = {The 6th International Symposium on Applications of Ad hoc and Sensor Networks (AASNET)},
    series = {Procedia Computer Science},
    volume = {37},
    month = {09},
    year = {2014},
    doi = {10.1016/j.procs.2014.08.055},
    abstract = {As sensor networks are increasingly deployed in real-world scenarios such as home and industrial automation, there is a similarly growing demand for analyzing, consolidating, and storing the data collected by these networks. The dynamic, on-demand resources offered by today's cloud computing environments promise to satisfy this demand. However, prevalent security concerns still hinder the integration of sensor networks and cloud computing. In this paper, we show how recent progress in standardization can provide the basis for protecting data from diverse sensor devices when outsourcing data processing and storage to the cloud. To this end, we present our Sensor Cloud Security Library (SCSlib) that enables cloud service developers to transparently access cryptographically protected sensor data in the cloud. SCSlib specifically allows domain specialists who are not security experts to build secure cloud services. Our evaluation proves the feasibility and applicability of SCSlib for commodity cloud computing environments.},
    }

  • M. Henze, L. Hermerschmidt, D. Kerpen, R. Häußling, B. Rumpe, and K. Wehrle, “User-driven Privacy Enforcement for Cloud-based Services in the Internet of Things,” in 2014 International Conference on Future Internet of Things and Cloud (FiCloud), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Internet of Things devices are envisioned to penetrate essentially all aspects of life, including homes and urban spaces, in use cases such as health care, assisted living, and smart cities. One often proposed solution for dealing with the massive amount of data collected by these devices and offering services on top of them is the federation of the Internet of Things and cloud computing. However, user acceptance of such systems is a critical factor that hinders the adoption of this promising approach due to severe privacy concerns. We present UPECSI, an approach for user-driven privacy enforcement for cloud-based services in the Internet of Things to address this critical factor. UPECSI enables enforcement of all privacy requirements of the user once her sensitive data leaves the border of her network, provides a novel approach for the integration of privacy functionality into the development process of cloud-based services, and offers the user an adaptable and transparent configuration of her privacy requirements. Hence, UPECSI demonstrates an approach for realizing user-accepted cloud services in the Internet of Things.

    @inproceedings{HHK+14,
    author = {Henze, Martin and Hermerschmidt, Lars and Kerpen, Daniel and H{\"a}u{\ss}ling, Roger and Rumpe, Bernhard and Wehrle, Klaus},
    title = {{User-driven Privacy Enforcement for Cloud-based Services in the Internet of Things}},
    booktitle = {2014 International Conference on Future Internet of Things and Cloud (FiCloud)},
    month = {08},
    year = {2014},
    doi = {10.1109/FiCloud.2014.38},
    abstract = {Internet of Things devices are envisioned to penetrate essentially all aspects of life, including homes and urban spaces, in use cases such as health care, assisted living, and smart cities. One often proposed solution for dealing with the massive amount of data collected by these devices and offering services on top of them is the federation of the Internet of Things and cloud computing. However, user acceptance of such systems is a critical factor that hinders the adoption of this promising approach due to severe privacy concerns. We present UPECSI, an approach for user-driven privacy enforcement for cloud-based services in the Internet of Things to address this critical factor. UPECSI enables enforcement of all privacy requirements of the user once her sensitive data leaves the border of her network, provides a novel approach for the integration of privacy functionality into the development process of cloud-based services, and offers the user an adaptable and transparent configuration of her privacy requirements. Hence, UPECSI demonstrates an approach for realizing user-accepted cloud services in the Internet of Things.},
    }

  • J. H. Ziegeldorf, N. Viol, M. Henze, and K. Wehrle, “POSTER: Privacy-preserving Indoor Localization,” in 7th ACM Conference on Security and Privacy in Wireless & Mobile Networks (WiSec) – Poster Session, 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Upcoming WiFi-based localization systems for indoor environments face a conflict of privacy interests: Server-side localization violates location privacy of the users, while localization on the user’s device forces the localization provider to disclose the details of the system, e.g., sophisticated classification models. We show how Secure Two-Party Computation can be used to reconcile privacy interests in a state-of-the-art localization system. Our approach provides strong privacy guarantees for all involved parties, while achieving room-level localization accuracy at reasonable overheads.

    @inproceedings{ZVHW14,
    author = {Ziegeldorf, Jan Henrik and Viol, Nicolai and Henze, Martin and Wehrle, Klaus},
    title = {{POSTER: Privacy-preserving Indoor Localization}},
    booktitle = {7th ACM Conference on Security and Privacy in Wireless \& Mobile Networks (WiSec) - Poster Session},
    month = {07},
    year = {2014},
    doi = {10.13140/2.1.2847.4886},
    abstract = {Upcoming WiFi-based localization systems for indoor environments face a conflict of privacy interests: Server-side localization violates location privacy of the users, while localization on the user's device forces the localization provider to disclose the details of the system, e.g., sophisticated classification models. We show how Secure Two-Party Computation can be used to reconcile privacy interests in a state-of-the-art localization system. Our approach provides strong privacy guarantees for all involved parties, while achieving room-level localization accuracy at reasonable overheads.},
    }

  • F. Schmidt, M. Henze, and K. Wehrle, “Piccett: Protocol-Independent Classification of Corrupted Error-Tolerant Traffic,” in 18th IEEE Symposium on Computers and Communications (ISCC), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Bit errors regularly occur in wireless communications. While many media streaming codecs in principle provide bit error tolerance and resilience, packet-based communication typically drops packets that are not transmitted perfectly. We present PICCETT, a method to heuristically identify which connections corrupted packets belong to, and to assign them to the correct applications instead of dropping them. PICCETT is a receiver-side classifier that requires no support from the sender or network, and no information about which communication protocols are used. We show that PICCETT can assign virtually all packets to the correct connections at bit error rates up to 7–10%, and prevents misassignments even during error bursts. PICCETT’s classification algorithm needs no prior offline training and both trains and classifies fast enough to easily keep up with IEEE 802.11 communication speeds.

    @inproceedings{SHW14,
    author = {Schmidt, Florian and Henze, Martin and Wehrle, Klaus},
    title = {{Piccett: Protocol-Independent Classification of Corrupted Error-Tolerant Traffic}},
    booktitle = {18th IEEE Symposium on Computers and Communications (ISCC)},
    month = {06},
    year = {2014},
    doi = {10.1109/ISCC.2014.6912582},
    abstract = {Bit errors regularly occur in wireless communications. While many media streaming codecs in principle provide bit error tolerance and resilience, packet-based communication typically drops packets that are not transmitted perfectly. We present PICCETT, a method to heuristically identify which connections corrupted packets belong to, and to assign them to the correct applications instead of dropping them. PICCETT is a receiver-side classifier that requires no support from the sender or network, and no information about which communication protocols are used. We show that PICCETT can assign virtually all packets to the correct connections at bit error rates up to 7--10\%, and prevents misassignments even during error bursts. PICCETT's classification algorithm needs no prior offline training and both trains and classifies fast enough to easily keep up with IEEE 802.11 communication speeds.},
    }

  • I. Aktas, M. Henze, M. H. Alizai, K. Möllering, and K. Wehrle, “Graph-based Redundancy Removal Approach for Multiple Cross-Layer Interactions,” in 2014 Sixth International Conference on Communication Systems and Networks (COMSNETS), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Research has shown that the availability of cross-layer information from different protocol layers enables adaptivity advantages of applications and protocols which significantly enhance the system performance. However, the development of such cross-layer interactions typically residing in the OS is very difficult mainly due to limited interfaces. The development gets even more complex for multiple running cross-layer interactions which may be added by independent developers without coordination, causing (i) redundancy in cross-layer interactions leading to a waste of memory and CPU time and (ii) conflicting cross-layer interactions. In this paper, we focus on the former problem and propose a graph-based redundancy removal algorithm that automatically detects and resolves such redundancies without any feedback from the developer. We demonstrate the applicability of our approach for the cross-layer architecture CRAWLER that utilizes module compositions to realize cross-layer interactions. Our evaluation shows that our approach effectively resolves redundancies at runtime.

    @inproceedings{AHA+14,
    author = {Aktas, Ismet and Henze, Martin and Alizai, Muhammad Hamad and M{\"o}llering, Kevin and Wehrle, Klaus},
    title = {{Graph-based Redundancy Removal Approach for Multiple Cross-Layer Interactions}},
    booktitle = {2014 Sixth International Conference on Communication Systems and Networks (COMSNETS)},
    month = {01},
    year = {2014},
    doi = {10.1109/COMSNETS.2014.6734899},
    abstract = {Research has shown that the availability of cross-layer information from different protocol layers enables adaptivity advantages of applications and protocols which significantly enhance the system performance. However, the development of such cross-layer interactions typically residing in the OS is very difficult mainly due to limited interfaces. The development gets even more complex for multiple running cross-layer interactions which may be added by independent developers without coordination, causing (i) redundancy in cross-layer interactions leading to a waste of memory and CPU time and (ii) conflicting cross-layer interactions. In this paper, we focus on the former problem and propose a graph-based redundancy removal algorithm that automatically detects and resolves such redundancies without any feedback from the developer. We demonstrate the applicability of our approach for the cross-layer architecture CRAWLER that utilizes module compositions to realize cross-layer interactions. Our evaluation shows that our approach effectively resolves redundancies at runtime.},
    }

2013

  • M. Henze, R. Hummen, R. Matzutt, D. Catrein, and K. Wehrle, “Maintaining User Control While Storing and Processing Sensor Data in the Cloud,” International Journal of Grid and High Performance Computing (IJGHPC), vol. 5, iss. 4, 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    Clouds provide a platform for efficiently and flexibly aggregating, storing, and processing large amounts of data. Eventually, sensor networks will automatically collect such data. A particular challenge regarding sensor data in Clouds is the inherent sensitive nature of sensed information. For current Cloud platforms, the data owner loses control over her sensor data once it enters the Cloud. This imposes a major adoption barrier for bridging Cloud computing and sensor networks, which we address henceforth. After analyzing threats to sensor data in Clouds, the authors propose a Cloud architecture that enables end-to-end control over sensitive sensor data by the data owner. The authors introduce a well-defined entry point from the sensor network into the Cloud, which enforces end-to-end data protection, applies encryption and integrity protection, and grants data access. Additionally, the authors enforce strict isolation of services. The authors show the feasibility and scalability of their Cloud architecture using a prototype and measurements.

    @article{HHM+13,
    author = {Henze, Martin and Hummen, Ren{\'e} and Matzutt, Roman and Catrein, Daniel and Wehrle, Klaus},
    journal = {International Journal of Grid and High Performance Computing (IJGHPC)},
    title = {{Maintaining User Control While Storing and Processing Sensor Data in the Cloud}},
    month = {12},
    year = {2013},
    volume = {5},
    number = {4},
    doi = {10.4018/ijghpc.2013100107},
    abstract = {Clouds provide a platform for efficiently and flexibly aggregating, storing, and processing large amounts of data. Eventually, sensor networks will automatically collect such data. A particular challenge regarding sensor data in Clouds is the inherent sensitive nature of sensed information. For current Cloud platforms, the data owner loses control over her sensor data once it enters the Cloud. This imposes a major adoption barrier for bridging Cloud computing and sensor networks, which we address henceforth. After analyzing threats to sensor data in Clouds, the authors propose a Cloud architecture that enables end-to-end control over sensitive sensor data by the data owner. The authors introduce a well-defined entry point from the sensor network into the Cloud, which enforces end-to-end data protection, applies encryption and integrity protection, and grants data access. Additionally, the authors enforce strict isolation of services. The authors show the feasibility and scalability of their Cloud architecture using a prototype and measurements.},
    }

  • M. Henze, M. Großfengels, M. Koprowski, and K. Wehrle, “Towards Data Handling Requirements-aware Cloud Computing,” in 2013 IEEE International Conference on Cloud Computing Technology and Science (CloudCom) – Poster Session, 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns which arise when outsourcing sensitive data to the cloud. One important group is those concerns regarding the handling of data. On the one hand, users and companies have requirements how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed in order to enable the affected users and companies to utilize cloud computing. However, we observe that current cloud offers, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud and providers in the cloud stack – even if they were willing to meet these requirements – can thus not treat the data adequately. In this paper, we identify and discuss the challenges for enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.

    @inproceedings{HGKW13,
    author = {Henze, Martin and Gro{\ss}fengels, Marcel and Koprowski, Maik and Wehrle, Klaus},
    title = {{Towards Data Handling Requirements-aware Cloud Computing}},
    booktitle = {2013 IEEE International Conference on Cloud Computing Technology and Science (CloudCom) - Poster Session},
    month = {12},
    year = {2013},
    doi = {10.1109/CloudCom.2013.145},
    abstract = {The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns which arise when outsourcing sensitive data to the cloud. One important group are those concerns regarding the handling of data. On the one hand, users and companies have requirements how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed in order to enable the affected users and companies to utilize cloud computing.
    However, we observe that current cloud offers, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud and providers in the cloud stack - even if they were willing to meet these requirements - can thus not treat the data adequately. In this paper, we identify and discuss the challenges for enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.},
    }

  • R. Hummen, J. Hiller, M. Henze, and K. Wehrle, “Slimfit – A HIP DEX Compression Layer for the IP-based Internet of Things,” in 1st International Workshop on Internet of Things Communications and Technologies (IoT), 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    The HIP Diet EXchange (DEX) is an end-to-end security protocol designed for constrained network environments in the IP-based Internet of Things (IoT). It is a variant of the IETF-standardized Host Identity Protocol (HIP) with a refined protocol design that targets performance improvements of the original HIP protocol. To stay compatible with existing protocol extensions, the HIP DEX specification thereby aims at preserving the general HIP architecture and protocol semantics. As a result, HIP DEX inherits the verbose HIP packet structure and currently does not consider the available potential to tailor the transmission overhead to constrained IoT environments. In this paper, we present Slimfit, a novel compression layer for HIP DEX. Most importantly, Slimfit i) preserves the HIP DEX security guarantees, ii) allows for stateless (de-)compression at the communication end-points or an on-path gateway, and iii) maintains the flexible packet structure of the original HIP protocol. Moreover, we show that Slimfit is also directly applicable to the original HIP protocol. Our evaluation results indicate a maximum compression ratio of 1.55 for Slimfit-compressed HIP DEX packets. Furthermore, Slimfit reduces HIP DEX packet fragmentation by 25 % and thus further decreases the transmission overhead for lossy network links. Finally, the compression of HIP DEX packets leads to a reduced processing time at the network layers below Slimfit. As a result, processing of Slimfit-compressed packets shows an overall performance gain at the HIP DEX peers.

    @inproceedings{HHHW13,
    author = {Hummen, Ren{\'e} and Hiller, Jens and Henze, Martin and Wehrle, Klaus},
    title = {{Slimfit - A HIP DEX Compression Layer for the IP-based Internet of Things}},
    booktitle = {1st International Workshop on Internet of Things Communications and Technologies (IoT)},
    month = {10},
    year = {2013},
    doi = {10.1109/WiMOB.2013.6673370},
    abstract = {The HIP Diet EXchange (DEX) is an end-to-end security protocol designed for constrained network environments in the IP-based Internet of Things (IoT). It is a variant of the IETF-standardized Host Identity Protocol (HIP) with a refined protocol design that targets performance improvements of the original HIP protocol. To stay compatible with existing protocol extensions, the HIP DEX specification thereby aims at preserving the general HIP architecture and protocol semantics. As a result, HIP DEX inherits the verbose HIP packet structure and currently does not consider the available potential to tailor the transmission overhead to constrained IoT environments. In this paper, we present Slimfit, a novel compression layer for HIP DEX. Most importantly, Slimfit i) preserves the HIP DEX security guarantees, ii) allows for stateless (de-)compression at the communication end-points or an on-path gateway, and iii) maintains the flexible packet structure of the original HIP protocol. Moreover, we show that Slimfit is also directly applicable to the original HIP protocol. Our evaluation results indicate a maximum compression ratio of 1.55 for Slimfit-compressed HIP DEX packets. Furthermore, Slimfit reduces HIP DEX packet fragmentation by 25\% and thus further decreases the transmission overhead for lossy network links. Finally, the compression of HIP DEX packets leads to a reduced processing time at the network layers below Slimfit. As a result, processing of Slimfit-compressed packets shows an overall performance gain at the HIP DEX peers.},
    }

  • M. Henze, R. Hummen, and K. Wehrle, “The Cloud Needs Cross-Layer Data Handling Annotations,” in 2013 IEEE Security and Privacy Workshops, 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    Nowadays, an ever-increasing number of service providers takes advantage of the cloud computing paradigm in order to efficiently offer services to private users, businesses, and governments. However, while cloud computing allows to transparently scale back-end functionality such as computing and storage, the implied distributed sharing of resources has severe implications when sensitive or otherwise privacy-relevant data is concerned. These privacy implications primarily stem from the in-transparency of the involved backend providers of a cloud-based service and their dedicated data handling processes. Likewise, back-end providers cannot determine the sensitivity of data that is stored or processed in the cloud. Hence, they have no means to obey the underlying privacy regulations and contracts automatically. As the cloud computing paradigm further evolves towards federated cloud environments, the envisioned integration of different cloud platforms adds yet another layer to the existing in-transparencies. In this paper, we discuss initial ideas on how to overcome these existing and dawning data handling in-transparencies and the accompanying privacy concerns. To this end, we propose to annotate data with sensitivity information as it leaves the control boundaries of the data owner and travels through to the cloud environment. This allows to signal privacy properties across the layers of the cloud computing architecture and enables the different stakeholders to react accordingly.

    @inproceedings{HHW13,
    author = {Henze, Martin and Hummen, Ren{\'e} and Wehrle, Klaus},
    booktitle = {2013 IEEE Security and Privacy Workshops},
    title = {{The Cloud Needs Cross-Layer Data Handling Annotations}},
    month = {05},
    year = {2013},
    doi = {10.1109/SPW.2013.31},
    abstract = {Nowadays, an ever-increasing number of service providers takes advantage of the cloud computing paradigm in order to efficiently offer services to private users, businesses, and governments. However, while cloud computing allows to transparently scale back-end functionality such as computing and storage, the implied distributed sharing of resources has severe implications when sensitive or otherwise privacy-relevant data is concerned. These privacy implications primarily stem from the in-transparency of the involved backend providers of a cloud-based service and their dedicated data handling processes. Likewise, back-end providers cannot determine the sensitivity of data that is stored or processed in the cloud. Hence, they have no means to obey the underlying privacy regulations and contracts automatically. As the cloud computing paradigm further evolves towards federated cloud environments, the envisioned integration of different cloud platforms adds yet another layer to the existing in-transparencies.
    In this paper, we discuss initial ideas on how to overcome these existing and dawning data handling in-transparencies and the accompanying privacy concerns. To this end, we propose to annotate data with sensitivity information as it leaves the control boundaries of the data owner and travels through to the cloud environment. This allows to signal privacy properties across the layers of the cloud computing architecture and enables the different stakeholders to react accordingly.},
    }

  • R. Hummen, J. Hiller, H. Wirtz, M. Henze, H. Shafagh, and K. Wehrle, “6LoWPAN Fragmentation Attacks and Mitigation Mechanisms,” in Proceedings of the sixth ACM Conference on Security and privacy in Wireless and Mobile Networks (WiSec), 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    6LoWPAN is an IPv6 adaptation layer that defines mechanisms to make IP connectivity viable for tightly resource-constrained devices that communicate over low power, lossy links such as IEEE 802.15.4. It is expected to be used in a variety of scenarios ranging from home automation to industrial control systems. To support the transmission of IPv6 packets exceeding the maximum frame size of the link layer, 6LoWPAN defines a packet fragmentation mechanism. However, the best effort semantics for fragment transmissions, the lack of authentication at the 6LoWPAN layer, and the scarce memory resources of the networked devices render the design of the fragmentation mechanism vulnerable. In this paper, we provide a detailed security analysis of the 6LoWPAN fragmentation mechanism. We identify two attacks at the 6LoWPAN design-level that enable an attacker to (selectively) prevent correct packet reassembly on a target node at considerably low cost. Specifically, an attacker can mount our identified attacks by only sending a single protocol-compliant 6LoWPAN fragment. To counter these attacks, we propose two complementary, lightweight defense mechanisms, the content chaining scheme and the split buffer approach. Our evaluation shows the practicality of the identified attacks as well as the effectiveness of our proposed defense mechanisms at modest trade-offs.

    @inproceedings{HHW+13,
    author = {Hummen, Ren{\'e} and Hiller, Jens and Wirtz, Hanno and Henze, Martin and Shafagh, Hossein and Wehrle, Klaus},
    title = {{6LoWPAN Fragmentation Attacks and Mitigation Mechanisms}},
    booktitle = {Proceedings of the sixth ACM Conference on Security and privacy in Wireless and Mobile Networks (WiSec)},
    month = {04},
    year = {2013},
    doi = {10.1145/2462096.2462107},
    abstract = {6LoWPAN is an IPv6 adaptation layer that defines mechanisms to make IP connectivity viable for tightly resource-constrained devices that communicate over low power, lossy links such as IEEE 802.15.4. It is expected to be used in a variety of scenarios ranging from home automation to industrial control systems. To support the transmission of IPv6 packets exceeding the maximum frame size of the link layer, 6LoWPAN defines a packet fragmentation mechanism. However, the best effort semantics for fragment transmissions, the lack of authentication at the 6LoWPAN layer, and the scarce memory resources of the networked devices render the design of the fragmentation mechanism vulnerable.
    In this paper, we provide a detailed security analysis of the 6LoWPAN fragmentation mechanism. We identify two attacks at the 6LoWPAN design-level that enable an attacker to (selectively) prevent correct packet reassembly on a target node at considerably low cost. Specifically, an attacker can mount our identified attacks by only sending a single protocol-compliant 6LoWPAN fragment. To counter these attacks, we propose two complementary, lightweight defense mechanisms, the content chaining scheme and the split buffer approach. Our evaluation shows the practicality of the identified attacks as well as the effectiveness of our proposed defense mechanisms at modest trade-offs.},
    }

2012

  • R. Hummen, M. Henze, D. Catrein, and K. Wehrle, “A Cloud Design for User-controlled Storage and Processing of Sensor Data,” in 2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom), 2012.
    [BibTeX] [Abstract] [PDF] [DOI]

    Ubiquitous sensing environments such as sensor networks collect large amounts of data. This data volume is destined to grow even further with the vision of the Internet of Things. Cloud computing promises to elastically store and process such sensor data. As an additional benefit, storage and processing in the Cloud enables the efficient aggregation and analysis of information from different data sources. However, sensor data often contains privacy-relevant or otherwise sensitive information. For current Cloud platforms, the data owner loses control over her data once it enters the Cloud. This imposes adoption barriers due to legal or privacy concerns. Hence, a Cloud design is required that the data owner can trust to handle her sensitive data securely. In this paper, we analyze and define properties that a trusted Cloud design has to fulfill. Based on this analysis, we present the security architecture of SensorCloud. Our proposed security architecture enforces end-to-end data access control by the data owner reaching from the sensor network to the Cloud storage and processing subsystems as well as strict isolation up to the service-level. We evaluate the validity and feasibility of our Cloud design with an analysis of our early prototype. Our results show that our proposed security architecture is a promising extension of today’s Cloud offers.

    @inproceedings{HHCW12,
    author = {Hummen, Ren{\'e} and Henze, Martin and Catrein, Daniel and Wehrle, Klaus},
    booktitle = {2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom)},
    title = {{A Cloud Design for User-controlled Storage and Processing of Sensor Data}},
    month = {12},
    year = {2012},
    doi = {10.1109/CloudCom.2012.6427523},
    abstract = {Ubiquitous sensing environments such as sensor networks collect large amounts of data. This data volume is destined to grow even further with the vision of the Internet of Things. Cloud computing promises to elastically store and process such sensor data. As an additional benefit, storage and processing in the Cloud enables the efficient aggregation and analysis of information from different data sources. However, sensor data often contains privacy-relevant or otherwise sensitive information. For current Cloud platforms, the data owner loses control over her data once it enters the Cloud. This imposes adoption barriers due to legal or privacy concerns. Hence, a Cloud design is required that the data owner can trust to handle her sensitive data securely. In this paper, we analyze and define properties that a trusted Cloud design has to fulfill. Based on this analysis, we present the security architecture of SensorCloud. Our proposed security architecture enforces end-to-end data access control by the data owner reaching from the sensor network to the Cloud storage and processing subsystems as well as strict isolation up to the service-level. We evaluate the validity and feasibility of our Cloud design with an analysis of our early prototype. Our results show that our proposed security architecture is a promising extension of today's Cloud offers.},
    }