Keynotes


Foundations of Transaction Fee Mechanism Design
Elaine Shi, Carnegie Mellon University

Space in a blockchain is a scarce resource. Cryptocurrencies today use auctions to decide which transactions get confirmed in each block. Intriguingly, classical auctions fail in such a decentralized environment, since even the auctioneer can be a strategic player. For example, the second-price auction is a gold standard in classical mechanism design. It fails, however, in the blockchain environment, since the miner can easily inject a bid that is epsilon smaller than the k-th price, where k is the block size. Moreover, the miner and users can also collude through the smart contract mechanisms available in modern cryptocurrencies.
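To make this failure concrete, here is a minimal sketch of the bid-injection attack (illustrative only, not code from the talk), assuming a k-th-price-style rule in which the top k bids are included and each winner pays the (k+1)-th highest bid, with all payments going to the miner:

    # Minimal sketch of the bid-injection attack on a k-th-price-style auction.
    # Assumption (not stated in the abstract): the top k bids win and each
    # winner pays the (k+1)-th highest bid, which the miner collects as fees.

    def kth_price_auction(bids, k):
        """Top-k bids win; every winner pays the (k+1)-th highest bid."""
        ranked = sorted(bids, reverse=True)
        price = ranked[k] if len(ranked) > k else 0.0
        return ranked[:k], price

    k = 3
    user_bids = [10.0, 8.0, 6.0, 2.0]                        # honest user bids
    _, honest_price = kth_price_auction(user_bids, k)        # price = 2.0

    eps = 0.01
    fake_bid = sorted(user_bids, reverse=True)[k - 1] - eps  # just below the k-th highest bid
    _, rigged_price = kth_price_auction(user_bids + [fake_bid], k)  # price = 5.99

    # The fake bid does not win, so it costs the miner nothing, yet it raises
    # the miner's fee revenue from 3 * 2.0 = 6.0 to roughly 3 * 6.0 = 17.97.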

I will talk about a new foundation for mechanism design in a decentralized environment. I will prove an impossibility result which rules out the existence of a dream transaction fee mechanism that incentivizes honest behavior for the user, the miner, and a miner-user coalition at the same time. I will then argue why the prior modeling choices are too draconian, and how we can overcome this lower bound by capturing hidden costs pertaining to certain deviations.

Elaine Shi

Elaine Shi is an Associate Professor at Carnegie Mellon University. Her research interests include cryptography, algorithms, and foundations of blockchains. Prior to CMU, she taught at the University of Maryland and Cornell University. She is a recipient of the Packard Fellowship, the Sloan Fellowship, the ONR YIP Award, the NSA Best Scientific Cybersecurity Paper Award, and various other best paper awards.




Machine learning is becoming less dependable
Nicholas Carlini, Google Brain

Machine learning has seen incredible progress over the past few years, and problems once seen as science fiction are now nearly trivial. However, current machine learning systems lack one important property: dependability. In this talk, I examine the (lack of) dependability of modern machine learning techniques when evaluated in adversarial environments. I argue that if we continue in the current direction, future machine learning systems will be even less reliable.

Nicholas Carlini

Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, and for this work has received best paper awards at ICML, USENIX Security, and IEEE S&P. He obtained his PhD from the University of California, Berkeley, in 2018.




Resilience and fault tolerance for extreme-scale scientific computing
Franck Cappello, Argonne National Laboratory

The race to exascale scientific computing has been a fierce international competition since circa 2007. Over the roughly 15 years since, research on resilience and fault tolerance for exascale computing has explored many territories, including the characterization of faults, errors, and failures in large HPC systems and a wide variety of mitigation methods. We will review the main results of this intense period of analysis and exploration, which produced the foundations for the solutions that will be used on the exascale systems in the United States. The community is now facing an important new problem, however: many numerical simulations will generate a flow of data too large to store, communicate, and analyze completely. Data reduction, and in particular lossy reduction and compression of scientific data, becomes a necessity. Because lossy data reduction removes information, it may directly affect the correctness of numerical simulations and thus the dependability of their results. A critical question therefore arises: how can we reduce scientific data by one or more orders of magnitude while keeping the same science and the same predictions? We will discuss this important new trend, recent results on scientific data reduction methods, and methods to assess the correctness of results produced from lossy-reduced scientific data.
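As a toy illustration of the "bounded error" idea behind such reduction (a sketch under our own assumptions, not the methods presented in the talk), one can quantize values against an absolute error bound and then verify that bound pointwise; production compressors such as SZ add prediction and entropy coding on top of this step:

    # Toy sketch of error-bounded lossy reduction: uniform quantization with an
    # absolute error bound, plus a pointwise check that the bound holds.
    # Real compressors (e.g., SZ) add prediction and entropy coding; this only
    # illustrates the correctness check, not an actual compressor.
    import numpy as np

    def compress(data, abs_err):
        step = 2.0 * abs_err                       # quantization bin width
        return np.round(data / step).astype(np.int64), step

    def decompress(codes, step):
        return codes.astype(np.float64) * step

    rng = np.random.default_rng(0)
    field = np.cumsum(rng.normal(size=1_000_000))  # synthetic, smooth-ish field
    codes, step = compress(field, abs_err=1e-2)
    recon = decompress(codes, step)

    # Quantization error is at most half a bin, i.e., the requested bound.
    assert np.max(np.abs(recon - field)) <= 1e-2 + 1e-12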

Franck Cappello

Franck Cappello is a senior computer scientist and R&D lead at Argonne National Laboratory. Franck started working on fault tolerance for high-performance scientific computing more than 20 years ago. With his students and collaborators, he has explored many different and complementary aspects of resilience and fault tolerance for scientific computing at extreme scale: fault/error/failure characterization, checkpointing, fault tolerance protocols, optimal checkpoint scheduling, silent data corruption detection and mitigation, and failure prediction, producing about 100 international publications in this domain. He led the resilience topic of the International Exascale Software Project and the European Exascale Software Initiative, which identified critical challenges of exascale computing in the US and Europe. Franck leads VeloC, an innovative asynchronous multi-level checkpointing project funded by the U.S. Exascale Computing Project (ECP) that will serve applications running on the US exascale systems. Starting in 2016, with the support of ECP, he began exploring lossy compression for scientific computing to address the increasing discrepancy between scientific application data set sizes and the capacities of HPC storage infrastructures. This research produced the SZ lossy compressor and the Z-checker tool for assessing the nature of lossy compression errors. Franck is an IEEE Fellow and a recipient of two prestigious R&D 100 Awards, the 2018 IEEE TCPP Outstanding Service Award, and the 2021 IEEE Transactions on Computers Award for Editorial Service and Excellence.




Sponsors

IEEE
IFIP