Description: Large AI Systems and Models with Privacy and Safety Analysis Workshop (LAMPS)
Topics: security, machine learning, deep learning, malware, vulnerability, intrusion detection, attacks, program analysis, botnets, adversarial examples
Carmela Troncoso is an Associate Professor at EPFL (Switzerland), where she heads the SPRING Lab. Her work focuses on analyzing, building, and deploying secure and privacy-preserving systems. Troncoso holds a Ph.D. in engineering from KU Leuven. Her work on privacy engineering has received multiple awards, and she was named to Fortune's 40 Under 40 in technology in 2020.
Decentralization is often seen as a key tool for achieving security and privacy. It has worked in a number of systems, where decentralization helps protect users' identities and data. It is therefore no surprise that a new wave of machine learning algorithms opts for decentralization to increase data privacy. In this talk, we will analyze decentralized machine learning proposals and show how they not only fail to improve privacy or robustness, but also increase the attack surface, resulting in less protection.
Dr. Mikel Rodriguez has spent over two decades working in the public and private sectors securing the application of Artificial Intelligence in high-stakes, consequential environments. At Google DeepMind, Mikel defines and leads the cross-functional AI Red and Blue ("ReBl") team, ensuring that foundational models are battle-tested with the rigor and scrutiny of real-world adversaries, and helps drive research and tooling that will make this red-blue mindset scalable in preparation for AGI. In his role as the Man