Outstanding second- and third-year undergraduate students and first-year Master's students in Informatics and other STEM disciplines discover cutting-edge research in computer science. Leading researchers engage participants in their areas of specialization through short courses, seminars, discussions, and informal gatherings.
16:30 – 17:15 Registration
17:15 – 17:45 Welcome and Introductions
17:45 – 18:30 Refreshments
20:15 Dinner
09:00 – 10:00 Computation for Content Creation – Lecture 1 (Abe Davis, Cornell)
10:00 – 11:00 Machine Learning for the Rest of Us – Lecture 1 (Keshav Pingali, UT Austin)
11:00 – 11:30 Coffee break
11:30 – 12:30 Computation for Content Creation – Lecture 2 (Abe Davis, Cornell)
12:30 – 15:15 Lunch break
15:15 – 16:00 Hilbert, Gödel, Turing: Computers ‘R’ Us – Part 1 (Alessandro Panconesi, Sapienza)
16:00 – 16:30 Coffee break
16:30 – 18:00 Question of the Day
20:30 Dinner
09:00 – 10:00 Computation for Content Creation – Lecture 3 (Abe Davis, Cornell)
10:00 – 11:00 Machine Learning for the Rest of Us – Lecture 2 (Keshav Pingali, UT Austin)
11:00 – 11:30 Coffee break
11:30 – 12:30 Computation for Content Creation – Lecture 4 (Abe Davis, Cornell)
12:30 – 15:15 Lunch break
15:15 – 16:00 The Impact of BSD Unix on Modern Computing and the Internet: Origins of the Open Software Movement (Ozalp Babaoglu, Università di Bologna)
16:00 – 16:30 Coffee break
16:30 – 18:00 Question of the Day
20:30 Dinner
08:30 – 09:30 Computation for Content Creation – Lecture 5 (Abe Davis, Cornell)
09:30 – 10:30 Machine Learning for the Rest of Us – Lecture 3 (Keshav Pingali, UT Austin)
10:30 – 11:15 Coffee break
11:15 – 12:00 Chatting about Transformers: an Introduction to Large Language Models – Part 1 (Gianfranco Bilardi, Università di Padova)
12:00 – 15:15 Lunch break
15:15 – 16:00 Elastic Computing: from Cloud to Edge and Beyond (Valeria Cardellini, Tor Vergata)
16:00 – 16:30 Coffee break
16:30 – 18:00 Question of the Day
20:30 Dinner
09:15 – 10:15 Machine Learning for the Rest of Us – Lecture 4 (Keshav Pingali, UT Austin)
10:15 – 11:00 Forecasting Rhapsody: Algorithmic Models and Hybrids for Time Series Forecasting – Part 1 (Vittorio Maniezzo, Università di Bologna)
11:00 – 11:30 Coffee break
11:30 – 12:15 Chatting about Transformers: an Introduction to Large Language Models – Part 2 (Gianfranco Bilardi, Università di Padova)
12:15 – 15:15 Lunch break
15:15 – 16:00 Hilbert, Gödel, Turing: Computers ‘R’ Us – Part 2 (Alessandro Panconesi, Sapienza)
16:00 – 16:30 Coffee break
16:30 – 18:00 Question of the Day
09:15 – 10:15 Machine Learning for the Rest of Us – Lecture 5 (Keshav Pingali, UT Austin)
10:15 – 11:00 Forecasting Rhapsody: Algorithmic Models and Hybrids for Time Series Forecasting – Part 2 (Vittorio Maniezzo, Università di Bologna)
11:00 – 11:30 Coffee break
11:30 – 12:15 Digital Innovation through Secure and Resilient Services (Michele Colajanni, Università di Bologna)
No classes
09:00 – 10:00 Simple and Fast: Algorithm Design via Gradient Descent – Lecture 1 (Lorenzo Orecchia, UChicago)
10:00 – 11:00 Anatomy of Cloud File Systems – Lecture 1 (Matteo Frigo, Google)
11:00 – 11:30 Coffee break
11:30 – 12:30 Of Mice and Men (and Bytes) – Lecture 1 (Samantha Riesenfeld, UChicago)
12:30 – 15:15 Lunch break
15:15 – 16:00 When Graphs are Large (Paolo Boldi, Università di Milano)
16:00 – 16:30 Coffee break
16:30 – 17:30 Il Mondo Ortogonale
20:30 Dinner
09:00 – 10:00 Simple and Fast: Algorithm Design via Gradient Descent – Lecture 2 (Lorenzo Orecchia, UChicago)
10:00 – 11:00 Anatomy of Cloud File Systems – Lecture 2 (Matteo Frigo, Google)
11:00 – 11:30 Coffee break
11:30 – 12:30 Of Mice and Men (and Bytes) – Lecture 2 (Samantha Riesenfeld, UChicago)
12:30 – 15:15 Lunch break
15:15 – 16:00 Machine Learning Applications in Medicine: from Theory to Practice and Back (Barbara Di Camillo, Università di Padova)
16:00 – 16:30 Coffee break
16:30 – 17:30 Question of the Day
20:30 Dinner
09:00 – 10:00 Simple and Fast: Algorithm Design via Gradient Descent – Lecture 3 (Lorenzo Orecchia, UChicago)
10:00 – 11:00 Anatomy of Cloud File Systems – Lecture 3 (Matteo Frigo, Google)
11:00 – 11:30 Coffee break
11:30 – 12:30 Of Mice and Men (and Bytes) – Lecture 3 (Samantha Riesenfeld, UChicago)
12:30 – 15:15 Lunch break
15:15 – 16:00 Using Multi-Agent Models to Simulate Tumor Microenvironment (Barbara Di Camillo, Università di Padova)
16:00 – 16:30 Coffee break
16:30 – 18:00 Question of the Day
20:30 Dinner
09:00 – 10:00 Simple and Fast: Algorithm Design via Gradient Descent – Lecture 4 (Lorenzo Orecchia, UChicago)
10:00 – 11:00 Anatomy of Cloud File Systems – Lecture 4 (Matteo Frigo, Google)
11:00 – 11:30 Coffee break
11:30 – 12:30 Of Mice and Men (and Bytes) – Lecture 4 (Samantha Riesenfeld, UChicago)
12:30 – 15:15 Lunch break
15:15 – 16:00 The Mathematics of Machine Learning – Part 1 (Nicolò Cesa-Bianchi, Università di Milano)
16:00 – 16:30 Coffee break
16:30 – 17:30 Question of the Day
09:00 – 10:00 Simple and Fast: Algorithm Design via Gradient Descent – Lecture 5 (Lorenzo Orecchia, UChicago)
10:00 – 11:00 Anatomy of Cloud File Systems – Lecture 5 (Matteo Frigo, Google)
11:00 – 11:30 Coffee break
11:30 – 12:30 Of Mice and Men (and Bytes) – Lecture 5 (Samantha Riesenfeld, UChicago)
12:30 – 15:15 Lunch break
15:15 – 16:00 The Mathematics of Machine Learning – Part 2 (Nicolò Cesa-Bianchi, Università di Milano)
16:00 – 16:15 Closing remarks
16:15 – 16:45 Coffee
Attendance is by invitation only. Required application materials include information about your undergraduate/graduate academic record, and a concise description of your key accomplishments to date.
You can apply by filling out this form.
There is no registration fee to attend, and BOOST will cover food and lodging for all attendees.
Who should apply?
Outstanding second- and third-year undergraduate and first year Masters students in Informatics and other STEM disciplines.
What is the deadline for applications? When will I hear back?
Applicants submitting their applications by July 6 will receive notification of their status by July 13. Submissions received after July 6 will be evaluated on a rolling basis until all positions are filled.
What if I am available for a subset of the days of the school? Can I attend partially?
Unfortunately, no. Students are expected to commit for the entire duration of the school.
Where are classes held?
In Oropa, classes will be held in the Sala Convegni of the Sanctuary.
What kind of accommodations will there be?
Students will be hosted in the Monte Mucrone rooms within the Sanctuary’s hospitality facilities. The typical accommodation is a double room with private bath and Wi-Fi.
What is the earliest arrival and latest departure date?
Check-in at the Sanctuary’s hospitality facilities will be available from 2:00 pm to 7:00 pm on July 20. Checkout will be after lunch on July 25.
Do I need to bring a laptop?
Yes. Courses may include coding exercises.
Which language is spoken at the school?
All instruction will be in English.
How many students will be attending?
Approximately 70.
If the above does not address your question, you can contact the organizers.
Ozalp is a full professor of Computer Science at the Università di Bologna. Previously, he was an Associate Professor in the Computer Science Department at Cornell University. He received his PhD in Computer Science from the University of California, Berkeley in 1981. His virtual memory extensions for AT&T’s Unix system, developed during his doctoral work at Berkeley, became the basis for a long series of “BSD Unix” distributions. He received the Sakrison Memorial Award in 1982 (jointly with Bill Joy), the UNIX International Recognition Award in 1989, and the USENIX Association Lifetime Achievement Award in 1993. In 2002 he was named an ACM Fellow. In 2007 he co-founded the IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) series. He has served on the editorial boards of ACM Transactions on Computer Systems, ACM Transactions on Autonomous and Adaptive Systems, and Springer Distributed Computing.
He is President of ELICSIR and of the Council of the Scuola Ortogonale.
Gianfranco is a full professor in the Department of Information Engineering at the Università di Padova and an Academic Visitor at the IBM T.J. Watson Research Center. Previously, he was an Assistant Professor of Computer Science at Cornell University. In 1985 he received his PhD in Electrical Engineering from the University of Illinois Urbana-Champaign. At the University of Padova he has served on the Board of Directors and as vice-rector for IT infrastructure. His research interests span parallel algorithms and architectures, high-performance computing, theory of computation, formal languages, VLSI, and signal processing.
He is a member of the ELICSIR board of directors and of the Council of the Scuola Ortogonale.
Paolo is a full professor of computer science at the Università Statale di Milano, where he has coordinated the PhD program and the undergraduate degree in computer science. His research focuses on algorithms for the web and social networks, with notable results including the experimental confirmation of the six-degrees-of-separation phenomenon on Facebook and advanced algorithms for web graph compression. He has developed software widely used by the international academic community and has chaired leading international conferences such as WWW, WSDM, and ACM Web Science. He has received three Yahoo! Faculty Awards and a Google Focused Award.
For ELICSIR he is a mentor of the Scuola Ortogonale.
Valeria Cardellini is a full professor of Computer Engineering at the Università di Roma Tor Vergata. Her research interests include distributed software systems, in particular Cloud and Edge computing. She is co-author of more than one hundred publications in international journals and conferences and serves on the editorial boards of IEEE Transactions on Parallel and Distributed Systems and Elsevier’s Journal of Parallel and Distributed Computing.
In ELICSIR she is a mentor of the Scuola Ortogonale.
Nicolò Cesa-Bianchi is Professor of Computer Science at Università degli Studi di Milano and holds a joint appointment at Politecnico di Milano. His main research interests are the design and analysis of machine learning algorithms for online learning, sequential decision-making, and graph analytics. He is co-author of the monographs “Prediction, Learning, and Games” and “Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems”. He served as President of the Association for Computational Learning and co-chaired the program committees of some of the most important machine learning conferences, including NeurIPS, COLT, and ALT. He is the recipient of a Google Research Award, a Xerox Foundation Award, a Criteo Faculty Award, a Google Focused Award, and an IBM Research Award. He is an ELLIS Fellow, a member of the ELLIS board, and co-director of the ELLIS program on Interactive Learning and Interventional Representations. He is a corresponding member of the Italian Academy of Sciences.
Michele Colajanni is a Full Professor at the Department of Computer Science and Engineering of the University of Bologna. He is also affiliated with the Bologna Business School and Johns Hopkins University, SAIS Europe. He graduated in Pisa and previously held positions at the University of Rome and the University of Modena. His research interests include cybersecurity, scalable architectures for big data management, and AI analytics. He founded the Research Center on Security and Safety and the Cyber Academy for ethical hackers. He has directed courses and master’s programs for universities, ministries, and companies. His scientific production includes more than 250 peer-reviewed articles, the direction of national and international projects, and numerous presentations at workshops, conferences, and invited lectures.
Abe Davis is an Assistant Professor in the Computer Science Department at Cornell University, where his research group works at the intersections of computer graphics, vision, and human-computer interaction. Abe earned his Ph.D. in EECS from MIT CSAIL, and his thesis won the MIT Sprowls Award for Outstanding PhD Dissertation in Computer Science as well as honorable mention for the ACM SIGGRAPH Outstanding Doctoral Dissertation Award. Abe was also named one of Forbes Magazine’s “30 under 30” and one of Business Insider’s “50 Scientists Who are Changing the World” and “8 Innovative Scientists in Tech and Engineering”. He won the “Most Practical SHM Solution for Civil Infrastructures” Award at IWSHM and received an NSF CAREER Award in 2024.
Barbara is a full professor of Computer Science in the Department of Information Engineering at the Università di Padova. Her research focuses on applying data mining and machine learning to the analysis of biological data for applications in bioinformatics and medicine, in particular the study of metagenomic and transcriptomic regulatory networks and the modeling of disease dynamics. At the University of Padova she leads the SysBioBig research group (Systems Biology and Bioinformatics Group).
In ELICSIR she is a mentor of the Scuola Ortogonale.
Matteo received his PhD from the Massachusetts Institute of Technology in 1999. His research interests include the theory and practice of parallel algorithms, multi-threaded systems, cache-oblivious algorithms, signal processing and, more recently, zero-knowledge proofs. He worked for over a decade in the cloud industry, personally designing storage and networking systems for some of the leading cloud platforms. For his research he has received major recognitions, including the Wilkinson Prize for Numerical Software in 1999, the ACM Most Influential PLDI Paper Award in 2008 and 2009, the SPAA Best Paper Award in 2009, and the IEEE FOCS Test of Time Award in 2019.
In ELICSIR he is a mentor of the Scuola Ortogonale.
Vittorio is a full professor of Computer Science in the Department of Computer Science of the Università di Bologna and is the author of over 100 international publications with nearly 40,000 citations on Google Scholar. He serves on the editorial boards of journals including OR Spectrum, Swarm Intelligence, Operational Research – An International Journal, Algorithms, and the International Journal of Applied Metaheuristic Computing. He has worked on heuristic algorithms for combinatorial optimization since his PhD, obtained in computer science at the Politecnico di Milano in 1993, first as one of the designers of the Ant System algorithm – which later evolved into Ant Colony Optimization (ACO) – and more recently as one of the promoters of the active matheuristics community (from 2006 to the present).
In ELICSIR he is a mentor of the Scuola Ortogonale.
Lorenzo is an associate professor in the Department of Computer Science at the University of Chicago. His research focuses on designing algorithms for fundamental computational challenges in machine learning and combinatorial optimization, combining ideas from continuous and discrete optimization in a single framework. He received his PhD in computer science from UC Berkeley in 2011 and taught applied mathematics at MIT until 2015. He has received prestigious recognitions including the SODA Best Paper Award in 2014 and the NSF CAREER Award in 2020. He has served on the program committees of leading conferences such as FOCS, SODA, and NeurIPS.
In ELICSIR he is a mentor of the Scuola Ortogonale.
Alessandro is a full professor of Computer Science at Sapienza University of Rome. He received his PhD in Computer Science from Cornell University. His research interests span everything algorithmic, with particular attention to randomized and distributed algorithms and, more recently, machine learning. He is President of BICI, the Bertinoro International Center for Informatics. He has received international recognition for his research, including the ACM Danny Lewin Best Student Paper Award, the Dijkstra Prize, faculty awards from IBM, Yahoo!, and Google and, twice, the Google Focused Award. He has served on the program committees of leading conferences including SODA, PODC, ICALP, WWW, and KDD, also taking on leadership roles. He is an associate editor of JCSS.
He is a member of the ELICSIR board of directors and of the Council of the Scuola Ortogonale.
Keshav Pingali is the W. A. “Tex” Moncrief Chair of Grid and Distributed Computing in the Department of Computer Science at the University of Texas at Austin, and a member of the Oden Institute for Computational Engineering and Sciences (ICES) at UT Austin. He has a PhD from MIT, and a B.Tech. from the Indian Institute of Technology, Kanpur, where he was awarded the President’s Gold Medal and the Lalit Narain Das Memorial Gold Medal. Pingali has made deep and wide-ranging contributions to many areas of parallel computing including programming languages, compilers, and runtime systems for multicore, manycore and distributed computers. His current research is focused on programming models and tools for high-performance graph computing.
Pingali is a Fellow of the IEEE, ACM, and AAAS, and a foreign member of the Academia Europaea. He received the IIT Kanpur Distinguished Alumnus Award in 2013, the 2023 IEEE CS Charles Babbage Award, the 2023 ACM/IEEE CS Ken Kennedy Award, and the 2024 ACM SIGPLAN Programming Languages Achievement Award. Between 2008 and 2011, he was the co-Editor-in-Chief of ACM Transactions on Programming Languages and Systems. He has served on many international committees including the NSF CISE Advisory Committee (2009-2012) and the Board of Directors of CoLab, a joint research initiative between the government of Portugal and UT Austin (2007-2017).
Samantha Riesenfeld is Assistant Professor in the University of Chicago Pritzker School of Molecular Engineering, with additional affiliations in the Department of Medicine, the Institute for Biophysical Dynamics, the Data Science Institute, the Comprehensive Cancer Center, and the Committee on Immunology. She is also a Chan-Zuckerberg Biohub Investigator and a faculty member of the NSF-Simons National Institute for Theory and Mathematics in Biology. She leads a highly interdisciplinary research group that develops and applies machine learning methods to investigate complex biological systems using genomic, transcriptomic, and multimodal data. Areas of focus include inflammatory immune responses and solid tumor cancers. Dr. Riesenfeld has a BA in mathematics and computer science from Harvard University and a PhD in theoretical computer science from UC Berkeley. She did postdoctoral training at the interface of machine learning, systems biology, and immunology at the Broad Institute of MIT and Harvard, Brigham and Women’s Hospital, and the Gladstone Institutes at UCSF. Her honors include an NIH F32 NRSA postdoctoral fellowship, a BroadIgnite postdoctoral award, and a Cancer Research Foundation Young Investigator Award.
In this talk, I will discuss the historical significance of Unix, which may be considered the “grandfather of all modern operating systems”. Since its creation at Bell Laboratories in the 1970s, Unix has blossomed into a wide family tree with many branches, one of which has come to be known as “BSD Unix”. Berkeley Software Distribution (BSD) Unix was a fork from the original Bell Laboratories Research Unix and was developed and distributed by the Computer Systems Research Group (CSRG) at the University of California, Berkeley from the late 1970s through the 1980s. During my PhD work at UC Berkeley, I was one of the architects of BSD Unix, which was a major factor in the rapid growth of the Internet through its built-in TCP/IP stack and has influenced numerous other modern operating systems including FreeBSD, NetBSD, OpenBSD, SunOS, Solaris, Mac OS X and iOS. During the 1980s and 90s, the Berkeley version of Unix became the standard in education and research, garnering development support from the Defense Advanced Research Projects Agency (DARPA), and was notable for introducing virtual memory and inter-networking. BSD Unix was widely distributed in source form so that others could learn from it and improve it; this style of software distribution has led to the open source movement, of which BSD Unix is now recognized to be one of the earliest examples.
Proposed in 2017, the Transformer has revolutionized the field of Large Language Models (LLMs), leading to applications that have fascinated the public worldwide, as well as raising intriguing scientific and philosophical questions on the nature of language and knowledge. The basic function of the transformer is to evaluate the probability distribution of the “next word” in a text, given the sequence of preceding words. This function can then be extended to language generation, in response to a given “prompt”.
The seminars will present the key ideas of Machine Learning (ML) that have been successfully combined in the transformer, such as tokenization, word and positional embedding, query-key-value attention, feedforward neural networks, backpropagation, and gradient descent. The algorithmic and architectural computing requirements will be considered. The wider implications of LLMs will be briefly discussed.
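As a purely illustrative sketch of the query-key-value attention mentioned above (my own, not part of the course material), the following NumPy snippet computes scaled dot-product attention for a single head over a handful of random token embeddings; the names and dimensions are arbitrary.

```python
# Illustrative sketch: scaled dot-product attention with plain NumPy.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each row of Q attends to all rows of K; the result mixes rows of V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # similarity of queries and keys
    weights = softmax(scores, axis=-1)        # one probability distribution per query
    return weights @ V                        # weighted average of the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                       # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))       # token embeddings (positions assumed added)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)       # one attention "head"
print(out.shape)                              # (5, 8): one mixed vector per token
```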
Graphs are powerful and versatile tools that find countless applications, from social networks to communication systems of various kinds. Handling large graphs poses a number of new challenges: even storing such graphs in main memory usually cannot be done naively and calls for more sophisticated approaches.
In this talk, I will touch on some of the techniques that have been proposed to compress and use very large graphs. Besides compression, I will discuss diffusion-based algorithms using probabilistic counters, with two applications: Milgram-like experiments on very large social networks and the computation of distance-based centrality indices.
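As a toy illustration of the diffusion idea (my own sketch, not the algorithms presented in the talk), the snippet below equips each node with a tiny mergeable Flajolet–Martin counter and repeatedly ORs in the neighbors' counters, so that after t rounds each node can estimate how many nodes lie within distance t. Real systems use many registers per node (HyperLogLog-style) to obtain accurate estimates.

```python
# Toy sketch: diffusion with mergeable probabilistic counters on a small path graph.
import hashlib

def fm_mask(x, bits=32):
    """Flajolet-Martin sketch of the singleton {x}: a 1 at the lowest-set-bit position of a hash."""
    h = int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:4], "big") or 1
    rho = (h & -h).bit_length() - 1
    return 1 << rho

def fm_estimate(mask):
    """Crude estimate of how many distinct items were ORed into the mask."""
    r = 0
    while (mask >> r) & 1:
        r += 1
    return 2 ** r / 0.77351                   # standard FM correction factor

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a 5-node path
masks = {v: fm_mask(v) for v in graph}

for t in range(1, 3):                         # two rounds of diffusion
    new = {v: masks[v] for v in graph}
    for v, neighbors in graph.items():
        for u in neighbors:
            new[v] |= masks[u]                # merge = bitwise OR, order-independent
    masks = new
    est = {v: round(fm_estimate(m), 1) for v, m in masks.items()}
    print(f"after round {t}: estimated ball sizes {est}")
```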
Elasticity is the degree to which a computing system is able to adapt to fluctuating demands by provisioning and de-provisioning resources in an autonomic manner. It represents a distinguishing feature of Cloud systems and services and becomes even more challenging in highly distributed environments such as the Edge. In this talk, I will discuss what elasticity is and how it can be achieved, also considering an architectural point of view. I will also talk about policies to drive elasticity, both from academia and industry.
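A minimal sketch of one such policy (my own illustration, not a system discussed in the talk): a threshold-based controller that adds a replica when observed utilization exceeds an upper bound and removes one when it falls below a lower bound.

```python
# Minimal threshold-based horizontal elasticity policy (illustrative only).
def scale(replicas, utilization, low=0.3, high=0.7, min_r=1, max_r=20):
    """Return the new replica count for one control-loop iteration."""
    if utilization > high and replicas < max_r:
        return replicas + 1          # scale out
    if utilization < low and replicas > min_r:
        return replicas - 1          # scale in
    return replicas                  # stay put inside the dead zone

replicas = 2
for u in [0.9, 0.85, 0.8, 0.5, 0.2, 0.1]:    # a synthetic utilization trace
    replicas = scale(replicas, u)
    print(f"utilization={u:.2f} -> replicas={replicas}")
```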
Machine learning is the main driving force behind the current AI revolution. To provide a solid mathematical foundation to learning systems, we must formally characterize what a machine can learn and what is the minimal amount of training data needed to achieve a desired performance. In this talk, we will show some fundamental results concerning the mathematics of machine learning, stressing their potential and limitations.
Innovation through digitalization represents an inevitable and shared strategy. Equally evident is the fragility of a digital world where all processes, services and supply chains are supported by interconnected digital systems. Hence, we should move from a retrospective view in which cybersecurity was perceived as an obstacle to business to an ambitious perspective in which we lead organizations towards digital services that embed cybersecurity and resilience by design. This route will become even more important as more businesses, industries and services are driven by digital data and AI. It is important to note that similar perspectives on guarantees of trust and service continuity are also reflected in the most recent European and US regulations.
Computers have had a tremendous impact on the ways that we create and consume content. Whether that content is text, digital media (e.g., images, video, and audio), or even tangible manufactured objects, digital tools now play major roles in how we build, capture, or develop most of the things we create. This short course will explore many of those roles. The lectures draw heavily from a course by the same name that I teach for computer science graduate students at Cornell.
The tentative topics include:
Representing content: How do we represent different types of content? How might the representation that we expose to the human user of a computational tool differ from the internal representation? What should we consider when designing a representation?
What are computers good at?: The value of a computational tool hinges on being able to leverage certain advantages of computation. We will discuss what computers are (and are not) especially good at, and how these strengths are leveraged in computational tools.
Representing natural signals: We will talk a bit about how to represent and manipulate visual and auditory signals. In other words, an introductory preview of some foundational concepts in vision, graphics, and audio.
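As a small illustration of the last topic (my own sketch, not course material), the snippet below shows the same short audio signal in two representations, samples in time and magnitudes in frequency, using NumPy's FFT; the frequency view makes the two sinusoidal components explicit.

```python
# Illustrative sketch: a time-domain signal and its frequency-domain representation.
import numpy as np

sample_rate = 8000                                    # samples per second
t = np.arange(0, 0.5, 1 / sample_rate)                # half a second of "audio"
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(signal)                        # frequency-domain representation
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]      # the two strongest components
print(sorted(float(p) for p in peaks))                # -> [440.0, 880.0]
```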
In this seminar, I will briefly introduce machine learning, explaining the unique characteristics of its applications in medicine and biology. Specifically, I will focus on the ability of algorithms to generalize and identify reproducible biomarkers. I will then explore methodological aspects related to feature selection that facilitate the discovery of robust biomarkers. Additionally, I will emphasize the importance of explainability in ensuring that machine learning models are transparent and their decisions are understandable, which is crucial for their acceptance and trust in the medical and biological fields.
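As a small illustration of why reproducibility matters (my own sketch, not from the seminar), the snippet below selects the top-k features by correlation with the label on several bootstrap resamples of a synthetic dataset and measures how stable the selected feature set is.

```python
# Illustrative sketch: stability of simple correlation-based feature selection.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 80, 200, 10
X = rng.normal(size=(n, d))
y = X[:, :3].sum(axis=1) + rng.normal(0, 1.0, n)     # only the first 3 features matter

def top_k(X, y, k):
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])                # indices of the k best-correlated features

selections = []
for _ in range(20):                                  # 20 bootstrap resamples
    idx = rng.integers(0, n, n)
    selections.append(top_k(X[idx], y[idx], k))

pairwise = [len(a & b) / k for a in selections for b in selections if a is not b]
stable = sorted(int(f) for f in set.intersection(*selections))
print("features selected in every resample:", stable)
print(f"average overlap between selected sets: {np.mean(pairwise):.2f}")
```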
Multi-agent models are simulation systems composed of multiple autonomous entities, called agents, that interact with each other within a common environment. These models are used to represent and analyze complex behaviors and dynamics of systems where multiple actors, or agents, act and react reciprocally. In this seminar, I will demonstrate how they can be used to simulate the tumor microenvironment, where different types of cells have unique properties and behaviors. Each cell is represented by an autonomous agent, showcasing how this approach can reveal emergent properties of the system.
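A toy sketch of the idea (my own, far simpler than the models in the seminar): tumor-cell agents on a grid divide into free neighboring sites while immune-cell agents kill adjacent tumor cells, and the global growth-versus-containment pattern emerges from purely local rules.

```python
# Toy agent-based model: tumor agents divide, immune agents kill neighbors.
import random

random.seed(1)
SIZE = 20
grid = {}                                             # (x, y) -> "T" (tumor) or "I" (immune)
for _ in range(8):                                    # scatter a few immune agents
    grid[(random.randrange(SIZE), random.randrange(SIZE))] = "I"
grid[(10, 10)] = "T"                                  # one tumor seed

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]]

for step in range(30):
    for pos, kind in list(grid.items()):
        if pos not in grid:                           # agent was killed earlier this step
            continue
        if kind == "T" and random.random() < 0.5:     # tumor cell tries to divide
            free = [n for n in neighbors(*pos) if n not in grid]
            if free:
                grid[random.choice(free)] = "T"
        elif kind == "I":                             # immune cell kills one adjacent tumor cell
            targets = [n for n in neighbors(*pos) if grid.get(n) == "T"]
            if targets:
                del grid[random.choice(targets)]
    if step % 10 == 9:
        tumor = sum(1 for k in grid.values() if k == "T")
        print(f"step {step + 1}: {tumor} tumor cells")
```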
We discuss the architecture and the techniques employed by real-world large-scale storage systems, with an emphasis on file storage. We first discuss the general organization of such systems. We then dive deep into certain problems that need to be solved for the system to work: 1) how to replicate data reliably and consistently; 2) how to maintain transactional invariants across different parts of the system; 3) how to build a scalable ordered map; 4) how to map the file abstraction into the ordered map; and 5) erasure-coding techniques for efficient storage utilization. In addition, we discuss techniques for managing congestion, which is an underappreciated problem at all layers of the system.
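As a minimal illustration of point (4) above (my own sketch, not the systems covered in the lectures), the snippet below stores each fixed-size block of a file under the key (file_id, block_index) in an in-memory ordered map, so that a sequential read becomes an ordered range scan; in a real system the map would be distributed and range-partitioned.

```python
# Illustrative sketch: mapping the file abstraction onto an ordered map.
import bisect

BLOCK = 4                                    # tiny block size, for illustration only

class OrderedMap:
    """Stand-in for a scalable ordered map (range-partitioned in real systems)."""
    def __init__(self):
        self.keys, self.vals = [], []
    def put(self, key, val):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.vals[i] = val
        else:
            self.keys.insert(i, key)
            self.vals.insert(i, val)
    def scan(self, lo, hi):
        i = bisect.bisect_left(self.keys, lo)
        while i < len(self.keys) and self.keys[i] < hi:
            yield self.keys[i], self.vals[i]
            i += 1

store = OrderedMap()

def write_file(file_id, data):
    for i in range(0, len(data), BLOCK):
        store.put((file_id, i // BLOCK), data[i:i + BLOCK])   # key = (file, block index)

def read_file(file_id):
    blocks = store.scan((file_id, 0), (file_id + 1, 0))       # ordered range scan
    return b"".join(val for _, val in blocks)

write_file(7, b"hello ordered maps")
print(read_file(7))                          # b'hello ordered maps'
```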
Knowledge of the future has always been a desire of mankind. Ancient approaches were quite diverse, though not very reliable, a diversity that can also be found in current approaches to time series forecasting. The talk will sketch the variety of backgrounds that have led to state-of-the-art forecasting algorithms, ranging from statistics to plain multilayer perceptrons, from nonlinear optimization to transformer-based models. Furthermore, the broad landscape of applications leads to an interest in the design of combined models.
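As a tiny illustration of the hybrid idea (my own sketch, not a model from the talk), the snippet below combines a seasonal-naive statistical baseline with a learned correction of its residuals; the residual model, here just a per-season mean, stands in for the machine-learning component of a real hybrid.

```python
# Illustrative sketch: seasonal-naive baseline plus a residual correction.
import numpy as np

rng = np.random.default_rng(0)
season = 12
t = np.arange(120)
series = 10 + 0.1 * t + 3 * np.sin(2 * np.pi * t / season) + rng.normal(0, 0.5, len(t))

train, test = series[:-season], series[-season:]

baseline = train[-season:]                                   # seasonal-naive: repeat last season
residuals = train[season:] - train[:-season]                 # errors of seasonal-naive on train
correction = residuals.reshape(-1, season).mean(axis=0)      # average correction per season slot

hybrid = baseline + correction
print("naive  MAE:", np.abs(test - baseline).mean().round(3))
print("hybrid MAE:", np.abs(test - hybrid).mean().round(3))  # noticeably lower
```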
In this minicourse, I will explore a recent surprising trend in the field of algorithms: the fastest methods for solving problems on discrete structures, such as graphs, are given by simulating continuous dynamics, e.g., dropping a ball in a potential well and diffusing heat in a conductive medium. Topics will include gradient descent and coordinate descent for linear regression problems, basics of spectral graph theory, combinatorial preconditioning, and variational methods for algorithm design.
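As a minimal sketch of the first topic (my own, under the obvious assumptions), the snippet below runs plain gradient descent on the least-squares objective f(w) = ||Xw − y||² / (2n) with the standard 1/L step size.

```python
# Illustrative sketch: gradient descent for linear regression.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
step = n / np.linalg.norm(X, 2) ** 2          # 1/L, with L the smoothness constant of f
for _ in range(500):
    grad = X.T @ (X @ w - y) / n              # gradient of the least-squares loss
    w -= step * grad

print(np.linalg.norm(w - w_true))             # small: gradient descent recovered w_true
```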
In a landmark 1936 paper, Alan Turing famously introduced the concept now known as the Turing machine, a mathematical abstraction that rigorously defines the intuitive and yet elusive notion of an algorithm. His paper presented several revolutionary ideas that profoundly and enduringly influenced the development of science and technology, serving as true harbingers of the computer revolution. While the technical definition of Turing machines may be familiar to many computer science students, it is only by considering Turing’s ideas within their proper cultural context that we can fully appreciate their elegance and power. In this popular science talk, I will attempt to do just that.
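For readers who want the technical definition alongside the story, here is a minimal Turing machine simulator (my own illustrative sketch, not part of the talk): a transition table run on a sparse tape, implementing binary increment.

```python
# Illustrative sketch: a Turing machine as a transition table on a sparse tape.
def run(transitions, tape, state="start", head=0):
    tape = dict(enumerate(tape))                   # sparse tape; unwritten cells are blank "_"
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Increment a binary number, head starting at the rightmost bit:
# in state "carry", turn 1s into 0s moving left until a 0 or a blank is found.
increment = {
    ("start", "0"): ("halt", "1", "L"),
    ("start", "1"): ("carry", "0", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run(increment, "1011", head=3))              # 1011 + 1 -> 1100
```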
It is likely that machine learning (ML) will transform the way we do science and engineering as radically as computers did 50 years ago. Therefore, just as Computer Science (CS) students need to know programming regardless of their area of specialization, they will soon need to know ML to stay relevant as the CS field is transformed by ML. However, most ML presentations are geared for researchers specializing in ML, and it can be difficult for students in other areas to extract the key intuitions and ideas in this rapidly evolving field. In this series of lectures, we use pictures and the very intuitive notion of paths in directed graphs to explicate the key ideas in deep neural networks, convolutional neural networks, recurrent neural networks, and reinforcement learning (RL).
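As a tiny illustration of the paths viewpoint (my own sketch, deliberately restricted to a linear network so the correspondence is exact), each output of a two-layer linear network equals a sum over directed input-to-output paths of the product of the edge weights along the path.

```python
# Illustrative sketch: layered matrix computation vs. sum over directed paths.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))      # edges: 3 inputs -> 4 hidden units
W2 = rng.normal(size=(2, 4))      # edges: 4 hidden units -> 2 outputs
x = rng.normal(size=3)

via_matrices = W2 @ (W1 @ x)      # the usual layered computation

via_paths = np.zeros(2)
for i, h, o in product(range(3), range(4), range(2)):    # every input->hidden->output path
    via_paths[o] += x[i] * W1[h, i] * W2[o, h]           # input value times the path's weights

print(np.allclose(via_matrices, via_paths))              # True: the two views agree
```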
Thanks to recent experimental technologies based on DNA sequencing, genomic data have increased dramatically in size, resolution, and biological scope. Together with bigger and novel types of data come new computational and statistical challenges, as well as more ambitious scientific goals. For example, can we use these noisy, high-dimensional data to demystify the inner workings of the mammalian immune system? Pin down the causes of autoimmune diseases? Improve targeted therapies for different cancers? In this minicourse, I will describe the role of data science in answering these questions, including vignettes from my own research. We will cover some of the unique challenges of extracting insights from these data, current computational strategies, and open problems on the horizon.
Oratorio di San Filippo Neri, Bologna
© ELICSIR Foundation ETS 2025 – All rights reserved