A Stochastic General Equilibrium Approach to an Infrastructure Planning Problem
Jan 7, 2019
Julio Deride, Sandia National Laboratories
In this talk, we present a problem of strategic planning for capacity investment and production, in a competitive market, over a network structure. We model it as a general equilibrium problem, i.e., a collection of multi-agent optimization problems with an equilibrium constraint (supply meets demand), and solve it by means of two solution techniques: i) for the deterministic case, we propose a maxinf optimization approach and solve it using an augmented Walrasian approximation, and ii) for the stochastic case, we provide a solution method based on an alternating direction method of multipliers (ADMM). Finally, we illustrate the performance of the algorithms on an infrastructure planning problem for EV fast-charging stations over a small network.
Algorithmic Fairness and its Relevance to Facebook
December 6, 2018
Dr. Bobbie Chern, Facebook
Machine learning is playing an increasingly large role in many decisions that affect our lives: loan approvals, employment decisions, and parole, to name a few. While it is generally accepted that human decisions are susceptible to discrimination and bias, there is hope that decisions driven by machine learning algorithms will not face such problems. Unfortunately, numerous studies have shown that machine learning systems can also inadvertently discriminate against minority groups or other protected classes; after all, the training data is often derived from human decisions. The first part of this talk will cover some of the basics of algorithmic fairness and the complexities involved: the multiple notions of fairness, their incompatibility with one another, and how to correct for such biases. The second part of this talk will discuss some of the steps Facebook is taking to ensure the thoughtful and responsible development of AI.
Bobbie Chern is a research scientist at Facebook. His research focuses on algorithmic fairness and bias, and applied machine learning. Prior to joining Facebook in 2017, he was a research engineer at Yahoo! He completed his PhD in Electrical Engineering at Stanford University in 2016 with Persi Diaconis.
Understanding pathological semidefinite programs: how elementary row operations help
Oct 17, 2018
G. Pataki, Univ. of North Carolina, Chapel Hill
Semidefinite programs (SDPs) -- optimization problems with linear constraints and semidefinite matrix variables -- are some of the most useful and versatile optimization problems of the last three decades. They appear in combinatorial optimization, engineering, machine learning, to name a few areas. However, SDPs are often pathological: they may not attain their optimal value, their optimal value may differ from that of their dual, and they may also be infeasible, but have zero distance to the set of feasible instances. In the talk I will show that many of these pathologies can be understood using a surprisingly simple tool: we can transform SDPs to a standard form that makes the pathology trivial to recognize. The transformation mostly uses elementary row operations coming from Gaussian elimination. The standard forms have computational uses, for example, in several cases they help us recognize infeasibility. The talk will rely only on knowledge of elementary linear algebra, and some basic convex analysis, which I will introduce. Some of this work is joint with Minghui Liu, Quoc Tran-Dinh and Yuzixuan Zhu.
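As a small illustrative sketch (not from the talk), one classic pathological SDP minimizes x subject to a 2x2 matrix constraint whose infimum is zero but never attained; the example and numbers below are my own:

```python
# A classic pathological SDP: minimize x subject to
#   [[x, 1],
#    [1, y]]  being positive semidefinite.
# The matrix is PSD iff x >= 0, y >= 0, and x*y >= 1, so the
# infimum of x over the feasible set is 0, but no feasible
# point attains it: the optimal value is not achieved.

def is_psd_2x2(a, b, c):
    """PSD test for the symmetric matrix [[a, b], [b, c]]."""
    return a >= 0 and c >= 0 and a * c - b * b >= 0

# (0, y) is infeasible for every y: the off-diagonal 1 forces x*y >= 1.
assert all(not is_psd_2x2(0.0, 1.0, y) for y in [1.0, 10.0, 2.0**40])

# Yet x can be made arbitrarily small: (eps, 1/eps) is feasible.
# (Powers of two keep the product exact in floating point.)
for eps in [1.0, 2.0**-10, 2.0**-30]:
    assert is_psd_2x2(eps, 1.0, 1.0 / eps)
```

This is the kind of pathology the talk's standard forms are designed to make trivially recognizable.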
Professor Gabor Pataki received his PhD in Algorithms, Combinatorics and Optimization from Carnegie Mellon University in 1996. After being a postdoctoral research scientist at Columbia, he joined UNC Chapel Hill in 2000. His research spans discrete and continuous optimization, in particular convex analysis and convex optimization, with a focus on semidefinite programming. One of his main results is on a classical problem in convex analysis: a characterization of when the linear image of a closed convex cone is closed.
Search Theory and Machine Scheduling
July 26, 2018
Thomas Lidbetter, Management Science and Information Systems Department, Rutgers Business School
We explore the connections between search theory and machine scheduling by examining a little-known ordering problem proposed by N. Pisaruk (1992). A target is hidden in one of a finite number of locations S, which must be searched in some order. The cost of searching an initial segment A of a given ordering is f(A) and the probability the target is located in A is g(A), where f is a non-decreasing submodular function and g is a non-decreasing supermodular function. We wish to find the ordering that discovers the target in minimal expected cost. The problem has applications to both machine scheduling and search theory, but is NP-hard. We present a new 2-approximation algorithm that generalizes well-known results in scheduling theory and establishes new ones. We also consider a version of the problem when nothing is known about the hiding probability, in which case we seek the randomized ordering that minimizes the expected cost in the worst case; equivalently this can be viewed as a zero-sum game.
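A toy sketch of the objective (my own illustrative numbers, not from the talk): in the modular special case where f and g are sums of per-location costs and probabilities (hence trivially submodular and supermodular), brute force recovers the classic c_i / p_i ratio rule from single-machine scheduling:

```python
# Hypothetical modular instance of the ordering problem: f(A) is the sum
# of search costs in A, g(A) the sum of hiding probabilities in A.
# Expected cost of an ordering (s_1, ..., s_n) is
#   sum_k g({s_k}) * f({s_1, ..., s_k}).
from itertools import permutations

costs = {"a": 3.0, "b": 1.0, "c": 2.0}   # f on singletons
probs = {"a": 0.2, "b": 0.5, "c": 0.3}   # g on singletons

def expected_cost(order):
    total, cum = 0.0, 0.0
    for loc in order:
        cum += costs[loc]          # f of the initial segment searched so far
        total += probs[loc] * cum  # target is here with probability g({loc})
    return total

best = min(permutations(costs), key=expected_cost)
ratio_rule = tuple(sorted(costs, key=lambda s: costs[s] / probs[s]))
assert best == ratio_rule  # brute force agrees with the cost/probability rule
```

The general submodular/supermodular problem the talk treats is NP-hard, which is why a 2-approximation is the natural goal.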
Dr Lidbetter is an assistant professor in the Management Science and Information Systems Department in Rutgers Business School. His work is largely concerned with game theoretic models and their applications to problems of national security, particularly search games, which he studies from a game theoretic and algorithmic perspective. He is also interested in understanding and exploiting links between search theory and other areas of operations research such as scheduling and throughput maximization. Dr Lidbetter received his doctorate in mathematics from the London School of Economics, and won the 2013 Doctoral Dissertation Award from the UK Operations Research Society for the most distinguished body of research leading to a doctorate in the field of Operational Research.
Developing and implementing cutting-edge learning technologies – the case of Project Management
May 3, 2018
Professor Avraham Shtub, Technion
The process of educating and training project managers presents a dilemma. The project management discipline is, almost by definition, an applied discipline. A project manager grows by working on projects and confronting and dealing with unstructured and unplanned situations. Project management is not a discipline with a heavy body of conceptual material like mathematics or physics. Therefore, project management cannot be taught with a 'cookbook' approach, as it is a discipline that seeks to manage uncertainty. Experiential instruction is a promising approach: training in the core project management methodologies is paired with hands-on experience that supplements the material on methodology. Using realistic project scenarios, trainees experiment for themselves with the various project management methodologies, and experience first-hand the trade-offs among cost, schedule, risk, and scope that a project manager must weigh.
Professor Avraham Shtub holds the Stephen and Sharon Seiden Chair in Project Management. He has a B.Sc in Electrical Engineering from the Technion - Israel Institute of Technology (1974), an MBA from Tel Aviv University (1978) and a Ph.D in Management Science and Industrial Engineering from the University of Washington (1982). He is the recipient of the Institute of Industrial Engineering 1995 "Book of the Year Award" for his book "Project Management: Engineering, Technology and Implementation" (coauthored with Jonathan Bard and Shlomo Globerson), Prentice Hall, 1994. He is the recipient of the Production Operations Management Society's Wick Skinner Teaching Innovation Achievements Award for his book "Enterprise Resource Planning (ERP): The Dynamics of Operations Management". His books on Project Management were published in English, Hebrew, Greek and Chinese. He is the recipient of the 2008 Project Management Institute Professional Development Product of the Year Award for the training simulator "Project Team Builder – PTB". He is the recipient of the Institute of Industrial Engineering 2017 "Book of the Year Award" for his book "Introduction to Industrial Engineering" (co-authored with Yuval Cohen). Prof. Shtub was a Department Editor for IIE Transactions and served on the Editorial Boards of the Project Management Journal, The International Journal of Project Management, IIE Transactions and the International Journal of Production Research. He was a faculty member of the department of Industrial Engineering at Tel Aviv University from 1984 to 1998, where he also served as chairman of the department (1993-1996). He joined the Technion in 1998 and was the Associate Dean and head of the MBA program. He has been a consultant to industry in the areas of project management, training by simulators and the design of production-operation systems. He was invited to speak at special seminars on Project Management and Operations in Europe, the Far East, North America, South America and Australia. Professor Shtub visited and taught at Vanderbilt University, The University of Pennsylvania, Korean Institute of Technology, Bilkent University in Turkey, Otago University in New Zealand, Yale University, Universidad Politécnica de Valencia, University of Bergamo in Italy and Politecnico di Milano in Italy.
Multimedia & Autism and Privacy-Preserving Distributed Deep Learning
April 24, 2018
Sen-ching Samson Cheung, University of Kentucky
In this talk, I will cover two topics in visualization and cybersecurity. The first one is on the application of visualization technology in autism. Autism Spectrum Disorder (ASD) is a highly prevalent developmental disorder that affects 1 in 68 children in the United States. The key to diagnosing and treating ASD is the examination and modification of specific behaviors associated with ASD, where multimedia and visualization can play a vital role. From identifying behavioral markers like repetitive movements and eye gaze patterns, to using virtual reality systems and robots for social skill interventions, many multimedia systems have been developed in recent years to make diagnosis more objective, to enable easy customization of treatments, and to improve learning efficiency of different skills. In the first part of the talk, I will review our research in using virtual and augmented reality systems as well as wearable devices for interventions and assistive platforms in ASD.
The second topic is on protecting privacy in distributed deep learning. The key promise of Internet-of-Things (IoT) and deep learning is to build data analytics solutions using powerful machine learning models over diverse data, collected from millions of embedded devices. While there have been many distributed deep learning (DDL) techniques, the main impediment is protecting sensitive personal information. Existing approaches such as fully homomorphic encryption and differential privacy are computationally prohibitive or have been shown to leak sensitive information. In this part of the talk, I will present our recent work in privacy-preserving transformations (PPT), which are computationally efficient transformations applied at local data sites. These transformations are designed to protect privacy of the data, while supporting centralized learning at the cloud server using existing DL software infrastructure. Combined with powerful encrypted domain processing and game-theoretical collusion deterrence schemes, we show that cyberattacks at both the local data sites and the cloud servers can be effectively thwarted.
Sen-ching "Samson" Cheung is a Professor of Electrical and Computer Engineering and the director of Multimedia Information Laboratory (Mialab) at University of Kentucky (UKY), Lexington, KY, USA. He is the endowed Blazie Family Professor of Engineering. He is currently a visiting professor at the Department of Electrical and Computer Engineering & the MIND Institute (Medical Investigation of Neurodevelopmental Disorders) at UC Davis. Before joining UKY in 2004, he was a postdoctoral researcher with the Sapphire Scientific Data Mining Group at Lawrence Livermore National Laboratory that won the R&D 100 Award in 2006. He received his Ph.D. degree from University of California, Berkeley in 2002. He is a senior member of both IEEE and ACM. He has had the fortune of working with a team of talented students and collaborators at Mialab in a number of areas in multimedia including video surveillance, privacy protection, encrypted domain signal processing, 3D data processing, virtual and augmented reality as well as computational multimedia for autism therapy. More details about current and past research projects at Mialab can be found at http://vis.uky.edu/mialab.
Distinguished Lecture Series
On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex
Professor Michael I. Jordan, University of California, Berkeley
Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. I discuss several recent, related results in this area: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time, Lagrangian/Hamiltonian/symplectic perspective, (2) a discussion of how to escape saddle points efficiently in nonconvex optimization, and (3) the acceleration of Langevin diffusion.
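A minimal numerical sketch of the acceleration phenomenon the talk analyzes (my own toy example, not from the lecture): on an ill-conditioned quadratic, Nesterov's method with the standard strongly-convex momentum constant beats plain gradient descent at the same step size.

```python
# f(x, y) = 0.5 * (x^2 + 100 * y^2): smooth with L = 100, strongly
# convex with mu = 1, so the condition number is 100.
def f(p):
    return 0.5 * (p[0] ** 2 + 100.0 * p[1] ** 2)

def grad(p):
    return (p[0], 100.0 * p[1])

L, mu = 100.0, 1.0
step = 1.0 / L
# standard momentum for the strongly convex setting:
beta = ((L / mu) ** 0.5 - 1) / ((L / mu) ** 0.5 + 1)

def gd(p, iters):
    for _ in range(iters):
        g = grad(p)
        p = (p[0] - step * g[0], p[1] - step * g[1])
    return p

def nesterov(p, iters):
    prev = p
    for _ in range(iters):
        # gradient step taken at the momentum "look-ahead" point v
        v = (p[0] + beta * (p[0] - prev[0]), p[1] + beta * (p[1] - prev[1]))
        g = grad(v)
        prev, p = p, (v[0] - step * g[0], v[1] - step * g[1])
    return p

start = (1.0, 1.0)
# same step size, same iteration budget, far smaller error when accelerated
assert f(nesterov(start, 200)) < f(gd(start, 200)) < f(start)
```

The continuous-time view in the talk interprets the look-ahead step as a discretization of a second-order ODE derived from a Lagrangian/Hamiltonian formulation.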
Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive and biological sciences. Prof. Jordan is a member of the National Academy of Sciences and a member of the National Academy of Engineering. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He received the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009.
Perspectives on Stochastic Modeling
Professor Peter Glynn, Stanford University
Uncertainty is present in almost every decision-making environment. In many application settings, the explicit quantitative modeling of uncertainty clearly improves decision-making. In this talk, I will discuss some perspectives on such stochastic models and their application. Specifically, I will talk about the interplay between modeling, data, and computation, and some of the lessons learned that are relevant to building models that can add value and insight.
Peter W. Glynn is the Thomas Ford Professor in the Department of Management Science and Engineering (MS&E) at Stanford University. He is a Fellow of INFORMS and of the Institute of Mathematical Statistics, and has been co-winner of Best Publication Awards from the INFORMS Simulation Society in 1993, 2008, and 2016 and the INFORMS Applied Probability Society in 2009. He was the co-winner of the John von Neumann Theory Prize from INFORMS in 2010 and in 2012, he was elected to the National Academy of Engineering. His research interests lie in stochastic simulation, queueing theory, and statistical inference for stochastic processes.
The lecture will be recorded and posted.
Bayesian Search for Missing Aircraft
Lawrence D. Stone
In recent years there have been a number of highly publicized searches for missing aircraft such as the ones for Air France flight AF 447 and Malaysia Airlines flight MH 370.
Bayesian search theory provides a well-developed method for planning searches for missing aircraft, ships lost at sea, or people missing on land. The theory has been applied successfully to searches for the missing US nuclear submarine Scorpion, the SS Central America (ship of gold), and the wreck of AF 447. It is used routinely by the U.S. Coast Guard to find people and ships missing at sea.
This talk presents the basic elements of the theory. It describes how Bayesian search theory was used to locate the wreck of AF 447 after two years of unsuccessful search and discusses how it was applied to the search for MH 370. A crucial feature of Bayesian search theory is that it provides a principled method of combining all the available information about the location of a search object. This is particularly important in one-of-a-kind searches such as the one for AF 447 where there is little or no statistical data to rely upon.
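A toy sketch of the Bayes update at the heart of the theory (illustrative numbers of my own, not the AF 447 data): the search object sits in one of several cells with a prior distribution, and each unsuccessful search of a cell lowers that cell's probability and shifts belief toward the unsearched cells.

```python
# The object is in one of n cells with prior probabilities `prior`.
# Searching cell k detects the object, if present, with probability q.
# After an unsuccessful search of cell k, Bayes' rule multiplies that
# cell's probability by (1 - q) and renormalizes.

def update_after_failed_search(prior, k, q):
    """Posterior over cells given an unsuccessful search of cell k."""
    unnorm = [p * (1.0 - q) if i == k else p for i, p in enumerate(prior)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

prior = [0.6, 0.3, 0.1]   # cell 0 looks most promising a priori
posterior = update_after_failed_search(prior, 0, q=0.9)

# Belief shifts away from the searched-but-empty cell toward the others.
assert posterior[0] < prior[0] and posterior[1] > prior[1]
```

Repeated failed searches of the "obvious" area are exactly what eventually pushed the AF 447 posterior toward the region where the wreck was found.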
Applied Risk Analytics: Making Advanced Analytics More Useful
Dr. Tony Cox
Traditional operations research emphasizes finding a feasible decision that maximizes an objective function. In practice, how decisions affect the objective function, and even what decisions are feasible, are often initially unknown. Managing risks effectively usually requires using available data, however limited, to answer the following questions, and then improve the answers in light of experience:
- DESCRIPTIVE ANALYTICS: What’s happening? What has changed recently? What should we be worrying about?
- PREDICTIVE ANALYTICS: What will (probably) happen if we do nothing?
- CAUSAL ANALYTICS: What will (probably) happen if we take different actions or implement different policies? How soon are the consequences likely to occur, and how sure can we be?
- PRESCRIPTIVE ANALYTICS: What should we do next? How should we allocate available resources to explore, evaluate, and implement different actions or policies in different locations?
- EVALUATION ANALYTICS: How well are our risk management policies and decisions working? Are they producing (only) their intended effects? For what conditions or sub-populations do they work or fail?
- LEARNING ANALYTICS: How might we do better, taking into account value of information and opportunities to learn from small trials before scaling up?
- COLLABORATIVE ANALYTICS: How can we manage uncertain risks more effectively together?
This talk discusses recent advances in these areas and suggests how they might be integrated into a single decision support framework, which we call risk analytics, and applied to important policy questions such as whether, when, and how to revise risk management regulations or policies. Current technical methods of risk analytics, including change point analysis, quasi-experimental design and analysis, causal graph modeling, Bayesian Networks and influence diagrams, Granger causality and transfer entropy methods for time-series causal analysis and modeling, and low-regret learning, provide a valuable toolkit for using data to assess and improve the performance of risk management decisions and policies by actively discovering what works, what does not, and how to improve over time.