AI'12 Invited Speakers
Prof Joseph Halpern (Cornell University, USA)
Title: Constructive decision theory: Decision theory with subjective states and outcomes
Date: Wednesday 5 December 2012
Abstract: The standard approach in decision theory (going back to Savage) is to place a preference order on acts, where an act is a function from states to outcomes. If the preference order satisfies appropriate postulates, then the decision maker can be viewed as acting as if he has a probability on states and a utility function on outcomes, and is maximizing expected utility. This framework implicitly assumes that the decision maker knows what the states and outcomes are. That isn't reasonable in a complex situation. For example, in trying to decide whether or not to attack Iraq, what are the states and what are the outcomes? We redo Savage's framework, viewing acts essentially as syntactic programs, so we need to assume neither states nor outcomes. Nevertheless, we can still obtain representation theorems in the spirit of Savage's: for Savage, the agent's probability and utility are subjective; for us, not only the probability and utility but also the state space and the outcome space are subjective. I discuss the benefits, both conceptual and pragmatic, of this approach. As I show, among other things, it provides an elegant solution to framing problems.
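The Savage setup the abstract starts from can be made concrete in a few lines. The following is a minimal illustrative sketch, not material from the talk: the states, outcomes, and acts (the umbrella example) are hypothetical names chosen for illustration. An act maps states to outcomes, and a decision maker with a subjective probability on states and a subjective utility on outcomes ranks acts by expected utility.

```python
def expected_utility(act, prob, util):
    """Expected utility of an act: sum over states s of P(s) * u(act(s))."""
    return sum(p * util[act[s]] for s, p in prob.items())

# Hypothetical decision problem: carry an umbrella or not.
prob = {"rain": 0.3, "sun": 0.7}                    # subjective probability on states
util = {"dry": 1.0, "wet": -1.0, "burdened": 0.5}   # subjective utility on outcomes

# Acts are functions from states to outcomes.
umbrella = {"rain": "dry", "sun": "burdened"}
no_umbrella = {"rain": "wet", "sun": "dry"}

# The decision maker acts as if maximizing expected utility.
best = max([umbrella, no_umbrella], key=lambda a: expected_utility(a, prob, util))
```

The point of the talk is precisely that this sketch presupposes the dictionaries of states and outcomes are given, which the constructive approach does not assume.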
This is joint work with Larry Blume and David Easley. No prior knowledge of Savage's work is assumed.
Brief biography: Prof Joseph Halpern received a B.Sc. in mathematics from the University of Toronto in 1975 and a Ph.D. in mathematics from Harvard in 1981. In between, he spent two years as the head of the Mathematics Department at Bawku Secondary School, in Ghana. After a year as a visiting scientist at MIT, he joined the IBM Almaden Research Center in 1982, where he remained until 1996, also serving as a consulting professor at Stanford. In 1996, he joined the CS Department at Cornell, and is now department chair.
Halpern's major research interests are in reasoning about knowledge and uncertainty, security, distributed computation, decision theory, and game theory. Together with his former student, Yoram Moses, he pioneered the approach of applying reasoning about knowledge to analyzing distributed protocols and multi-agent systems. He has coauthored six patents, two books ("Reasoning About Knowledge" and "Reasoning about Uncertainty"), and over 300 technical publications.
Halpern is a Fellow of AAAI, AAAS, ACM, and IEEE. Among other awards, he received the ACM SIGART Autonomous Agents Research Award in 2011, the Dijkstra Prize in 2009, the ACM/AAAI Newell Award in 2008, and the Gödel Prize in 1997, and was a Guggenheim Fellow in 2001-02 and a Fulbright Fellow in 2001-02 and 2009-10. Two of his papers have won best-paper prizes at IJCAI (1985 and 1991), and another two received best-paper awards at the Knowledge Representation and Reasoning Conference (2006 and 2012). He was editor-in-chief of the Journal of the ACM (1997-2003) and has been program chair of a number of conferences, including the Symposium on Theory of Computing (STOC), Logic in Computer Science (LICS), Uncertainty in AI (UAI), Principles of Distributed Computing (PODC), and Theoretical Aspects of Rationality and Knowledge (TARK).
Prof Mary O'Kane (NSW Chief Scientist and Engineer)
[Photo taken in the year of the first Australian AI]
Title: AI in Australia – early days & current impact
Date: Thursday 6 December 2012
Abstract: This talk will survey the early days of Artificial Intelligence research in Australia and consider the impact of AI tools and techniques on current Australian productivity improvements.
Brief biography: Mary O'Kane is the NSW Chief Scientist and Engineer, Executive Chairman of Mary O'Kane & Associates Pty Ltd, and a company director. She led the first automatic speech recognition research group in Australia and was active in the Artificial Intelligence community before being diverted to a range of other roles, including Vice-Chancellor of the University of Adelaide from 1996 to 2000 and membership of committees and boards including the Australian Research Council, the Co-operative Research Centres Committee, the board of FH Faulding & Co Ltd, the CSIRO Board and the board of the Australian Centre for Renewable Energy.
Prof Mamoru Kaneko (University of Tsukuba, Japan)
Title: Epistemic Logic and Inductive Game Theory
Date: Friday 7 December 2012
Abstract: "Bounded rationality" appears in many forms in individual thinking and behavior in social situations, yet the game theory literature offers no systematic treatment of it. We should take it seriously for future developments of game theory and related fields. Its core lies not only in the cognitive and epistemic abilities of individuals but also in their social interactions. I will discuss a few foundational problems related to "bounded rationality" from the perspective of epistemic logic and inductive game theory.
My presentation starts with a Japanese comic story, "Konnyaku Mondo", which illustrates how people can communicate and believe they understand each other well while actually misunderstanding each other. This shows a discrepancy between objectivity and subjectivity hidden in a social situation, and this recognition leads to many other problems. (Some extant approaches, such as Bayesian game theory, appear to address this problem too. However, they extend classical game theory in a probabilistic manner and hide the difficulty in its generality; they suffer from the same criticism once we consider the entire approach.)
We take the epistemic logic approach: it enables us to talk more faithfully about false beliefs, and it incorporates the possibility of logical inference into game-theoretic prediction and decision making. An explicit treatment of these components is a key to "bounded rationality".
On the other hand, the epistemic logic approach cannot discuss the sources of players' experiential beliefs about their situations. This calls for a new framework: inductive game theory. In this approach, we explore experiential foundations of game theory: individual players learn (some part of) the structure of the interactive situation by playing in it. These two approaches open broad fields that relate to many AI problems.
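The core idea of inductive game theory described above, players deriving a subjective view of the game from experienced plays rather than from a given description, can be caricatured in a few lines. This is purely an illustrative sketch under invented assumptions (the actions, payoffs, and averaging rule are hypothetical, not from the talk): the player does not know the payoff structure and induces a model of it only from accumulated outcomes.

```python
import random

random.seed(0)

true_payoffs = {"a": 3, "b": 1}   # the objective structure, unknown to the player

experience = {}                   # the player's accumulated observations
for _ in range(100):
    action = random.choice(["a", "b"])      # trial-and-error play
    payoff = true_payoffs[action]           # the situation returns an outcome
    experience.setdefault(action, []).append(payoff)

# The player's inductively derived (subjective) view of the game: one
# number per action, built only from what was experienced.
subjective_view = {a: sum(ps) / len(ps) for a, ps in experience.items()}
```

In richer settings the subjective view can diverge from the objective structure (the discrepancy the "Konnyaku Mondo" story illustrates), since a player only ever observes the part of the situation her own play reaches.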
Brief biography: When I was an undergraduate student in the Department of Social Engineering at the Tokyo Institute of Technology, I became interested in foundational issues concerning human thinking and behavior in society. I finished my Ph.D. study in 1977 and received the degree in 1979. In 1977 I started working as an economist at the University of Tsukuba, and I visited the Cowles Foundation at Yale University from 1980 to 1982. After returning to Tsukuba, I decided to go back to my original plan to work on foundational issues. Looking for key concepts across many fields, I found that symbolic logic could serve my aim, since it treats propositions and logical inferences explicitly.
In 1986, I moved to Hitotsubashi University and met a proof theorist, Takashi Nagashima. I learned proof theory from him, and we started working together to develop a theory of epistemic logic for game theory, which we called game logic. For three years from 1989, I worked, still as an economist, at Virginia Polytechnic Institute, and then returned to Tsukuba. After that, I worked intensively on game logic and reached an important result on undecidability and playability in games, which was later published in Studia Logica.
In 1996, I turned my attention to experiential foundations of game theory. I started working with Akihiko Matsui (now at the University of Tokyo) on inductive game theory, particularly on prejudices and discrimination. Later, Jeffrey J. Kline (now at the University of Queensland) joined the project, and we started working on inductive game theory in a more systematic manner.
Officially, I am still an economist, but my main interests actually lie in logic and social philosophy. In my presentation, I will introduce the epistemic logic approach and inductive game theory to the AI audience.