Lies, Damn Lies and Machine Learning

City: McLean

The idea of giving a machine the ability to think has long captured the imagination of technologists.  However, efforts to achieve this goal have often overpromised, leading to disappointment among researchers and those who fund them.  These failures have provided one key lesson: the machine that “thinks” will first have to be a machine that “learns” from complex data.  In the 1980s, an interdisciplinary field centered on computer science, called machine learning, emerged with significant differences from its antecedent, artificial intelligence.  Over time the machine learning community has matured and found common purpose with cognitive scientists, physicists, electrical engineers, and statisticians, creating a fascinating intellectual stew of perspectives, theories, and algorithms that play an important role in fields as diverse as medicine, commerce, and national security.  Given the wide-ranging and sophisticated content of machine learning, understanding the field can be rather daunting.  The goal of this talk is to provide a field guide of sorts for those unfamiliar with machine learning but familiar with some basic mathematical and statistical concepts.  One important distinction is between supervised and unsupervised learning and the role that labels play in learning complex data.  The other important conceptual axis is the frequentist versus Bayesian approaches to how data and models are related.  To make the foundations of machine learning accessible, emphasis will be placed on how the geometry of representations can provide a unifying perspective across the entire field.  Finally, some discussion of the latest “breakthrough” in machine learning, deep learning, will illustrate how to think about new algorithms and the ultimate goal of creating a thinking machine.

Speaker(s): Dr. Jeff M. Byers
Location: Third Floor Conference Room, TeqCorner, 1616 Anderson Rd, McLean, Virginia 22102
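The supervised/unsupervised distinction the abstract highlights can be made concrete with a toy sketch. The example below is illustrative only (the data, function names, and the crude 2-means initialization are all hypothetical, not anything from the talk): with labels, per-class structure can be read off directly; without labels, structure must be discovered iteratively.

```python
# Toy 1-D dataset (hypothetical numbers chosen to form two clusters).
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
labels = ["low", "low", "low", "high", "high", "high"]  # known only in the supervised case

# Supervised: labels say which points belong together, so a per-class
# summary (here, the class mean) can be computed directly.
def supervised_centroids(xs, ys):
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

# Unsupervised: no labels, so grouping must be inferred. A minimal
# 2-means clustering alternates assignment and mean-update steps.
def two_means(xs, iters=10):
    c0, c1 = min(xs), max(xs)  # crude initialization at the extremes
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return sorted([c0, c1])

print(supervised_centroids(data, labels))  # class means, computed via labels
print(two_means(data))                     # similar centers, found without labels
```

On well-separated data like this, both routes recover essentially the same cluster centers; the point of the contrast is what information each is allowed to use.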

Autonomy in the Real, Unstructured World

City: McLean

***CANCELED*** Popular articles about advances in autonomous systems suggest that in the very near future our cities will be covered by delivery drones and disasters will be met with swarms of helpful robots that gather data and deliver aid.  Reaching this future will require autonomy algorithms that are able to understand the assumptions they are making and assess whether or not their hardware and information systems allow them to make correct decisions.  It also requires tight coupling between the design of algorithms for autonomous systems and the design of their hardware for navigation and communications. In this talk, I will discuss an approach developed at the Naval Research Laboratory to design swarm control laws and mode-switching protocols that account for these "real-world" issues, beginning with a discussion of the trade space between complexity, performance, and reliability.  I will then present potential field behaviors designed to work at specific points within this trade space, along with a means of using linear temporal logic (LTL) to produce correct-by-construction mode-selection tools that allow robots to detect their position in this trade space and select an appropriate behavior.  This LTL framework allows us to separate safety concerns from higher-level autonomy such as task selection and scheduling, because the LTL controllers guarantee that vehicles will not engage in unsafe behaviors; I will discuss how this separation reduces the state space of the higher-level autonomy algorithms. The talk will conclude with a description of a simulated disaster relief mission performed by simulated and live teams of heterogeneous robots in outdoor environments.  The results of these experiments will show lessons learned about how to decompose complex problems into abstract decisions and concrete actions.

Speaker(s): Dr. Thomas Apker
Location: Third Floor Conference Room, TeqCorner, 1616 Anderson Rd, McLean, Virginia 22102
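The two ideas the abstract pairs can be sketched in miniature: a potential-field behavior (attraction toward a goal, repulsion from an obstacle) and a mode selector that checks safety predicates before any task-level behavior runs, in the spirit of a correct-by-construction controller. Everything below is a hypothetical illustration, not NRL's actual implementation; the gains, thresholds, and mode names are invented.

```python
import math

def potential_field_step(pos, goal, obstacle, k_att=1.0, k_rep=0.5, influence=2.0):
    """One velocity command: attractive pull toward the goal, plus a
    repulsive push away from the obstacle inside its influence radius."""
    vx = k_att * (goal[0] - pos[0])
    vy = k_att * (goal[1] - pos[1])
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 0 < d < influence:  # repulsion only matters near the obstacle
        push = k_rep * (1.0 / d - 1.0 / influence) / d**2
        vx += push * dx
        vy += push * dy
    return vx, vy

def select_mode(battery_ok, link_ok):
    """Mode switching with safety vetoes evaluated first, so unsafe
    combinations never reach the task-level planner at all."""
    if not battery_ok:
        return "return_to_base"
    if not link_ok:
        return "loiter"          # hold position until comms recover
    return "execute_task"        # only now may higher-level autonomy act

print(select_mode(battery_ok=True, link_ok=True))   # "execute_task"
print(select_mode(battery_ok=False, link_ok=True))  # "return_to_base"
```

The structural point mirrors the abstract's claim: because the guard conditions are checked before the task behavior is ever selected, the scheduler above it can ignore those safety cases entirely, shrinking the state space it must reason over.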