Tech Summit Speakers 2016


Dr. Sascha Lange

PSIORI - 5D Lab GmbH

CEO and Founder

Learning to Win - Deep Neural Networks in Games
Long considered a dead research field, artificial neural networks have been revived by a few clever ideas, sparking a genuine hype ten years later: about deep neural networks, but also about artificial intelligence and machine learning in general. This talk will outline the central ideas behind deep learning and the impact they will have on Electronic Games. But we'll also discuss what role Games have played in the development of Deep Learning and Artificial Intelligence in general: as a source of constant inspiration, but also as a controlled test bed.
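As a hedged illustration of the "games as controlled test bed" point, the sketch below runs tabular Q-learning, the classical ancestor of DeepMind's deep Q-networks, on a made-up five-state corridor world. The environment and all names are illustrative; a deep variant would replace the Q table with a neural network.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning on a toy corridor: action 1 moves right,
    action 0 moves left, and reaching the rightmost state pays 1.0.
    A 'deep' variant would replace the Q table with a neural net."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            a = rng.randrange(2)                      # explore at random
            s2 = max(0, s - 1) if a == 0 else s + 1   # environment step
            r = 1.0 if s2 == n_states - 1 else 0.0
            # bootstrapped Q-learning update toward r + gamma * max Q(s')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]
```

After training, the greedy policy read off the Q table moves right in every non-terminal state, which is exactly what the reward structure demands.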

Samim Winiger

ArtificialExperience

Chief Creative Officer

Creativity, Machine Learning and Design - a new paradigm
In recent years, there has been an explosion of research and experiments dealing with creativity and machine learning. Almost every week there is a new A.I. system that paints art, writes stories, composes music, designs objects and even builds houses. The question arises: what is this all about? This talk presents a clear narrative and argues that we are witnessing the birth of a new paradigm: creative A.I. It investigates the rich history of creative technologies - from augmentation to automation - and shows the opportunities intelligent machines offer for creative industries, design and the arts.

Prof. Alexander Löser

Beuth University of Applied Sciences Berlin

Professor for Databases and Text-based Information Systems (DATEXIS)

Understanding Text with Deep Machine Learning
A large number of people in the gaming community use written language to express sentiments and facts about players, walkthroughs, hints and new games. However, reading this text with machines and learning about gamers' demands is a difficult problem. In our talk we present deep machine learning techniques for understanding these semantics. For example, we introduce the DATEXIS Adaptive Entity Linking project, in which we utilize neural networks to extract structured data from domain-specific texts. We also demonstrate our research prototypes for market research, such as extraction-as-you-type, interactive review analysis and automatic generation of rare domain dictionaries. Finally, we give some insights into the scientific background and the nuts and bolts we discovered when executing these algorithms efficiently on CUDA graphics cards.

Sebastian Arnold

Beuth University of Applied Sciences Berlin

PhD Student and Research Assistant at DATEXIS

Understanding Text with Deep Machine Learning
A large number of people in the gaming community use written language to express sentiments and facts about players, walkthroughs, hints and new games. However, reading this text with machines and learning about gamers' demands is a difficult problem. In our talk we present deep machine learning techniques for understanding these semantics. For example, we introduce the DATEXIS Adaptive Entity Linking project, in which we utilize neural networks to extract structured data from domain-specific texts. We also demonstrate our research prototypes for market research, such as extraction-as-you-type, interactive review analysis and automatic generation of rare domain dictionaries. Finally, we give some insights into the scientific background and the nuts and bolts we discovered when executing these algorithms efficiently on CUDA graphics cards.

Dr. Damian Borth

German Research Center for Artificial Intelligence (DFKI)

Head of Deep Learning Competence Center

Deep Learning for Visual Classification of Adjective Noun Pairs and its Application
Nowadays the Web, as a major platform for communication and information exchange, is shifting towards visual content. Unfortunately, visual content in the form of images or videos is less accessible than textual content. With recent advances in deep learning we are able to analyze the content of images and videos as never before. This talk will present the first framework able to extract sentiment from visual content by introducing the Visual Sentiment Ontology (VSO). This ontology consists of thousands of Adjective Noun Pair (ANP) concepts able to capture such polarities. Further, the talk introduces SentiBank, the associated deep convolutional neural network (CNN) used to detect the presence of up to 2089 ANPs in images. Originally designed to assess sentiment in visual content, SentiBank has already been shown to have a broad spectrum of application domains, ranging from aesthetic assessment and image popularity prediction to filtering explicit content and analyzing live-streamed video games. Finally, the talk will close with the Yahoo Flickr Creative Commons 100 million (YFCC100m) dataset, the largest dataset available to the academic community, and the challenges associated with large-scale training of CNNs.

Dr. Matthias Platho

 


Deep Learning as a Game Changer - Opportunities and Applications
Deep Learning is currently emerging as one of the most powerful techniques in Artificial Intelligence. Related startups have been bought for hundreds of millions of dollars. The hype around Deep Learning rests on the fact that it can be applied to many different businesses – one of them being the games industry. Creating stronger AI opponents, evaluating player feedback automatically and replacing A/B testing with a learning system tailored to each individual user are just a few possible use cases.

Dr. Ralf Herbrich

Amazon Development Center Germany GmbH

Managing Director Amazon Development Germany and Director Machine Learning

Large Scale Machine Learning for Online Services
Over the past few years, we have entered the world of big and structured data - a trend largely driven by the exponential growth of internet-based online services such as Search, eCommerce and Social Networking as well as the ubiquity of smart devices with sensors in everyday life. This poses new challenges for statistical inference and decision-making, as some of the basic assumptions are shifting, including, firstly, the ability to store the parameters and data in the cloud and, secondly, the level of granularity and 'building blocks' in the data modelling phase. In this talk, I will discuss the implications of big and structured data for statistics and the convergence of statistical models and distributed systems. I will present one of the most versatile modeling techniques that combines systems and statistical properties - factor graphs - and review a series of approximate inference techniques such as distributed message passing. I will also talk about the connection to layered function models known as neural networks. The talk will be interspersed with real-world applications of these techniques in systems such as gamer ranking and matchmaking on Xbox Live (TrueSkill), recommendations and cloud machine learning services (Amazon Machine Learning).
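To make the factor-graph material concrete, here is a minimal sketch of a TrueSkill-style two-player rating update (no draws, no teams): the closed-form result that message passing yields for a single match. This is an illustrative simplification, not the production system's code; `beta` and the default prior follow commonly published values.

```python
import math

def _pdf(x): return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
def _cdf(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def trueskill_update(winner, loser, beta=4.166):
    """One two-player, no-draw update in the style of TrueSkill.
    Each rating is (mu, sigma); beta is the performance noise.
    Simplified sketch: the full system also handles draws and teams
    via factor-graph message passing."""
    (mu_w, s_w), (mu_l, s_l) = winner, loser
    c = math.sqrt(2 * beta ** 2 + s_w ** 2 + s_l ** 2)
    t = (mu_w - mu_l) / c
    v = _pdf(t) / _cdf(t)          # mean-shift factor
    w = v * (v + t)                # variance-shrink factor
    mu_w2 = mu_w + (s_w ** 2 / c) * v
    mu_l2 = mu_l - (s_l ** 2 / c) * v
    s_w2 = s_w * math.sqrt(max(1 - (s_w ** 2 / c ** 2) * w, 1e-9))
    s_l2 = s_l * math.sqrt(max(1 - (s_l ** 2 / c ** 2) * w, 1e-9))
    return (mu_w2, s_w2), (mu_l2, s_l2)

new_winner, new_loser = trueskill_update((25.0, 8.333), (25.0, 8.333))
```

After a win between two equally rated players, the winner's mean rises, the loser's falls by the same amount, and both uncertainties shrink.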

Rafet Sifa

Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS)

Media Engineering Department IAIS

Trends In Game Mining: An Overview from Pure Telemetry Analysis to Deep Learning
Understanding player behaviour using data science tools has become a vital step in today's agile game development cycle. Feedback from such tools helps developers and studios take immediate action to increase retention and monetization rates. Due to the tremendous growth in the size of game telemetry data and the complexity of player-game interactions, we require scalable, reliable and interpretable data analysis methods. In this talk we will give a use-case-based overview of the challenges and the latest methodological trends in game mining, with emphasis on the role of the classes of learning systems used and of representation learning methods such as Deep Learning and Matrix Factorization in solving game business intelligence problems.
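As a toy illustration of the representation-learning end of that spectrum, the sketch below factorizes a small player-by-game engagement matrix with plain SGD. The data, names and hyperparameters are invented for the example; real game-telemetry pipelines operate at far larger scale.

```python
import random

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Plain SGD matrix factorization (a toy stand-in for the
    representation-learning methods mentioned above).
    R: dict mapping (player, game) -> observed engagement score."""
    rng = random.Random(seed)
    players = {p for p, _ in R}
    games = {g for _, g in R}
    P = {p: [rng.gauss(0, 0.1) for _ in range(k)] for p in players}
    G = {g: [rng.gauss(0, 0.1) for _ in range(k)] for g in games}
    for _ in range(steps):
        for (p, g), r in R.items():
            pred = sum(a * b for a, b in zip(P[p], G[g]))
            err = r - pred
            for i in range(k):       # regularized gradient step
                P[p][i] += lr * (err * G[g][i] - reg * P[p][i])
                G[g][i] += lr * (err * P[p][i] - reg * G[g][i])
    return P, G

R = {("alice", "rpg"): 5.0, ("alice", "shooter"): 1.0,
     ("bob", "rpg"): 4.0, ("bob", "shooter"): 2.0}
P, G = factorize(R)
```

The learned k-dimensional player and game vectors are the "representation": their dot product reconstructs observed engagement and can score unobserved player-game pairs.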

Ralph Hinsche

Nvidia

Business Development Manager

Deep Learning - Transforming how we look at photographs
GPUs, or Graphics Processing Units, are driving the industry adoption of deep learning. GPUs perform many calculations at once, or in parallel, making them ideal for training deep neural networks. EyeEm is a photography community and marketplace that offers over 15 million photographers a place to improve their skills, explore a world of beautiful photos, and earn money by licensing their images. Powered by NVIDIA GPUs, EyeEm Vision is EyeEm's proprietary framework for automatically categorizing and ranking photos based on their content and aesthetics. In this presentation, EyeEm's Co-Founder and CTO Ramzi Rizk (or Appu Shaji, Head of Research and Development) and NVIDIA's Business Development Manager Ralph Hinsche will discuss how EyeEm Vision, with the help of GPUs, transforms the way image keywording and categorization works and sorts photos based on aesthetic qualities - transforming how we see photographs.

Ramzi Rizk

EyeEm

Co-Founder and CTO

Deep Learning - Transforming how we look at photographs
GPUs, or Graphics Processing Units, are driving the industry adoption of deep learning. GPUs perform many calculations at once, or in parallel, making them ideal for training deep neural networks. EyeEm is a photography community and marketplace that offers over 15 million photographers a place to improve their skills, explore a world of beautiful photos, and earn money by licensing their images. Powered by NVIDIA GPUs, EyeEm Vision is EyeEm's proprietary framework for automatically categorizing and ranking photos based on their content and aesthetics. In this presentation, EyeEm's Co-Founder and CTO Ramzi Rizk (or Appu Shaji, Head of Research and Development) and NVIDIA's Business Development Manager Ralph Hinsche will discuss how EyeEm Vision, with the help of GPUs, transforms the way image keywording and categorization works and sorts photos based on aesthetic qualities - transforming how we see photographs.

Wojciech Samek

Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, HHI

Head of Machine Learning Group - Department of Video Coding and Analytics

Interpretable Deep Learning
Deep neural networks (DNNs) have demonstrated high predictive performance on a number of tasks in the sciences and industry. However, these predictive models often behave as black boxes, i.e., it is hard to grasp what makes them arrive at a particular decision. On practical problems where a single incorrect prediction can be costly, a prediction cannot simply be trusted by default. Instead, it should be made interpretable to a human expert for careful verification. This talk presents a recently developed technique that makes it possible to visualize and interpret the results of DNN inference. Different applications of this method to computer vision are presented.

Prof. Patrick van der Smagt

TU Munich

Professor for Biomimetic Robotics and Machine Learning

Deep Learning in Movement Modelling and Robotics
A traditional way of representing movement of human or robotic limbs is by solving dynamical models, estimating their parameters, and combining those with the available neuronal or mathematical controllers. Opposing this systemic approach, we venture to represent movement using generative probabilistic models, generated through deep learning. Exploiting deep autoencoders and recurrent neural networks, we can use these to accurately model human or robot movement, based on measured movement data alone. Moreover, these movements can be reconstructed from different types of sensors, which when combined increase accuracy and reduce error. In this talk I will focus on machine-learning methodologies for movement representation and show how their results can be used in robot control, human movement prediction, assistive robotics, and human-machine interfacing.

Dr. Mike Preuss

Department of Information Systems and Statistics, University of Münster

Research Associate

Computational Intelligence and Games: Creating, Balancing, Learning to Play
We are currently seeing the redefinition of the term Game AI. Modern Game AI is understood to be much more encompassing than just controlling opponent AI. A huge amount of research currently deals with supporting game creation by means of Procedural Content Generation. A closely linked and largely untackled problem is (semi-)automated balancing: if we let algorithms modify game content, we also need the tools to make a game playable (again). This in turn requires general, adaptable AI components that can represent players, as featured in the General Video Game AI (GVGAI) environment and competitions. We can find such tools in Computational Intelligence/Machine Learning: Evolutionary Algorithms (EA) and Monte-Carlo tree search (MCTS) are flexible, easy-to-use methods that heavily rely on controlled randomness.
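The "controlled randomness" in MCTS comes from its selection rule. A minimal sketch of UCB1, the rule most MCTS variants use to pick which child node to descend into, follows; the node statistics are invented for illustration:

```python
import math

def ucb1(node_visits, child_stats, c=1.41):
    """UCB1 selection as used in MCTS: pick the child that best
    balances exploitation (mean reward) against exploration
    (an uncertainty bonus for rarely visited children).
    child_stats: list of (total_reward, visits) per child."""
    def score(stat):
        reward, visits = stat
        if visits == 0:
            return float("inf")     # always try unvisited children first
        return reward / visits + c * math.sqrt(math.log(node_visits) / visits)
    return max(range(len(child_stats)), key=lambda i: score(child_stats[i]))

# child 0: higher mean reward but heavily sampled;
# child 1: lower mean reward but under-explored
best = ucb1(node_visits=100, child_stats=[(70.0, 90), (6.0, 10)])
```

Here the under-explored second child is selected despite its lower mean reward: that is the exploration bonus at work, and it is what makes the tree search's randomness "controlled".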

M.Sc. Fabian Schrodt

Department of Computer Science, University of Tübingen

Research Associate Cognitive Modeling

Cognitive Game Characters: Science and Techniques behind "MarioAI"
Artificial intelligence has made tremendous progress over the recent years. Yet, behavior of game characters is typically still scripted, reactive, and predictable. This talk provides insights from cognitive science and psychology for creating game characters that appear versatile and somewhat alive. In the example of a Super Mario clone - MarioAI - a cognitive architecture is applied to enable intelligent agents to learn and reason about their world, to communicate with the user via speech, and to act goal-directedly to reach self-motivated or instructed goals. In a similar manner, the architecture enables characters to learn from and about other agents to emulate social interactions. The talk will highlight future opportunities for developing adaptive game agents by means of cognitive and neural architectures.

Ulf Schöneberg

The unbelievable Machine Company GmbH

Data Scientist

Neural Turing Machines
DeepMind has extended recurrent neural networks (RNNs) with the ability to access external memory like a Turing machine. A traditional RNN is already Turing complete; the combination makes it even more capable: an excellent sequence learner that can also derive complex algorithms such as sorting or copying. The implications for game AIs are numerous. This talk will give an introduction to RNNs, Turing machines and reinforcement learning.
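The memory access that distinguishes a Neural Turing Machine from a plain RNN can be sketched in a few lines: content-based addressing compares a key against every memory row and reads back a softmax-weighted blend, so the whole read stays differentiable and trainable. The sketch below is a hedged illustration (memory contents, key and the sharpness parameter `beta` are made up), not DeepMind's implementation:

```python
import math

def ntm_read(memory, key, beta=2.0):
    """Content-based read in the spirit of the Neural Turing Machine:
    score every memory row by cosine similarity to the key, softmax
    the scores into attention weights, return the weighted row sum."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1e-9
        nb = math.sqrt(sum(x * x for x in b)) or 1e-9
        return dot / (na * nb)
    sims = [beta * cos(row, key) for row in memory]
    m = max(sims)                                   # stabilize the softmax
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    w = [e / z for e in exps]                       # attention weights
    read = [sum(wi * row[j] for wi, row in zip(w, memory))
            for j in range(len(memory[0]))]
    return read, w

memory = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
read, weights = ntm_read(memory, key=[1.0, 0.0])
```

The key [1.0, 0.0] matches the first row most strongly, so that row dominates the returned blend; turning `beta` up sharpens the attention toward a hard lookup.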

Olivia Klose

Microsoft Germany

Technical Evangelist Big Data, Machine Learning

How Deep Learning affects the Gaming Experience
The creation of machines with human-level intelligence has long been seen as the ultimate ambition of computer science. New developments in machine learning, coupled with exponential growth in both data and processing power, suggest the time may be ripe to take the next steps towards this elusive goal. In this talk, we will go through machine learning and deep learning efforts that can enrich the gaming experience through new interaction models: gestures, speech, vision, semantics. One of the underlying foundations is the Computational Network Toolkit (CNTK) - a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. CNTK enables more accurate results regarding vision and speech understanding and thus can lead to scenarios that go beyond the classical screen game play, such as augmented reality.
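The phrase "a series of computational steps via a directed graph" can be illustrated in a few lines of code. The toy graph below evaluates y = sigmoid(x·W + b) by recursively evaluating each node's inputs; it is a hand-rolled sketch of the idea, not CNTK's actual API (all class and op names are invented):

```python
import math

class Node:
    """Minimal computational-graph node: an op name plus input nodes.
    Illustrates the directed-graph idea behind toolkits like CNTK."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        if self.op == "input":
            return self.value
        vals = [n.eval() for n in self.inputs]      # evaluate predecessors
        if self.op == "matmul":                     # row vector . matrix
            v, m = vals
            return [sum(v[i] * m[i][j] for i in range(len(v)))
                    for j in range(len(m[0]))]
        if self.op == "add":
            return [a + b for a, b in zip(*vals)]
        if self.op == "sigmoid":
            return [1 / (1 + math.exp(-x)) for x in vals[0]]
        raise ValueError(self.op)

# y = sigmoid(x . W + b) expressed as a graph of computational steps
x = Node("input", value=[1.0, 2.0])
W = Node("input", value=[[0.5, -0.5], [0.25, 0.75]])
b = Node("input", value=[0.0, 0.1])
y = Node("sigmoid", [Node("add", [Node("matmul", [x, W]), b])])
out = y.eval()
```

Because the network is data (a graph of nodes) rather than hard-wired code, a toolkit can inspect it, schedule it across devices, and differentiate through it automatically.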

Marcel Tilly

Microsoft Research

TBA

How Deep Learning affects the Gaming Experience
The creation of machines with human-level intelligence has long been seen as the ultimate ambition of computer science. New developments in machine learning, coupled with exponential growth in both data and processing power, suggest the time may be ripe to take the next steps towards this elusive goal. In this talk, we will go through machine learning and deep learning efforts that can enrich the gaming experience through new interaction models: gestures, speech, vision, semantics. One of the underlying foundations is the Computational Network Toolkit (CNTK) - a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. CNTK enables more accurate results regarding vision and speech understanding and thus can lead to scenarios that go beyond the classical screen game play, such as augmented reality.