Algorithms Predict Sports Teams’ Moves With 80% Accuracy

Summary: A new machine learning algorithm can predict the in-game actions of volleyball players with 80% accuracy.

Source: Cornell University

Algorithms developed in Cornell’s Laboratory for Intelligent Systems and Controls can predict the in-game actions of volleyball players with more than 80% accuracy, and now the lab is collaborating with the Big Red hockey team to expand the research project’s applications.

The algorithms are unique in that they take a holistic approach to action anticipation, combining visual data – for example, where an athlete is located on the court – with information that is more implicit, like an athlete’s specific role on the team.

“Computer vision can interpret visual information such as jersey color and a player’s position or body posture,” said Silvia Ferrari, the John Brancaccio Professor of Mechanical and Aerospace Engineering, who led the research.

“We still use that real-time information, but integrate hidden variables such as team strategy and player roles, things we as humans are able to infer because we’re experts at that particular context.”

Ferrari and doctoral students Junyi Dong and Qingze Huo trained the algorithms to infer hidden variables the same way humans gain their sports knowledge – by watching games. The algorithms used machine learning to extract data from videos of volleyball games, and then used that data to help make predictions when shown a new set of games.

The results were published Sept. 22 in the journal ACM Transactions on Intelligent Systems and Technology, and show the algorithms can infer players’ roles – for example, distinguishing a defense-passer from a blocker – with an average accuracy of nearly 87%, and can predict multiple actions over a sequence of up to 46 frames with an average accuracy of more than 80%. The actions included spiking, setting, blocking, digging, running, squatting, falling, standing and jumping.

Ferrari envisions teams using the algorithms to better prepare for competition: by training the software on an opponent’s existing game footage, coaches could use its predictions to rehearse specific plays and game scenarios.

Ferrari has filed for a patent and is now working with the Big Red men’s hockey team to further develop the software. Using game footage provided by the team, Ferrari and her graduate students, led by Frank Kim, are designing algorithms that autonomously identify players, actions and game scenarios.


One goal of the project is to help annotate game film, which is a tedious task when performed manually by team staff members.

“Our program places a major emphasis on video analysis and data technology,” said Ben Russell, director of hockey operations for the Cornell men’s team.

“We are constantly looking for ways to evolve as a coaching staff in order to better serve our players. I was very impressed with the research Professor Ferrari and her students have conducted thus far. I believe that this project has the potential to dramatically influence the way teams study and prepare for competition.”

Beyond sports, the ability to anticipate human actions bears great potential for the future of human-machine interaction, according to Ferrari, who said improved prediction software could help autonomous vehicles make better decisions, bring robots and humans closer together in warehouses, and even make video games more enjoyable by enhancing the computer’s artificial intelligence.

“Humans are not as unpredictable as the machine learning algorithms are making them out to be right now,” said Ferrari, who is also associate dean for cross-campus engineering research, “because if you actually take into account all of the content, all of the contextual clues, and you observe a group of people, you can do a lot better at predicting what they’re going to do.”

Funding: The research was supported by the Office of Naval Research Code 311 and Code 351, and commercialization efforts are being supported by the Cornell Office of Technology Licensing.

About this sport and AI research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is in the public domain

Original Research: Closed access.
“A Holistic Approach for Role Inference and Action Anticipation in Human Teams” by Silvia Ferrari et al. ACM Transactions on Intelligent Systems and Technology


Abstract

A Holistic Approach for Role Inference and Action Anticipation in Human Teams

The ability to anticipate human actions is critical to many cyber-physical systems, such as robots and autonomous vehicles.

Computer vision and sensing algorithms to date have focused on extracting and predicting visual features that are explicit in the scene, such as color, appearance, actions, positions, and velocities, using video and physical measurements, such as object depth and motion.

Human actions, however, are intrinsically influenced and motivated by many implicit factors such as context, human roles and interactions, past experience, and inner goals or intentions. For example, in a sport team, the team strategy, player role, and dynamic circumstances driven by the behavior of the opponents, all influence the actions of each player.

This article proposes a holistic framework for incorporating visual features, as well as hidden information, such as social roles, and domain knowledge.

The approach, relying on a novel dynamic Markov random field (DMRF) model, infers the instantaneous team strategy and, subsequently, the players’ roles that are temporally evolving throughout the game.
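
The DMRF itself is not spelled out in this release, but as a rough, hypothetical sketch, the role-inference step can be pictured as choosing the joint role assignment that best balances per-player visual evidence against how compatible teammates’ roles are with each other. Everything in the snippet below (the role names, the scores, the brute-force search) is an illustrative assumption, not the authors’ implementation:

import itertools
import numpy as np

ROLES = ["setter", "blocker", "defense-passer", "hitter"]   # illustrative role set

def infer_roles(unary, pairwise):
    """unary[p, r]: how strongly player p's visual features suggest role r.
    pairwise[r1, r2]: compatibility of two teammates holding roles r1 and r2.
    Returns the joint assignment maximizing the total score (brute force)."""
    n_players = unary.shape[0]
    best_score, best_assign = -np.inf, None
    for assign in itertools.product(range(len(ROLES)), repeat=n_players):
        score = sum(unary[p, r] for p, r in enumerate(assign))
        score += sum(pairwise[assign[i], assign[j]]
                     for i in range(n_players) for j in range(i + 1, n_players))
        if score > best_score:
            best_score, best_assign = score, assign
    return [ROLES[r] for r in best_assign]

# Tiny usage with two players and made-up scores.
unary = np.array([[2.0, 0.1, 0.3, 0.2],
                  [0.2, 1.5, 0.4, 0.1]])
pairwise = np.ones((4, 4)) - np.eye(4)        # favor teammates taking different roles
print(infer_roles(unary, pairwise))           # ['setter', 'blocker']

The real model is dynamic: the assignment is re-inferred as the game evolves rather than fixed once, which this one-shot sketch omits.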

The results from the DMRF inference stage are then integrated with instantaneous visual features, such as individual actions and position, in order to perform holistic action anticipation using a multi-layer perceptron (MLP).
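
As a concrete, hypothetical illustration of that fusion step, the snippet below concatenates a role posterior with a handful of visual features and passes the result through a tiny MLP to score the candidate actions. The layer sizes, feature choices, and random weights are placeholders, not the trained network from the paper:

import numpy as np

ACTIONS = ["spike", "set", "block", "dig", "run", "squat", "fall", "stand", "jump"]

def mlp_forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)           # hidden layer with ReLU
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())          # softmax over candidate actions
    return e / e.sum()

rng = np.random.default_rng(0)
role_posterior = np.array([0.7, 0.1, 0.1, 0.1])   # output of the role-inference stage
visual = rng.normal(size=6)                        # e.g. position, posture, last action
x = np.concatenate([role_posterior, visual])       # holistic feature vector

W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)   # placeholder weights
W2, b2 = rng.normal(size=(32, len(ACTIONS))), np.zeros(len(ACTIONS))
probs = mlp_forward(x, W1, b1, W2, b2)
print(ACTIONS[int(np.argmax(probs))])              # most likely next action

In practice the weights would come from offline training on labeled game video, as the next paragraph describes.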

The approach is demonstrated on the team sport of volleyball, by first training the DMRF and MLP offline with past videos, and, then, by applying them to new volleyball videos online.
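
A minimal sketch of that offline-train, online-apply split is below, using scikit-learn’s MLPClassifier as a stand-in for the paper’s models and random placeholder data rather than real game features:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))          # stand-in features from past game videos
y_train = rng.integers(0, 9, size=500)        # stand-in labels for the 9 action classes

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                   # offline training on past footage

X_new = rng.normal(size=(5, 10))              # features extracted from a new game
print(model.predict(X_new))                   # anticipated actions, applied online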

These results show that the method is able to infer the players’ roles with an average accuracy of 86.99%, and anticipate future actions over a sequence of up to 46 frames with an average accuracy of 80.50%. Additionally, the method predicts the onset and duration of each action, achieving mean relative errors of 14.57% and 15.67%, respectively.
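
For reference, a mean relative error of this kind is typically computed by averaging |predicted − true| / true over the test set; the paper’s exact definition may differ. A toy example with made-up onset frames:

import numpy as np

true_onset = np.array([12.0, 30.0, 55.0])     # hypothetical ground-truth onset frames
pred_onset = np.array([14.0, 27.0, 60.0])     # hypothetical predicted onset frames
mre = np.mean(np.abs(pred_onset - true_onset) / true_onset)
print(f"mean relative error: {mre:.2%}")      # about 12% for these made-up numbers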
