3D-PhysNet: Learning the intuitive physics of non-rigid object deformations

Zhihua Wang, Stefano Rosa, Bo Yang, Sen Wang, Niki Trigoni, Andrew Markham

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The ability to interact with and understand the environment is a fundamental prerequisite for a wide range of applications, from robotics to augmented reality. In particular, predicting how deformable objects will react to applied forces in real time is a significant challenge. This is further confounded by the fact that shape information about objects encountered in the real world is often impaired by occlusions, noise and missing regions; for example, a robot manipulating an object will only be able to observe a partial view of the entire solid. In this work we present a framework, 3D-PhysNet, which predicts how a three-dimensional solid will deform under an applied force using intuitive physics modelling. In particular, we propose a new method to encode the physical properties of the material and the applied force, enabling generalisation over materials. The key is to combine deep variational autoencoders with adversarial training, conditioned on the applied force and the material properties. We further propose a cascaded architecture that takes a single 2.5D depth view of the object and predicts its deformation. Training data is provided by a physics simulator. The network is fast enough to be used in real-time applications from partial views. Experimental results show the viability and the generalisation properties of the proposed architecture.
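
The abstract describes a variational autoencoder with adversarial training, conditioned on the applied force and the material properties, that maps a partial observation of a solid to its predicted deformed shape. The sketch below is a minimal, hypothetical illustration of that conditioning idea in PyTorch and is not the authors' implementation: the voxel resolution, the five-dimensional condition vector (force components plus Young's modulus and Poisson ratio), the layer sizes, and the omission of the adversarial discriminator are all assumptions made here for brevity.

# Hedged sketch (not the authors' code): a conditional 3D VAE that takes a
# voxelised partial view plus a force/material condition vector and predicts
# the deformed voxel grid. Sizes and conditioning scheme are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = 32          # assumed voxel resolution
COND_DIM = 5       # e.g. force (x, y, z) + Young's modulus + Poisson ratio
LATENT_DIM = 128

class Cond3DVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutional encoder over the partial-view occupancy grid
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8  -> 4
            nn.Flatten(),
        )
        feat = 64 * 4 * 4 * 4
        self.fc_mu = nn.Linear(feat + COND_DIM, LATENT_DIM)
        self.fc_logvar = nn.Linear(feat + COND_DIM, LATENT_DIM)
        # decoder reconstructs the deformed grid, again conditioned on force/material
        self.fc_dec = nn.Linear(LATENT_DIM + COND_DIM, feat)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),  # occupancy logits
        )

    def forward(self, vox, cond):
        h = torch.cat([self.enc(vox), cond], dim=1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        d = self.fc_dec(torch.cat([z, cond], dim=1)).view(-1, 64, 4, 4, 4)
        return self.dec(d), mu, logvar

def vae_loss(pred_logits, target_vox, mu, logvar, beta=1e-3):
    # voxel-wise reconstruction + KL term; in a VAE-GAN setup as in the paper,
    # an adversarial loss from a separate discriminator would be added on top
    rec = F.binary_cross_entropy_with_logits(pred_logits, target_vox)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# toy usage with random stand-ins for simulator-generated training tensors
model = Cond3DVAE()
vox = torch.rand(2, 1, GRID, GRID, GRID)                       # partial-view occupancy
cond = torch.rand(2, COND_DIM)                                 # force + material parameters
target = (torch.rand(2, 1, GRID, GRID, GRID) > 0.5).float()    # deformed ground truth
out, mu, logvar = model(vox, cond)
vae_loss(out, target, mu, logvar).backward()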

Language: English
Title of host publication: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 4958-4964
Number of pages: 7
ISBN (Electronic): 9780999241127
DOI: 10.24963/ijcai.2018/688
State: Published - 2018
Event: 27th International Joint Conference on Artificial Intelligence 2018 - Stockholm, Sweden
Duration: 13 Jul 2018 - 19 Jul 2018

Publication series

Name: Proceedings of the International Joint Conference on Artificial Intelligence IJCAI
ISSN (Electronic): 1045-0823

Conference

Conference: 27th International Joint Conference on Artificial Intelligence 2018
Abbreviated title: IJCAI 2018
Country: Sweden
City: Stockholm
Period: 13/07/18 - 19/07/18

Fingerprint

Physics
Augmented reality
Materials properties
Robotics
Physical properties
Simulators
Robots

Cite this

Wang, Z., Rosa, S., Yang, B., Wang, S., Trigoni, N., & Markham, A. (2018). 3D-PhysNet: Learning the intuitive physics of non-rigid object deformations. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (pp. 4958-4964). (Proceedings of the International Joint Conference on Artificial Intelligence IJCAI). International Joint Conferences on Artificial Intelligence. DOI: 10.24963/ijcai.2018/688