{"id":158,"date":"2016-06-14T23:51:18","date_gmt":"2016-06-14T23:51:18","guid":{"rendered":"http:\/\/dialport.ict.usc.edu\/?page_id=158"},"modified":"2016-07-28T22:08:03","modified_gmt":"2016-07-28T22:08:03","slug":"multimodal-tools","status":"publish","type":"page","link":"https:\/\/dialport.ict.usc.edu\/index.php\/resources\/multimodal-tools\/","title":{"rendered":"Multimodal Tools"},"content":{"rendered":"<p><strong><a href=\"http:\/\/dialport.ict.usc.edu\/index.php\/multisense\/\">MultiSense<\/a>\u00a0<\/strong>is a perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language. MultiSense currently contains <a class=\"external-link\" href=\"http:\/\/multicomp.ict.usc.edu\/?s=gavam&amp;x=0&amp;y=0\" rel=\"nofollow\">GAVAM<\/a>, CLM FaceTracker, and <a class=\"external-link\" href=\"http:\/\/projects.ict.usc.edu\/mxr\/faast\/\" rel=\"nofollow\">FAAST<\/a>, which can be used with a webcam or Kinect. 
The Toolkit provides an example of how to use the MultiSense framework\u00a0(also known as the multimodal framework, developed by\u00a0<a class=\"external-link\" href=\"http:\/\/multicomp.ict.usc.edu\/\" rel=\"nofollow\">Multicomp Lab<\/a>). MultiSense uses these technologies:<\/p>\n<ul>\n<li>CLM (developed by\u00a0<a class=\"external-link\" href=\"mailto:Jason.Saragih@csiro.au\" rel=\"nofollow\">Jason Saragih<\/a><a class=\"external-link\" href=\"http:\/\/web.mac.com\/jsaragih\/FaceTracker\/FaceTracker.html\" rel=\"nofollow\">\u00a0et al.<\/a>)<\/li>\n<li>GAVAM face tracker (developed by\u00a0<a class=\"external-link\" href=\"http:\/\/ict.usc.edu\/profile\/louis-philippe-morency\/\" rel=\"nofollow\">Louis-Philippe Morency<\/a>&#8217;s Multicomp Lab)<\/li>\n<li>Kinect<\/li>\n<li><a class=\"external-link\" href=\"http:\/\/projects.ict.usc.edu\/mxr\/faast\/\" rel=\"nofollow\">FAAST<\/a>\u00a0(developed by\u00a0<a class=\"external-link\" href=\"http:\/\/projects.ict.usc.edu\/mxr\/\" rel=\"nofollow\">Mixed Reality Group<\/a>)<\/li>\n<li><a class=\"external-link\" href=\"http:\/\/www.informatik.uni-augsburg.de\/lehrstuehle\/hcm\/projects\/tools\/ssi\/\" rel=\"nofollow\">SSI<\/a>\u00a0library (developed by\u00a0<a class=\"external-link\" title=\"M.Sc. Johannes Wagner\" href=\"http:\/\/www.informatik.uni-augsburg.de\/lehrstuehle\/hcm\/staff\/wagner\/\" rel=\"nofollow\">Johannes Wagner<\/a>,\u00a0University of Augsburg).<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>MultiSense\u00a0is a perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language. MultiSense currently contains GAVAM, CLM FaceTracker, and FAAST, which can be used with a webcam or Kinect. 
The Toolkit provides an example of how to use the MultiSense framework\u00a0(also known as the multimodal framework, developed [&hellip;]<\/p>\n","protected":false},"author":19,"featured_media":0,"parent":19,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-158","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/pages\/158","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/comments?post=158"}],"version-history":[{"count":0,"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/pages\/158\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/pages\/19"}],"wp:attachment":[{"href":"https:\/\/dialport.ict.usc.edu\/index.php\/wp-json\/wp\/v2\/media?parent=158"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}