Design Futures: Tangible Media and the Internet of Things


The Internet of Things (IoT) extends internet connectivity beyond mobile phones and computers to everyday objects across all aspects of human life. The approach relies on sensors attached to objects that measure physical parameters such as pressure, distance between objects, and brightness. The gathered data is communicated wirelessly to the internet for use by relevant applications. The field has engaged researchers for the past ten years, and modern trends indicate that it has shifted from pure engineering into design circles, with the focus moving from technical solutions to users and application areas.
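The sense-then-transmit flow described above can be sketched in a few lines of Python. The device id, field names, and sensor values below are illustrative assumptions, not part of any particular IoT platform; a real device would sample hardware and publish the payload over a wireless link (e.g. via MQTT).

```python
import json
import time

def read_sensor():
    """Placeholder for a real sensor driver; these values are
    illustrative, standing in for hardware measurements."""
    return {"pressure_hpa": 1013.2, "distance_cm": 41.7, "brightness_lux": 320.0}

def build_payload(device_id, readings):
    """Package sensor readings as the kind of JSON message an IoT
    device might transmit wirelessly to an internet application."""
    return json.dumps({
        "device": device_id,
        "timestamp": int(time.time()),
        "readings": readings,
    })

payload = build_payload("sensor-01", read_sensor())
print(payload)
```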

The development of media technology has blended the physical, tangible, material world with the virtual world, and recent technological advances have made it difficult to draw a clear-cut line between the two. Tangible media concerns the design of user interfaces that link humankind with both the virtual world and the physical environment. Recent research in media technology has produced breakthroughs in the tangible media industry, including techniques for sensing and controlling the physical world that do not necessarily rely on conventional Graphical User Interfaces. Research in tangible media is producing a variety of “tangible interfaces” that give physical form to virtual information by coupling the parallel worlds of bits and atoms. The underlying idea has been to turn the “painted bits” of Graphical User Interfaces into “tangible bits.”

One of the recent successes in the realm of tangible media is Trackmate. The idea behind its development was to improve on the traditional use of the mouse and keyboard as input devices to the graphical user interface. Since the inception of the computer, the keyboard and the mouse have been the main input devices, and these two interaction techniques have remained largely unchanged for the past forty years. Both are highly generic interfaces, which has enabled their use for virtually all computing tasks. The downside of this generality, however, is an inherent lack of the specialized functionality or feedback needed for highly specialized tasks (Queensland University of Technology, 2011).

Indeed, the main disadvantage of a generic user interface is that no matter how carefully it is designed, it cannot deliver optimal performance for any particular task. Tools designed for general purposes fall short in highly specialized procedures. Consider seats designed for a theatre hall, which are meant to fit the average body: their comfortable, optimal use is limited to people of average size, and they serve much slimmer or plus-sized people poorly. The same holds for generic interfaces; they are ill-suited to highly specialized processes, which highlights the need for specialized equipment.

Trackmate is a tangible tracking system developed as an open-source, do-it-yourself initiative. The Trackmate tracker is a media application that enables computers to identify tagged objects placed on a surface and to report attributes such as their position, orientation, and color. These physical attributes are communicated to the computer through LusidOSC, a protocol layer for unique spatial input devices (Queensland University of Technology, 2011). Trackmate attempts to overcome the issues that commonly hold back tangible media interfaces, such as high cost and complex system architecture, while encouraging compelling application design and broad community involvement.

Community Core Vision is another vital development in the media industry. The application is handy for computer vision and multi-touch sensing. Although the mechanics behind the platform are complex, in simple terms it turns an input event into an output event: a video stream enters the computer, is processed, and the computer outputs physical data about the tagged objects in the video, such as position (coordinates relative to a given datum) and blob size. The application also tracks the touch events used in multi-touch applications, such as finger down, moved, and released.
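The finger-down, moved, and released events described above can be derived by comparing tracked blobs across consecutive frames. The sketch below is a simplified illustration of that idea, not Community Core Vision's actual tracker; it represents each frame as a mapping from blob id to (x, y) position, whereas the real system also reports blob size.

```python
def touch_events(prev, curr):
    """Derive multi-touch events by comparing the blobs tracked in
    two consecutive frames. `prev` and `curr` map blob id -> (x, y)."""
    events = []
    for bid, pos in curr.items():
        if bid not in prev:
            events.append((bid, "finger down", pos))   # new blob appeared
        elif pos != prev[bid]:
            events.append((bid, "moved", pos))         # existing blob moved
    for bid in prev:
        if bid not in curr:
            events.append((bid, "released", prev[bid]))  # blob disappeared
    return events

frame1 = {1: (10, 20)}
frame2 = {1: (12, 22), 2: (50, 60)}
print(touch_events(frame1, frame2))
# blob 1 moved; blob 2 is a new finger down
```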

Community Core Vision is a significant development in the media industry. The application can interface with different networked cameras and video devices at the same time, as well as with various TUIO/OSC/XML applications, which enables it to support several multi-touch lighting techniques. The project was established and is maintained by the Natural User Interface Group community, which has equipped the application with a range of features of immense value to the media industry. Community Core Vision ships with filters that dynamically adjust the background and the threshold. The filters are not specialized to one optical setup, and additional filters can be installed for optimization purposes. The current filters let the tracker work with both dark blobs and light blobs at the same time, which makes invert filters unnecessary.
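Why the same filter catches both dark and light blobs without an invert step can be shown with a toy background-subtraction sketch. This is an illustrative assumption about how such a filter works in principle, not CCV's actual filter code; frames are single-channel lists of 0-255 values.

```python
def detect_blob_pixels(frame, background, threshold):
    """Flag pixels whose absolute difference from the stored background
    exceeds the threshold. Because the difference is absolute, pixels
    darker AND brighter than the background are both caught, so no
    separate invert filter is needed."""
    return [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[128, 128], [128, 128]]
frame      = [[ 30, 128], [128, 220]]   # one dark blob, one light blob
print(detect_blob_pixels(frame, background, 50))
# both the dark (30) and the light (220) pixel are flagged
```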

Community Core Vision also includes a camera switcher, which is very handy when several cameras are installed on the same computer: the application lets the user switch between them without exiting the application, a convenience when developing security and media camera/computer interfaces. Another feature is the input switcher, which enables the user to create test videos and work with them instead of a live camera, activated at the press of a button (TUIO ORG, 2010).

Community Core Vision also provides an input flipper, which lets the user rotate the camera image clockwise or anticlockwise to match the required alignment. The dynamic mesh calibration feature allows for the dynamic setup of tables: calibration points can be added to cater for large displays, while fewer points suffice for smaller ones. The warping feature is used when only a rough calibration is required; by moving four points on the source image, the image is warped onto the projection area, enabling quick calibration of the setup.
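The four-point rough calibration described above can be illustrated with a simple bilinear mapping: given the four corner points the user has positioned, any normalized camera coordinate is interpolated into the projection area. This is a sketch of the general technique under that assumption, not CCV's actual warping code (which may use a full perspective transform).

```python
def warp_point(u, v, corners):
    """Map a normalized camera coordinate (u, v) in [0, 1]^2 into the
    projection area by bilinear interpolation between four calibration
    corners: (top_left, top_right, bottom_left, bottom_right)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # interpolate along the top and bottom edges, then between them
    top_x = x0 + (x1 - x0) * u
    top_y = y0 + (y1 - y0) * u
    bot_x = x2 + (x3 - x2) * u
    bot_y = y2 + (y3 - y2) * u
    return (top_x + (bot_x - top_x) * v,
            top_y + (bot_y - top_y) * v)

# four corners dragged into place during rough calibration (example values)
corners = ((100, 50), (700, 60), (90, 500), (710, 520))
print(warp_point(0.5, 0.5, corners))  # centre of the camera image
```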

The TUIO broadcasting feature enables the direct transfer of OSC TUIO messages from the configapp, eliminating the need for non-GUI modes or versions of the application and making message sending much faster. The feature also allows loading a separate file, instead of the normal exit-and-restart process, when quick testing is needed.
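The OSC messages being broadcast have a simple, well-defined wire format: a null-padded address string, a type-tag string, then big-endian arguments. The encoder below is a minimal sketch of that format for int and float arguments only; the `/tuio/2Dcur` address and the example arguments (session id, x, y) are illustrative, and this is not the configapp's own code.

```python
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC message (int32 and float32 arguments only),
    the kind of packet a TUIO tracker broadcasts over UDP."""
    typetags = "," + "".join("i" if isinstance(a, int) else "f" for a in args)
    body = b"".join(struct.pack(">i", a) if isinstance(a, int)
                    else struct.pack(">f", a) for a in args)
    return osc_string(address) + osc_string(typetags) + body

# e.g. a TUIO-style cursor message: session id 42 at normalized (0.25, 0.75)
msg = osc_message("/tuio/2Dcur", 42, 0.25, 0.75)
print(len(msg))
```

Sending the result is then a single UDP `sendto` to the listening application, which is what makes broadcasting directly from the configapp fast.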

Kinect is a camera technology developed by the Microsoft Corporation for the Xbox console. It receives and interprets three-dimensional physical data from an infrared pattern actively projected onto the tagged area. The setup consists of a Kinect sensor comprising an RGB camera, a depth sensor, and a multi-array microphone running proprietary software, which allows the device to capture whole-body movements and provides facial recognition and voice capture abilities. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which together capture image motion in three dimensions under ambient light conditions. The range of the sensor is adjustable, achieved by calibrating the sensor to the game play and the physical environment of the user.
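The adjustable range mentioned above amounts to keeping only the depth pixels that fall inside a calibrated play zone. The sketch below illustrates that idea on a toy depth map; the millimetre values and range limits are assumptions for illustration, not Microsoft's proprietary pipeline.

```python
def in_play_range(depth_map, near_mm, far_mm):
    """Mark depth pixels inside the calibrated play range, a first step
    toward isolating the player from the rest of the room. Values are
    depths in millimetres; 0 marks pixels the sensor could not resolve."""
    return [[d > 0 and near_mm <= d <= far_mm for d in row]
            for row in depth_map]

# toy 2x3 depth map: 0 = unresolved, others = distance in mm
depth = [[0, 1500, 3200],
         [1800, 2100, 4000]]
print(in_play_range(depth, 800, 3500))
# only pixels between 0.8 m and 3.5 m survive
```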

The software is a revolution in a gaming industry that has been dominated by game pads as the main input devices for graphical user interfaces; Microsoft, through Xbox, is the first company to ship this feature with its games. The system allows the console to sense the gamer's movements and mirror them in the graphical user interface. Game pads limited the user to the hands as the main communication channel, yet human beings are multi-modal: we experience the physical world through all five senses, namely sight, hearing, touch, smell, and taste. Restricting input to the hands therefore impoverishes our observation and experience of the virtual world.

Although the Kinect software cannot yet sense through all the modalities available to humans, it can sense motion in three dimensions and perform facial and voice recognition. This gives the user a much larger area of engagement instead of being restricted to the couch, as with game pads (XBOX, 2011).



Queensland University of Technology. (2011). Trackmate. Web. Retrieved from

TUIO ORG. (2010, April 1). “Tangible media and the internet of things.” Community Core Vision. Web. Retrieved from

XBOX. (2011). Xbox Kinect. Web. Retrieved from


