International Journal of Multidisciplinary Research Professional (IJMDRP)




Touchless Touch Screen Devices



Open Access                                           


         


Author: Mr. Vrushabh Shivanand Saharkar
Paper ID: IJMDRPV1IS01
Volume & Issue: Volume 01, Issue 01 (November 2020)
Publisher Name: IJMDRP
License: Creative Commons License (International License)






Abstract

It was the touchscreen that initially created a great stir, and touchscreen displays are now ubiquitous worldwide. Gone are the days when you had to fiddle with a touchscreen and end up scratching it. Frequently touching a touchscreen display with a pointing device such as a finger, or scratches caused by heavy use, can result in the gradual desensitization of the touchscreen to input and can ultimately lead to malfunction of the touchscreen. To avoid this, a simple user interface for touchless control of electrically operated equipment is being developed. Elliptic Labs' innovative technology lets you control gadgets such as computers, MP3 players, or mobile phones without touching them. Unlike other systems, which depend on distance to the sensor or on sensor selection, this system depends on hand and/or finger motions: a hand wave in a certain direction, a flick of the hand in one area, holding the hand in one area, or pointing with one finger, for example. The device is based on optical pattern recognition using a solid-state optical matrix sensor with a lens to detect hand motions. This sensor is connected to a digital image processor, which interprets the patterns of motion and outputs the results as signals to control fixtures, appliances, machinery, or any device controllable through electrical signals.


Key Words: Technology, Display, Screen, Touch


Chapter 1. INTRODUCTION


The touchless touchscreen is a technology that uses gesturing as a form of input; there is no need to touch the screen at all. It is a high-end technology that responds to hand waves and hand flicks. The objective behind building such technology is to make it even more comfortable and convenient for users to operate their devices.


It does not require touching the screen; rather, the system detects hand movements in front of it using various sensors. The technology looks visually fascinating and is depicted in sci-fi movies such as Minority Report and Matrix Revolutions. The touchless touchscreen sounds like it would be nice and easy to use; however, on closer examination it looks like it could be quite a workout. This screen was made by TouchKo, White Electronic Designs, and Group 3D. It works by detecting your hand movements, or hand waves in certain directions, in front of it. Touchless touchscreen technology uses finger motions without touching the screen: simply a hand wave in a certain direction, or a flick of the hand in one area. With a conventional touchscreen display, if the screen is cracked we can no longer operate the device by touching it. The purpose of this touchless technology is to make life simpler and more convenient. The system requires a sensor, but the sensor is neither hand-mounted nor present on the screen; it can be placed either on the table or near the screen. The hardware setup is so compact that it can be fitted into a device like a mobile phone or a laptop, and it recognizes the position of an object from as far as 5 feet away.

Touchless touchscreen technology means we can operate the system without using a finger on, or otherwise touching, the device. It is also called "don't touch me" technology. With this technology, we simply draw a pattern to select a tool or delete a tool.

These patterns are stored in a database, and the currently drawn pattern is compared with the stored images; if the pattern matches, the system performs the corresponding action. A touchless display does not require any special sensors worn on the finger or hand: we just point at the screen (from as far as 5 feet away) and can operate the system. In this paper, the working of the touchless display and its applications are described.

Microsoft rebranded a related technology as PixelSense once it introduced its unrelated Surface tablet to consumers. The name "PixelSense" refers to the way the technology actually works: a touch-sensitive protection glass is placed on top of an infrared backlight. As it hits the glass, the light is reflected back to integrated sensors, which convert it into an electrical signal. That signal is referred to as a "value," and those values create a picture of what is on the display. The picture is then analyzed using image processing techniques, and the output is sent to the connected computer.
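As a rough illustration of that values-to-picture step (not Microsoft's actual implementation), the following Python sketch thresholds a small matrix of reflected-light "values" and reports the centroid of each bright region as a touch point; the frame data and threshold are invented for the example.

```python
import numpy as np
from scipy import ndimage  # connected-component labelling

def sensor_values_to_touches(values, threshold=0.6):
    """Turn a 2D array of reflected-light 'values' into touch points.

    Each element is the normalised intensity one IR sensor reports; a
    fingertip on the glass reflects extra light, so touches show up as
    bright regions. Threshold, label the connected bright regions, and
    return one (row, col) centroid per region.
    """
    mask = values > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(values, labels, range(1, n + 1))

# Hypothetical 8x8 sensor frame with one finger near the top-left corner.
frame = np.zeros((8, 8))
frame[1:3, 1:3] = 0.9
print(sensor_values_to_touches(frame))  # ~[(1.5, 1.5)] as (row, col)
```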


1. HISTORY


A. 1980s: The decade of touch

In 1982, the first human-controlled multi-touch device was developed at the University of Toronto by Nimish Mehta. It was not so much a touchscreen as a touch tablet. The Input Research Group at the university figured out that a frosted-glass panel with a camera behind it could detect action, as it recognized the different "black spots" showing up on screen. Bill Buxton played a huge role in the development of multi-touch technology. The touch surface was a translucent plastic filter mounted over a sheet of glass, side-lit by a fluorescent lamp. A video camera was mounted below the touch surface and optically captured the shadows that appeared on the translucent filter (a mirror in the housing was used to extend the optical path). The output of the camera was digitized and fed into a signal processor for analysis. Touchscreens began being heavily commercialized at the beginning of the 1980s. HP (then still formally known as Hewlett-Packard) tossed its hat in with the HP-150 in September 1983.

The computer used MS-DOS and featured a 9-inch Sony CRT surrounded by infrared (IR) emitters and detectors that could sense where the user's finger came down on the screen. The system cost about $2,795, but it was not immediately embraced because it had some usability issues. For instance, poking at the screen would block other IR rays that could tell the computer where the finger was pointing. This resulted in what some called "gorilla arm," referring to the muscle fatigue that came from users holding their hands out for so long. The first multi-touch screen was developed at Bell Labs in 1984. Bill Buxton reports that the screen, created by Bob Boie, "used a transparent capacitive array of touch sensors overlaid on a CRT." It allowed the user to "manipulate graphical objects with fingers with excellent response time." The discovery helped create the multi-touch technology that we use today in tablets and smartphones. In 1984, Fujitsu released a touch pad for the Micro 16 to accommodate the complexity of kanji characters, which were stored as tiled graphics.[15] In 1985, Sega released the Terebi Oekaki, also known as the Sega Graphic Board, for the SG-1000 video game console and SC-3000 home computer. It consisted of a plastic pen and a plastic board with a transparent window where pen presses were detected.


B. 1990s: Touchscreens for everyone!

In 1993, Apple launched a touchscreen PDA device: the Newton PDA. Though the Newton platform had begun in 1987, the MessagePad was the first in the series of devices from Apple to use the platform. As Time notes, Apple's CEO at the time, John Sculley, actually coined the term "PDA" (or "personal digital assistant"). Like IBM's Simon Personal Communicator, the MessagePad featured handwriting recognition software and was controlled with a stylus.

iGesture Pad: Wayne Westerman and his faculty advisor, John Elias, eventually formed a company called FingerWorks. The group began producing a line of multi-touch gesture-based products, including a gesture-based keyboard called the TouchStream. This helped those who were suffering from disabilities like repetitive strain injuries and other medical conditions. FingerWorks was eventually acquired by Apple in 2005, and many attribute technologies like the multi-touch trackpad and the iPhone's touchscreen to this acquisition.









C. 2000s and beyond

With so many different technologies accumulating in the previous decades, the 2000s were the time for touchscreen technologies to really flourish. The 2000s were also the era when touchscreens became the favourite tool for design collaboration.

D. 2001: Alias|Wavefront's gesture-based Portfolio Wall

As the new millennium approached, companies were pouring more resources into integrating touchscreen technology into their daily processes. 3D animators and designers were especially targeted with the advent of the Portfolio Wall, a large-format touchscreen meant to be a dynamic version of the boards that design studios use to track projects. Though development started in 1999, the Portfolio Wall was unveiled at SIGGRAPH in 2001 and was produced in part through a joint collaboration between General Motors and the team at Alias|Wavefront. Buxton, who now serves as principal researcher at Microsoft Research, was the chief scientist on the project. "We're tearing down the wall and changing the way people effectively communicate in the workplace and do business," he said back then. "Portfolio Wall's gestural interface allows users to completely interact with a digital asset." The Portfolio Wall used a simple, easy-to-use, gesture-based interface. It allowed users to inspect images, animations, and 3D files with just their fingers. It was also easy to scale images, fetch 3D models, and play back video.


E. 2002: Mutual capacitive sensing in Sony's SmartSkin

In 2002, Sony introduced a flat input surface that could recognize multiple hand positions and touch points at the same time. The company called it SmartSkin. The technology worked by calculating the distance between the hand and the surface with capacitive sensing and a mesh-shaped antenna. Unlike the camera-based gesture recognition systems in other technologies, the sensing elements were all integrated into the touch surface, which also meant it would not malfunction in poor lighting conditions. The ultimate goal of the project was to transform surfaces used every day, like an average table or a wall, into interactive ones with the use of a nearby PC. However, the technology did more for capacitive touch technology than may have been intended, including supporting multiple contact points. Jun Rekimoto at the Interaction Laboratory in Sony's Computer Science Laboratories noted the advantages of this technology in a whitepaper. He said technologies like SmartSkin offer "natural support for multiple-hand, multiple-user operations": more than two users can touch the surface at a time without any interference. Two prototypes were developed to show SmartSkin used as an interactive table and as a gesture-recognition pad. The second prototype used a finer mesh than the first so that it could map more precise coordinates of the fingers. Overall, the technology was meant to offer a real-world feel of virtual objects, essentially recreating how humans use their fingers to pick up and manipulate objects.






F. 2002-2011: Failed tablets and Microsoft Research's TouchLight

Multi-touch technology struggled in the mainstream, appearing in specialty devices but never quite catching a big break. One almost came in 2002, when Canada-based DSI Datotech developed the HandGear + GRT device (the acronym "GRT" referred to the device's Gesture Recognition Technology). The device's multi-point touchpad worked a bit like the aforementioned iGesture Pad in that it could recognize various gestures and allow users to use it as an input device to control their computers. HandGear also enabled users to "grab" three-dimensional objects in real time, further extending that idea of freedom and productivity in the design process. The company even made the API available for developers via Autodesk. Unfortunately, as Buxton mentions in his overview of multi-touch, the company ran out of money before its product shipped, and DSI closed its doors. Two years later, Andrew D. Wilson, an employee at Microsoft Research, developed a gesture-based imaging touchscreen and 3D display. TouchLight used a rear-projection display to transform a sheet of acrylic plastic into an interactive surface. The display could sense multiple fingers and hands of more than one user, and because of its 3D capabilities it could also be used as a makeshift mirror. TouchLight was a neat technology demonstration, and it was eventually licensed out for production to EON Reality before the technology proved too expensive to be packaged into a consumer device.

G. 2006: Multi-touch sensing through "frustrated total internal reflection"

In 2006, Jeff Han gave the first public demonstration of his intuitive, interface-free touchscreen at a TED conference in Monterey, CA. In his presentation, Han moved and manipulated photos on a giant light box using only his fingertips. He flicked photos, stretched them, and pinched them away, all with a captivating natural ease. "This is something Google should have in their lobby," he joked. The demo showed that a high-resolution, scalable touchscreen was possible to build without spending too much money. Han had discovered that "robust" multi-touch sensing was possible using "frustrated total internal reflection" (FTIR), a technique from the biometrics community used for fingerprint imaging. FTIR works by shining light through a piece of acrylic or plexiglass. The light (infrared is commonly used) bounces back and forth between the top and bottom of the acrylic as it travels. When a finger touches down on the surface, the beams scatter around the point where the finger is placed, hence the term "frustrated." The images that are generated look like white blobs and are picked up by an infrared camera. The computer analyzes where the finger is touching, marks its placement, and assigns a coordinate. The software can then analyze the coordinates to perform a certain task, like resizing or rotating objects.
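A minimal sketch of the FTIR blob-detection step Han describes, using OpenCV in Python; the threshold and minimum blob area below are illustrative assumptions, not values from Han's system.

```python
import cv2

def track_ftir_blobs(ir_frame, background, min_area=30):
    """Find fingertip blobs in one frame from the infrared camera.

    Subtract the static IR background, threshold the bright 'frustrated'
    spots, then return the centroid of each blob large enough to be a
    fingertip rather than sensor noise.
    """
    diff = cv2.absdiff(ir_frame, background)
    _, binary = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches  # list of (x, y) touch coordinates in pixels
```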




In 2007, the Microsoft Surface was essentially a computer embedded into a medium-sized table, with a large, flat display on top. The screen's image was rear-projected onto the display surface from within the table, and the system sensed where the user touched the screen through cameras mounted inside the table looking upward toward the user. As fingers and hands interacted with what was on screen, the Surface's software tracked the touch points and triggered the correct actions. Later in its development cycle, Surface also gained the ability to identify devices via RFID.

In 2011, Microsoft partnered with manufacturers like Samsung to produce sleeker, newer tabletop Surface hardware. For example, the Samsung SUR40 has a 40-inch 1080p LED display, and it drastically reduced the amount of internal space required for the touch-sensing mechanisms. At about 4 inches thick, it was much thinner than its predecessors, and the size reduction made it possible to mount the display on a wall rather than requiring a table to house the cameras and sensors. It cost around $8,400 at the time of its launch and ran Windows 7 and Surface 2.0 software.


CHAPTER 2: WHAT IS A TOUCHSCREEN?

A touchscreen is an input/output device normally layered on top of an electronic visual display. The user gives input or controls the information processing through single- or multi-touch gestures by touching the screen. It enables the user to interact directly with what is displayed, rather than using any intermediate device.


Resistive Touchscreens

One of the most basic systems, used mostly in ATMs, is the resistive touchscreen. It consists of two electrically conductive layers, one resistive and one conductive, separated by spacers that keep them apart until you touch the screen. A scratch-resistant layer on top completes the setup. An electrical current runs through the two layers at all times. When you touch the screen, the two layers are pressed together and the electrical current changes at the point of contact. The change in the electrical field and its coordinates are calculated by the software, which then carries out the function corresponding to that spot. Although this system is durable and consistent, it can only handle one touch at a time, which is why high-end devices are more likely to use capacitive touchscreens.
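As a sketch of how a controller turns that current change into coordinates: in a common 4-wire resistive design, driving one layer with a voltage and sampling the other turns the touch point into a voltage divider. Everything below is illustrative; adc_read is a hypothetical driver function and the 10-bit ADC scaling is an assumption.

```python
ADC_MAX = 1023  # assumed 10-bit analogue-to-digital converter

def read_touch(adc_read, width_px=800, height_px=480):
    """Read one touch point from a hypothetical 4-wire resistive panel.

    Driving the X layer end-to-end and sensing on the Y layer makes the
    contact point divide the X layer into a voltage divider, so the
    sampled voltage encodes the X position; swapping roles gives Y.
    """
    x_raw = adc_read(drive="x", sense="y")
    y_raw = adc_read(drive="y", sense="x")
    return (x_raw / ADC_MAX * width_px, y_raw / ADC_MAX * height_px)
```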




Capacitive Touchscreens


In the capacitive system, a layer that stores electric charge, constructed from materials like copper or indium tin oxide, is used. Sensors at the corners and a protective casing complete the setup. A minute amount of voltage is applied to all four corners of the touchscreen. So how does it work? The human body can act as a capacitor, meaning it can conduct electricity. When a user touches the screen with a finger, some of the charge is transferred to the user. This is sensed at each corner of the screen: the electric current value at each corner differs according to the touch point. This relative difference helps the software find exactly where the touch took place, and it then carries out the function corresponding to that spot. iPhones and most mid-range to high-end smartphones, tablets, and computers use this system.
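A rough sketch of the corner-current interpolation a surface-capacitive controller performs; the formula below is a simplification for illustration, not any vendor's firmware.

```python
def corner_currents_to_point(i_tl, i_tr, i_bl, i_br, width=1.0, height=1.0):
    """Estimate the touch point from the four corner currents.

    The finger draws a small current from each corner, and corners
    closer to the touch supply more of it, so the current ratios
    interpolate the coordinates (x from the left edge, y from the top).
    """
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total * width    # share drawn via the right edge
    y = (i_bl + i_br) / total * height   # share drawn via the bottom edge
    return x, y

# A finger near the top-right corner draws most current through it:
print(corner_currents_to_point(0.1, 0.6, 0.05, 0.25))
# -> (0.85, 0.30): far right, near the top
```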





Infrared Touchscreens


This is a less common and less precise type. It consists of LEDs and light-detecting photocells arranged on opposite sides of the screen. The LEDs shine infrared light in front of the screen, a bit like an invisible spider's web. When a user touches the screen at a certain point, two or more beams are interrupted. This helps the controller find the exact location of the touch, and the corresponding function is carried out. Since it relies only on interrupting beams, an infrared screen works just as well with a finger, a stylus, or even a gloved hand. A capacitive touchscreen, in contrast, will not work with gloves on, since a glove does not conduct electricity. Infrared touchscreens are used in devices such as the Amazon Kindle and Sony eBook readers.
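A minimal sketch of how a controller might locate the touch from the interrupted beams; the beam layout and boolean data format are assumptions made for the example.

```python
def ir_grid_touch(row_beams, col_beams):
    """Locate a touch on an infrared-grid screen.

    row_beams / col_beams are lists of booleans: True means the beam
    from LED to photocell arrived intact, False means it was blocked.
    The touch sits at the crossing of the blocked horizontal and
    vertical beams; averaging handles a fingertip spanning several.
    """
    blocked_rows = [i for i, ok in enumerate(row_beams) if not ok]
    blocked_cols = [j for j, ok in enumerate(col_beams) if not ok]
    if not blocked_rows or not blocked_cols:
        return None  # nothing (or only one axis) interrupted
    y = sum(blocked_rows) / len(blocked_rows)
    x = sum(blocked_cols) / len(blocked_cols)
    return x, y

# A finger blocking column beams 4-5 and row beams 2-3:
print(ir_grid_touch([True]*2 + [False]*2 + [True]*4,
                    [True]*4 + [False]*2 + [True]*2))  # (4.5, 2.5)
```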




Surface Acoustic Wave (SAW) Touchscreens


Surface acoustic wave screens detect fingers using sound instead of light. Ultrasonic waves, too high-pitched for humans to hear, are reflected back and forth across the surface. When the screen is touched, the finger interrupts and absorbs part of the sound beam, and the location of the touch is calculated from where the wave was attenuated.
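In sketch form, the underlying timing calculation can look like this; the wave speed and timing values are illustrative only, not taken from any real controller.

```python
def saw_touch_position(dip_time_s, wave_speed_m_s=3_000.0):
    """Convert the timing of an attenuation dip into a distance.

    The controller sends an ultrasonic burst across the glass and
    records the received echo profile; a touching finger absorbs part
    of the wave, leaving a dip. The dip's arrival time multiplied by
    the wave speed gives the distance along the scanned axis.
    """
    return dip_time_s * wave_speed_m_s

# A dip arriving 50 microseconds after the burst started:
print(saw_touch_position(50e-6))  # 0.15 m along the scanned axis
```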




Chapter 3. WHAT IS A TOUCHLESS TOUCHSCREEN?

Imagine a world of devices you could control with a mere wave of your hands. Such a system was developed by Elliptic Labs. It depends on finger or hand motions, such as a hand wave in a certain direction; with this, your hand does not have to come in contact with the screen. It requires a sensor, which can be placed either near the screen or on the table. Elliptic Labs named it the "Touchless Human or Machine User Interface for 3D Navigation."


Chapter 4: The touchless touchscreen technology is divided into the following parts for its working:

1. Movement detection –

Movement detection is itself a technology that detects a change in the position of an object with respect to its surroundings, or a change in the surroundings with respect to an object. Motion detection can be either mechanical or electronic in nature. It is performed by a system called a motion detector, which registers changes in the positions of objects or their surroundings. The most important element of a motion detector is the sensor located near the screen. It registers changes through interaction with its line of sight, which is interrupted by any sort of motion.
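A minimal sketch of an electronic motion detector of this kind, using frame differencing over a webcam feed with OpenCV; the pixel and area thresholds are illustrative assumptions.

```python
import cv2

def detect_motion(prev_gray, gray, pixel_thresh=25, changed_frac=0.01):
    """Flag motion when enough pixels changed between two frames."""
    diff = cv2.absdiff(prev_gray, gray)
    changed = (diff > pixel_thresh).sum()
    return changed > changed_frac * diff.size

cap = cv2.VideoCapture(0)                 # default webcam
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if detect_motion(prev, gray):
        print("motion detected")
    prev = gray
cap.release()
```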


2. Optical pattern recognition –

Optical pattern recognition is a system based on optical patterns, using a solid-state optical matrix sensor to understand and detect motions.


3. Motion pattern interruption –


Motion pattern interruption centres on the sensor. The sensor is connected to a digital image processor that interprets the motion patterns. The digital image processor then sends signals to the devices, machinery, or appliances, which are in turn controlled through these electrical signals.
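The "interpret the pattern, then emit a control signal" step might look like the following sketch; the pattern names, thresholds, and command codes are hypothetical stand-ins for whatever the attached appliance actually accepts.

```python
# Map recognized motion patterns to the electrical commands that the
# connected device understands (command codes are invented here).
COMMANDS = {
    "swipe_left":  "PREV_TRACK",
    "swipe_right": "NEXT_TRACK",
    "hand_hold":   "PAUSE",
    "flick_up":    "VOLUME_UP",
}

def interpret_motion(dx, dy, duration_s):
    """Classify a tracked hand displacement into a named pattern."""
    if duration_s > 1.0 and abs(dx) < 10 and abs(dy) < 10:
        return "hand_hold"                       # hand held in one area
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "flick_up" if dy < 0 else None

def to_control_signal(pattern):
    return COMMANDS.get(pattern)  # the signal sent on to the device

print(to_control_signal(interpret_motion(dx=+120, dy=5, duration_s=0.3)))
# -> NEXT_TRACK
```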


4. Screen pointing –


The screen-pointing mechanism allows users to point at items and icons present on the screen without physically touching it. The mechanism interprets various human hand gestures to decide what task the user wants to perform.
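A sketch of dwell-based screen pointing, one common way to turn "pointing at an icon" into a selection; the screen size, dwell time, and tracker interface below are all assumptions for illustration.

```python
import time

SCREEN_W, SCREEN_H = 1920, 1080
DWELL_SECONDS = 0.8  # hover this long over an icon to "click" it

def to_screen(finger_x, finger_y):
    """Map normalised [0, 1] tracker coordinates onto display pixels."""
    return int(finger_x * SCREEN_W), int(finger_y * SCREEN_H)

class DwellSelector:
    """Fire a selection when the cursor dwells on one icon long enough."""
    def __init__(self):
        self.target, self.since = None, 0.0

    def update(self, icon_under_cursor):
        now = time.monotonic()
        if icon_under_cursor != self.target:     # moved to a new icon
            self.target, self.since = icon_under_cursor, now
            return None
        if self.target and now - self.since >= DWELL_SECONDS:
            self.since = now                     # avoid repeat-firing
            return self.target                   # "click" this icon
        return None
```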





WaveFlow


The system is capable of detecting movements in three dimensions without the user ever having to put a finger on the screen. Sensors are mounted around the screen; by interacting with the line of sight of these sensors, the motion is detected and interpreted into on-screen movements. The device is based on optical pattern recognition using a solid-state optical matrix sensor with a lens to detect hand motions. This sensor is connected to a digital image processor, which interprets the patterns of motion and outputs the results as signals to control fixtures or any device controllable through electrical signals. It consists of three infrared lasers which scan a surface, and it recognizes the position of an object from as far as 5 feet.





WORKING

The device is based on optical pattern recognition using a solid-state optical matrix sensor with a lens to detect hand motions. This sensor is connected to a digital image processor, which interprets the patterns of motion and outputs the results as signals to control fixtures, appliances, machinery, or any device controllable through electrical signals. The touchless display can detect 3D motions without you putting your fingers on the screen. Sensors are placed around the screen. First, a moving object such as a finger or hand appears in front of the sensor. Light from the scene enters the sensor and hits the pixel matrix, where each pixel converts the incoming light into an electric charge with the help of a photodiode. The sensor then generates electric signals, and these signals are processed to provide output to the user.

This technology uses a kind of artificial intelligence called machine intelligence. It predicts what the user intends to do and follows through with the motion. It uses a gesture tracker, vision-based or RF-based sensors, information about the user, environmental conditions, and even an eye-gaze tracker to determine what the user aims to do. Because you do not need to touch the screen, you may not need a screen at all: this type of touchless technology could use holograms instead of a physical screen. The software-based solution is ready and can easily be incorporated into existing touchscreens and interactive displays.
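As a toy, end-to-end illustration of this pipeline (sensor frame, then bright-region centroid, then gesture), here is a Python sketch; the frame data, thresholds, and gesture labels are invented for the example.

```python
import numpy as np

def centroid(frame, thresh=0.5):
    """Reduce one sensor frame to the centroid of the bright region."""
    ys, xs = np.nonzero(frame > thresh)
    return (xs.mean(), ys.mean()) if xs.size else None

def classify_track(points, min_travel=5.0):
    """Classify the centroid track over a short window as a gesture."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_travel:
        return "hold"                            # hand kept in one area
    if abs(dx) > abs(dy):
        return "wave_right" if dx > 0 else "wave_left"
    return "flick_down" if dy > 0 else "flick_up"

# A hand drifting rightwards across three 16x16 frames:
frames = [np.zeros((16, 16)) for _ in range(3)]
for i, f in enumerate(frames):
    f[6:9, 2 + 4 * i: 5 + 4 * i] = 1.0
track = [centroid(f) for f in frames]
print(classify_track(track))  # wave_right
```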




Touchscreen technology has enhanced many of our devices and made transactions more convenient, but the pandemic has made the world wary of the touchscreen. The obvious natural progression and fix for this issue is a touchless touchscreen. A new patented technology developed by engineers at the University of Cambridge accomplishes just that; they call it "predictive touch." Using a combination of sensor technology and artificial intelligence, it predicts the user's intended target and selects it before the user touches it. In a public setting, this technology can save us from touching displays and contracting and spreading pathogens. We may not realize it, but we touch many different touchscreens in our everyday transactions: ATMs, self-service checkouts at the grocery store, parking meters, and ticketing stations. Even after the pandemic, avoiding these touchscreens can save us from contracting a simple cold or flu. The predictive touch technology was developed by the University of Cambridge in collaboration with Jaguar Land Rover and will make using the screens in your car safer: tests show the technology reduced the time needed to interact with the screen.

Touchscreens have become ubiquitous in everyday life. Not only are they the gateway to our much-loved mobile phones but, until recently, we have been happy to regularly interact with them at everything from ATMs and ticket machines to interactive kiosks and supermarket checkouts. And then COVID-19 struck.


The pandemic has undoubtedly changed the ease with which we touch shared screens, especially once scientists found that SARS-CoV-2, the virus that causes COVID-19, can last on surfaces for up to three days. In fact, an UltraLeap survey of the UK and US public in May this year found that 80% of consumers think touchscreens are unhygienic and 50% would be unlikely to use them in the future. "Consumers around the world are worried about the safety and cleanliness of public touchscreens, such as self-service kiosks, and would be very open to trying new ways of interacting if they are available," explains Catherine Morgan, director of Ocean Labs, a technology division of digital out-of-home firm Ocean Outdoor. "The rapid adoption of contactless payments is one proof point." With this in mind, it should come as no surprise that interest is growing in touchless interfaces.

"Most touchless solutions capitalise on recent advances in sensing and data processing technologies," explains Bashar Ahmad, a senior research associate in the Department of Engineering at the University of Cambridge. "One example is gesture-recognition-based systems, where the user performs a pre-learnt mid-air symbolic gesture such as a swipe or wave and the interface produces a particular response linked to this gesture. These types of systems have generally been quite clunky." New developments in touchless solutions promise something better, however. Ocean Outdoor, for example, has integrated touchless technologies across its portfolio in partnership with US haptics firm UltraLeap.


"UltraLeap has developed incredibly accurate hand tracking, as well as the world's only mid-air haptics, which creates the sense of touch in mid-air using ultrasound," explains Morgan. "The hand tracking technology uses an inexpensive infrared stereo camera, which means we can create a very cheap and effective touchless solution that can replace touchscreens."

Meanwhile, Skrypchuk and Ahmad, along with their colleagues at the University of Cambridge and Jaguar Land Rover, have developed a no-touch touchscreen called "predictive touch", which uses a combination of artificial intelligence (AI) and widely available sensor technology (such as hand/finger tracking) to predict a user's intended target on touchscreens and other interactive displays or control panels, selecting the correct item before the user's hand reaches the display. "Our innovation is completely unique because it has the ability to predict the interface item the user intends to select from their freehand pointing movement, eye-gaze behaviour and other contextual information such as environment, user profile, interface design and historical data such as frequency of use," says Ahmad. "It still uses the intuitive point-select hand/finger motion everyone is familiar with, except that the user need not touch the display."
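The patented predictive-touch algorithm itself is not published here, but the flavour of trajectory-based intent prediction can be sketched: score each on-screen item by how directly the fingertip's recent motion is heading toward it, and pre-select once one candidate clearly dominates. Everything below (the scoring formula, item names, and confidence rule) is an illustrative assumption, not the Cambridge/JLR method.

```python
import numpy as np

def predict_target(finger_pos, finger_vel, items, confidence=0.8):
    """Guess which on-screen item the moving fingertip is aiming at."""
    pos = np.asarray(finger_pos, float)
    vel = np.asarray(finger_vel, float)
    if np.linalg.norm(vel) < 1e-6:
        return None                       # hand not moving: no evidence
    heading = vel / np.linalg.norm(vel)
    scores = []
    for name, item_pos in items.items():
        to_item = np.asarray(item_pos, float) - pos
        d = np.linalg.norm(to_item)
        # cosine alignment between motion and item direction; nearer
        # items win ties via a mild distance penalty
        align = float(heading @ (to_item / d)) if d > 1e-6 else 1.0
        scores.append((max(align, 0.0) / (1.0 + 0.01 * d), name))
    scores.sort(reverse=True)
    best = scores[0]
    runner_up = scores[1] if len(scores) > 1 else (0.0, None)
    # commit only when the best item clearly beats the runner-up
    return best[1] if best[0] * confidence > runner_up[0] else None

items = {"play": (100, 50), "stop": (300, 50), "menu": (200, 200)}
print(predict_target(finger_pos=(95, 150),
                     finger_vel=(0.5, -10.0), items=items))  # -> play
```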





Chapter 5. APPLICATIONS


Touchless monitor

It is specially designed for applications where touch may be difficult, such as for doctors who might be wearing gloves. The display features capacitive sensors that can read movements from up to 15-20 cm away from the screen, and the software translates these gestures into commands. The monitor is based on technology from TouchKo and was recently demonstrated by White Electronic Designs and Tactyl Services at the CeBIT show.


Touchless UI


Microsoft has demonstrated a touchless UI at its Redmond headquarters. It involves many gestures that allow you to take applications and forward them on to others with simple hand movements. After reading a document, for example, you could just push it off the side of your screen.


Touchless SDK


The Touchless SDK is an open-source SDK for .NET applications. It enables developers to create multi-touch-based applications using a webcam for input. Colour-based markers defined by the user are tracked, and their information is published through events to clients of the SDK.
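The Touchless SDK itself is a .NET library, but the idea it implements, tracking a user-defined colour marker with a webcam and publishing its position to client code, can be sketched in a few lines of Python with OpenCV; the HSV range below is an illustrative stand-in for a user-calibrated marker colour, and the real SDK's API differs.

```python
import cv2
import numpy as np

LOWER_HSV = np.array([50, 100, 100])   # roughly a green marker (assumed)
UPPER_HSV = np.array([70, 255, 255])

def find_marker(frame_bgr):
    """Return the (x, y) centroid of the colour marker, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                     # marker not visible this frame
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
while ok:
    pos = find_marker(frame)
    if pos:
        print(f"marker at {pos}")       # notify "clients", event-style
    ok, frame = cap.read()
cap.release()
```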


TouchWall


TouchWall refers to the touchscreen hardware setup itself; the corresponding software, which is built on a standard version of Windows Vista, is called Plex. TouchWall and Plex are superficially similar to Microsoft Surface, a multi-touch computer that was introduced in 2007 and recently became commercially available in select AT&T stores.


References:

1. Wikipedia.

2. GeeksforGeeks, "Touchless Touchscreen Technology", https://www.geeksforgeeks.org/touchless-touchscreen-technology/

3. IRJET research paper, Volume 5, Issue 04, 2018.

4. 123SeminarsOnly, "Touchless Touchscreen Technology", http://www.123seminarsonly.com/EC/Touchless-Touchscreen-Technology.html

5. Colocation America, "Touchless Touchscreen Technology", https://www.colocationamerica.com/blog/touchless-touchscreen-technology

















