Tag: Google Glass

New System That Allows Google Glass to Detect Early Brain Disorders

Imagine Google Glass-style eyewear that can blend augmented reality with your own view and give you live information about your surroundings. A new device developed by researchers may make this possible.

The researchers have developed a Google Glass-based diagnosis system that uses virtual reality (VR) technology to detect neurodegenerative disorders, including multiple sclerosis (MS) and Parkinson’s disease, at an early stage. It consists of a non-contact sensor controller and a mobile platform.

The motion sensor detects changes in the wearer’s posture after the person puts on the glasses and selects a VR mode in which the slope of the virtual environment changes. The system was developed by Tomsk Polytechnic University (TPU) and Siberian State Medical University (SSMU) in Russia.

According to the researchers, a person without any disorder quickly adapts to the VR environment and keeps a stable posture, while a person with a disorder cannot adapt and loses balance. The system has already been tested on about 50 volunteers.

“We have integrated existing devices and developed mathematical models for data analysis. We have also created a human skeleton model and identified 20 key points that the Kinect monitors. The diagnosis reports deviations at these 20 points,” said David Khachaturyan, a scientist from TPU.
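As a rough illustration of what such a per-joint deviation analysis could look like, the Python sketch below compares 20 tracked skeleton points against a baseline posture recorded before the VR slope changes. The joint count comes from the quote above, but the threshold, array shapes, and function names are illustrative assumptions, not TPU’s actual models.

```python
import numpy as np

# Hypothetical sketch: compare 20 tracked skeleton joints against a baseline
# posture and report the average deviation of each joint during the VR test.
NUM_JOINTS = 20          # key points the Kinect tracks per frame (from the quote)
SWAY_THRESHOLD_M = 0.05  # illustrative deviation threshold in metres (assumption)

def joint_deviations(baseline, frames):
    """baseline: (20, 3) joint positions in a stable stance.
    frames: (T, 20, 3) joint positions recorded during the VR test.
    Returns the mean Euclidean deviation of each joint from its baseline."""
    diffs = frames - baseline[np.newaxis, :, :]   # (T, 20, 3)
    dists = np.linalg.norm(diffs, axis=2)         # (T, 20)
    return dists.mean(axis=0)                     # (20,)

def flag_unstable_joints(baseline, frames):
    """Return indices of joints whose average deviation exceeds the threshold."""
    dev = joint_deviations(baseline, frames)
    return np.nonzero(dev > SWAY_THRESHOLD_M)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(size=(NUM_JOINTS, 3))
    # Simulated short recording; a real 10-minute session at 30 fps would be ~18,000 frames.
    frames = baseline + rng.normal(scale=0.02, size=(300, NUM_JOINTS, 3))
    print(flag_unstable_joints(baseline, frames))
```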

“In the experiment, we tested how VR influences people. The procedure took almost 10 minutes. The experiment engaged both healthy people and those in whom doctors had already found disorders,” said Ivan Tolmachov from TPU.

“We have also found out how people with different diseases react to a virtual environment. For instance, people with Parkinson’s disease show hand tremor, which is more pronounced in cases where the central nervous system is affected,” he said.

According to the researchers, the cell death that underlies Parkinson’s disease can begin around the age of 30, but the symptoms typically become noticeable only around the age of 50. To close this gap, the scientists developed methods for early diagnosis of neurodegenerative disease.

The researchers noted that the system still needs clinical trials and requires technical and toxicological certification, a process expected to be completed in 2017.

Google Glass App Projects a Magnified Smartphone Screen for Visually Impaired

Researchers at the Schepens Eye Research Institute have developed a smartphone app that projects a magnified smartphone screen onto Google Glass to provide better visibility for low-vision users.

The team from the Schepens Eye Research Institute of Massachusetts Eye and Ear / Harvard Medical School described an app that improves on the built-in zoom feature of smartphones by projecting the display onto Google Glass, where users navigate with head movements to view the corresponding portion of the magnified screen.
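A minimal sketch of the idea behind head-controlled navigation is shown below: head yaw and pitch are mapped to the position of a small viewport over a magnified screen image. The zoom factor, angle ranges, display resolutions, and function names are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch: pan a viewport across a magnified screen image using
# head yaw/pitch, the basic idea behind head-controlled screen navigation.
ZOOM = 4.0                        # magnification factor (assumption)
SCREEN_W, SCREEN_H = 1080, 1920   # smartphone screen in pixels (assumption)
VIEW_W, VIEW_H = 640, 360         # Glass display resolution, approximate (assumption)
YAW_RANGE = 30.0                  # degrees of head yaw mapped to a full horizontal pan
PITCH_RANGE = 20.0                # degrees of head pitch mapped to a full vertical pan

def viewport_origin(yaw_deg: float, pitch_deg: float) -> tuple[int, int]:
    """Map head yaw/pitch (degrees, 0 = looking straight ahead) to the
    top-left corner of the viewport within the magnified screen image."""
    mag_w, mag_h = SCREEN_W * ZOOM, SCREEN_H * ZOOM
    # Normalise angles to [0, 1] and clamp so the viewport stays on the image.
    nx = min(max((yaw_deg + YAW_RANGE / 2) / YAW_RANGE, 0.0), 1.0)
    ny = min(max((pitch_deg + PITCH_RANGE / 2) / PITCH_RANGE, 0.0), 1.0)
    x = int(nx * (mag_w - VIEW_W))
    y = int(ny * (mag_h - VIEW_H))
    return x, y

# Example: a slight head turn to the right and down
print(viewport_origin(yaw_deg=5.0, pitch_deg=3.0))
```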

“Given the current heightened interest in smart glasses, such as Microsoft’s HoloLens and Epson’s Moverio, it is conceivable to think of smart glasses working independently, without requiring a paired mobile device, in the near future,” said first study author Shrinivas Pundlik.

The concept of head-controlled screen navigation can be useful in such glasses even for people who are not visually impaired, Pundlik wrote in the study, published in the journal IEEE Transactions on Neural Systems and Rehabilitation Engineering.

The Pundlik-led team developed the head-motion application to address the limitations of conventional smartphone screen zooming, which does not provide sufficient context and can be painstaking to navigate.

When people with low visual acuity zoom in on their smartphones, they see only a small portion of the screen at a time and can easily lose track of where that portion sits on the full display, noted senior author Gang Luo.

“The application transfers the image from the smartphone to Google Glass, and users control the screen by moving their heads, which gives them an excellent sense of orientation,” Luo added.

The Schepens Eye Research Institute researchers observed two groups of subjects (one using the head-motion Google Glass application and the other using the built-in zoom feature on a smartphone) and measured the time each took to complete certain tasks.

The researchers stated that the head-based navigation method reduced task time by nearly 28 percent compared with manual scrolling.

Google Glass?

Google Glass, also known as Project Glass, displays information in a hands-free format. The wearer communicates with the Internet via natural-language voice commands.

Microsoft HoloLens?

Microsoft HoloLens is a smart-glasses headset built around a cordless, self-contained Windows 10 computer. It contains various sensors and an HD stereoscopic 3D optical head-mounted display, which enable augmented reality with a user interface the wearer controls through voice commands, hand gestures, and gaze.

“Skull Vibrations” will soon set you free from passwords

Remembering too many passwords may soon no longer be necessary, thanks to a new form of biometrics. After fingerprints and brain prints, it is now the turn of a new biometric developed by three scientists from the University of Stuttgart, Saarland University, and the Max Planck Institute for Informatics. The system, called SkullConduct, uses bone conduction of sound through the user’s skull to identify an individual on devices such as Google Glass or VR headsets.

In tests on a small group of participants wearing Google Glass, the scientists identified individuals with 97 percent accuracy. SkullConduct uses the bone-conduction speaker and microphone already integrated into the eyewear computer and analyses the characteristic frequency response of an audio signal sent through the user’s skull.
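A highly simplified sketch of this kind of analysis appears below: the frequency response is estimated from the played and recorded signals, condensed into a compact feature vector, and matched against enrolled users with a nearest-neighbour comparison. The feature extraction and matching here are illustrative assumptions and much cruder than the published system.

```python
import numpy as np

def frequency_response(played: np.ndarray, recorded: np.ndarray,
                       n_bins: int = 64) -> np.ndarray:
    """Estimate how the skull altered the signal: ratio of the recorded
    spectrum to the played spectrum, pooled into coarse frequency bands."""
    p = np.abs(np.fft.rfft(played)) + 1e-9
    r = np.abs(np.fft.rfft(recorded)) + 1e-9
    response = r / p
    bands = np.array_split(response, n_bins)          # coarse frequency bands
    return np.log(np.array([b.mean() for b in bands]))  # compact feature vector

def identify(feature: np.ndarray, enrolled: dict[str, np.ndarray]) -> str:
    """Nearest-neighbour match of the measured response against enrolled users."""
    return min(enrolled, key=lambda user: np.linalg.norm(feature - enrolled[user]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    white_noise = rng.normal(size=16000)  # 1 s of white noise at 16 kHz
    # Simulate two users' skulls as different fixed filters (pure illustration).
    alice = np.convolve(white_noise, [0.6, 0.3, 0.1], mode="same")
    bob = np.convolve(white_noise, [0.2, 0.5, 0.3], mode="same")
    enrolled = {"alice": frequency_response(white_noise, alice),
                "bob": frequency_response(white_noise, bob)}
    probe = alice + rng.normal(scale=0.01, size=alice.shape)  # noisy re-measurement
    print(identify(frequency_response(white_noise, probe), enrolled))
```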

“If recorded with a microphone, the changes in the audio signal reflect the specific characteristics of the user’s head,” the researchers state in their paper.

Because the structure of each person’s skull is unique, with slight differences that would be almost impossible to replicate, the way sound travels through its bones can serve as an identifier, and that is what the SkullConduct prototype relies on. Bone conduction, another way for a person to receive sound, carries the signal through the bones of the skull, where it picks up the characteristics used to recognise the individual. This is not the first time bone conduction technology has been brought to market: headphones featuring bone conduction have been sold since the early 2000s, and the technology played a crucial role in the raid that killed Osama bin Laden.

“Firstly, the SkullConduct prototype was tested without any background noise, so making the system work effectively in an everyday environment will be the team’s next task. Secondly, the sound – some white noise – could also be annoying to users and needs to be replaced with a short piece of music or a jingle,” the researchers added. SkullConduct is still in its early stages, and serious work remains before it can go on sale: because it was initially tested in a noise-free environment, it still needs to be evaluated in noisy settings, such as crowded internet cafés, to confirm that it remains reliable there.

The researchers’ study concludes: “While other biometric systems require the user to enter information explicitly (e.g., place the finger on a fingerprint reader), our system does not require any explicit user input.” The SkullConduct team will present the work at the Conference on Human-Computer Interaction in San Jose, California, in May.