Someday semantic lighting—a new kind of smart system under development—might enhance the vibrancy of your home’s colors, superimpose your shopping list on your refrigerator, and detect the temperature of your tea.
The system is being designed to use lasers instead of LEDs, and will employ multispectral cameras and microphones embedded in the fixtures to perform such tasks. It will be able to do all that without embedding sensors in everyday objects to connect them to the Internet.
IEEE Fellow Zary Segall is developing a semantic lighting system through his company Selitera, based in Stockholm. Although he is focused on building it for retail stores, he has developed a prototype of a home system in collaboration with Ikea. The technology is still a few years away, he says, but it’s in our future and could improve our quality of life.
The Institute asked Segall to explain the technology and its potential benefits.
Explain semantic lighting.
These lights will essentially work like an overhead projector, only with all the components inside a bulb. The projector could send light in every direction—360 degrees—and superimpose colors and 3-D images all around us. People would not have to wear headsets or look through a phone screen to have this experience. The information would be processed through the lighting system’s software, which relies on a cloud-based artificial-intelligence platform.
Can you give some examples of how that might be useful in the home?
Semantic lights will be able to recognize objects around us without having to add sensors to connect them to the Internet. Imagine you have heirlooms from your grandmother, such as wine glasses, but you don’t want to put sensors on them—which is required for the Internet of Things. The lights would still be able to detect that they are glasses and make them “smart” by, for example, enhancing the color of the wine and superimposing text around the glass that tells people what type of wine they are drinking.
In England, one of the tests for elderly people to live independently in their homes requires them to prove they are able to perform certain tasks, like making a cup of tea. Semantic lights have demonstrated that they can help by spotlighting the items needed to make tea—a cup, a tea bag, a kettle, water, and sugar—and superimposing step-by-step directions on the kettle, such as how much water to use. The system can also overlay information such as the water temperature.
Another feature is creating ambience in the home. The system could, for example, recognize what is on the kitchen table and automatically adjust the lights accordingly. If there’s a newspaper and a cup of orange juice on the table, the lights assume people are eating breakfast and the system switches the spectrum of light to a daylight setting, making the room feel sunny, even if it’s overcast outside. Similarly, if there’s a candle and wine on the table, the lights dim slightly.
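The ambience behavior described above amounts to a mapping from detected objects to a lighting scene. The sketch below is a purely illustrative simplification under stated assumptions: the object labels, scene names, and `choose_scene` function are hypothetical, not part of Selitera’s actual software, which would derive labels from its multispectral cameras and AI platform.

```python
# Hypothetical sketch of the rule-based ambience logic described above.
# Object labels and scene names are illustrative assumptions; a real
# semantic-lighting system would infer them from camera data.

SCENE_RULES = [
    # (objects that must all be on the table, lighting scene to apply)
    ({"newspaper", "orange juice"}, "daylight"),  # breakfast: sunny feel
    ({"candle", "wine"}, "dimmed"),               # dinner: lower the lights
]

def choose_scene(detected_objects, default="neutral"):
    """Return the first scene whose required objects are all present."""
    detected = set(detected_objects)
    for required, scene in SCENE_RULES:
        if required <= detected:  # subset test: all required items seen
            return scene
    return default

print(choose_scene({"newspaper", "orange juice", "plate"}))  # daylight
print(choose_scene({"candle", "wine"}))                      # dimmed
print(choose_scene({"laptop"}))                              # neutral
```

A production system would presumably learn such associations rather than hard-code them, but a rule table makes the context-aware idea concrete.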
Describe your prototype for Ikea.
We built a lamp that recognizes objects on a dining room table and interacts with them. The lamp emits various spectrums of light on each item to, say, enhance the colors of the plates and make the vegetables greener. The lights can also detect the temperature of the soup inside a bowl and warn diners if it’s too hot to eat. The lamp can also overlay a list of the ingredients in a meal.
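The too-hot-to-eat warning is, at its core, a threshold check on a measured temperature. The snippet below is a minimal sketch, assuming a hypothetical comfort threshold and message format; the real lamp infers temperature optically, and none of these names come from the Ikea prototype.

```python
# Hypothetical sketch of the soup-temperature warning described above.
# The 60 °C threshold is an illustrative assumption, not a spec from
# the prototype, which measures temperature via multispectral imaging.

SAFE_EATING_TEMP_C = 60.0  # assumed comfort threshold

def temperature_warning(temp_c, limit=SAFE_EATING_TEMP_C):
    """Return the overlay message to project next to the bowl."""
    if temp_c > limit:
        return f"Too hot to eat ({temp_c:.0f} °C) - let it cool"
    return f"Ready to eat ({temp_c:.0f} °C)"

print(temperature_warning(85.0))
print(temperature_warning(55.0))
```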
The applications you mention sound comparable to augmented reality. Are they?
Yes. Our technology could be compared to Google’s Project Tango, an augmented reality computing platform. But Tango requires users to wear a headset or use a mobile device. One of Tango’s applications is a virtual cat that can jump on your furniture, but the only way to see the cat is by viewing it through your mobile screen. Our system would use light to project a cat into the room, and automatically place the pet where it makes sense, such as on the couch. In essence, we are connecting physical and digital objects through light.
What gave you the idea to develop semantic lighting?
We get a lot of information through light—colors, shapes, and so on—and I was intrigued by this concept. I have a background in wearable computing, so I started thinking about whether light can be smart and perceive physical objects and our relationship to them. That was in 2004, when I started working on semantic lighting. Today, semantic lighting must be three things: human-aware, context-aware, and task-aware. This way it can understand what’s going on in our physical space and add digital information to improve our quality of living.
When do you think semantic lighting will make its way into our homes?
The technology can already be found in certain applications. My company, Selitera, is using it for its Klikify system, which connects shoppers’ mobile phones to digital advertisements through light. Consumers can get information about a pair of jeans displayed on an ad, for example, just by pointing their phone at it.
Semantic lighting is quite expensive, which is why the systems are not yet widely produced and implemented. But as the technology begins to be mass produced in a few years’ time, it will become more affordable and increasingly adopted for applications in health care, smart homes, smart cities, and others that we may not even be able to imagine right now.