At the Samsung Developer Conference 2021 (SDC21), Samsung Electronics is presenting its consumer-centric approach to innovation in partnership with its developer communities. To learn more about the collaborative technologies set to transform users’ lives by making them richer and more convenient, Samsung Newsroom sat down with SDC21 session speakers.

Voice Recognition With Bixby: More Convenient Than Touch

Artificial intelligence (AI) enhances the performance of all kinds of devices essential to our daily lives, including smartphones, tablets, TVs and more. In particular, AI-powered voice recognition facilitates more seamless interactions between these devices. Principal Engineer Joohwan Kim from the AI Client Development Group, Mobile Communications Business at Samsung Electronics, shares below how the Bixby voice assistant platform is set to enable more convenient experiences – using just your voice.

Q: What does the newly updated Bixby voice assistant platform look like?

Bixby aims to be the best voice interface, able to control multiple Samsung devices using only a small amount of data. With this in mind, when developing Samsung’s latest devices we focused on delivering a user experience that only Galaxy can offer. For example, Galaxy Z Flip3 provides a Bixby function that lets users utilize the Cover Screen, and the Galaxy Watch4 series includes hands-free functionality.

Bixby has advanced in many ways as an AI platform, and, in terms of the engine, it supports Automatic Speech Recognition (ASR) on-device. Because it processes users’ voices immediately from within the device itself, it is fast and secure in terms of privacy protection. Moreover, the development of Bixby Home Platform further expands the range of supported devices. What used to require multiple voice commands is now possible with only a single command that the device understands and performs accordingly.

Q: What are the areas of possible collaboration with developers around the world for the Bixby Platform?

There are several ways third-party developers can participate in Bixby. They can develop various voice services using Samsung’s AI engine in our smartphones, watches and Family Hub. This type of service, called a capsule, is provided through the Bixby Developer Center. We have also updated the developer tools to make debugging and natural language learning easier, and to allow convenient use of various UI components, all of which will be presented at SDC21.

The update being announced will make it possible for developers to harness multiple contexts when adding devices. Smarter device control has been made possible by Bixby Home Platform, an intelligent layer that sits between Bixby Voice – which provides natural language understanding and issues commands – and SmartThings’ Internet of Things (IoT) layer – which provides connection and operation between devices. Commands that previously had to be made individually for separate devices can now be carried out more efficiently through the Bixby Home Platform. We are introducing this tool at SDC21 and are working to make it available to third-party developers in the future.
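
As a rough illustration of the layering described above, the sketch below routes a single utterance through a toy natural language step, an intelligent planning layer that expands one intent into per-device actions, and a device-command layer. All function names, intents and device mappings here are hypothetical and for illustration only; they are not the real Bixby or SmartThings APIs.

```python
# Hypothetical sketch of the layered flow: one utterance passes through
# natural language understanding (the Bixby Voice role), an intelligent
# routing layer (the Bixby Home Platform role), and finally per-device
# commands (the SmartThings IoT role). All names are illustrative.

def understand(utterance: str) -> str:
    """Toy NLU step: map an utterance to a single high-level intent."""
    intents = {
        "good night": "sleep_mode",
        "movie time": "movie_mode",
    }
    return intents.get(utterance.lower().strip(), "unknown")

def plan(intent: str) -> list[tuple[str, str]]:
    """Intelligent layer: expand one intent into per-device actions."""
    routines = {
        "sleep_mode": [("lights", "off"), ("tv", "off"), ("thermostat", "eco")],
        "movie_mode": [("lights", "dim"), ("tv", "on"), ("soundbar", "on")],
    }
    return routines.get(intent, [])

def execute(actions: list[tuple[str, str]]) -> list[str]:
    """IoT layer: 'send' each command and report what was done."""
    return [f"{device}:{command}" for device, command in actions]

def handle(utterance: str) -> list[str]:
    """One spoken command fans out into several device operations."""
    return execute(plan(understand(utterance)))
```

The point of the middle layer is visible in `handle`: the user says one thing, and the platform, not the user, decides which devices must act and in what way.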

Enjoy More of the Content You Love: Samsung Smart TV’s Tizen

In the past, TVs were evaluated only on their resolution and design, but with changing lifestyles and user demands, the content a TV can provide has become paramount. Thanks to the Tizen platform, Samsung Smart TVs are able to provide users with a wide range of content to suit any taste. Ju-hyun Choi, a Staff Engineer at the Software Development Group, Visual Display Business at Samsung Electronics, describes below how Tizen has evolved to provide partners with an optimal development environment offering wide compatibility and stability.

Q: What are the new functions of the Tizen Web Platform for Samsung Smart TVs?

Through a powerful web standard technology called WebAssembly, high-performance web apps can now be developed for Samsung Smart TVs. For SDC21, we have prepared an upgrade plan that allows previously released Smart TVs to receive the very latest web features.

Q: The competitive edge held by Tizen-based display products has been strengthened thanks to the cooperation of partners from various fields. What are your future collaboration plans?

We are currently working on web-based cloud games, with the goal of allowing users to play the latest games on Samsung Smart TVs equipped with the Tizen platform – all without the latest consoles or expensive graphics cards. To achieve this, we are working closely with our current service partners. The Tizen platform, an open source-based operating system (OS), is open to everyone, and we are working to provide developers with the best possible development environment, one that offers wide compatibility and a rich set of development tools.

A Must for Innovative User Experiences: On-Device Personalized Neural Networks

Recently, on-device AI technology has been garnering attention. In the past, a high-performance cloud server capable of performing complex calculations was required in order to run AI-based apps, but when the neural network used in AI apps runs within the device, many advantages for users emerge. Jihoon Lee, an Engineer at the On-Device Lab at Samsung Research, explains below the role of on-device AI and cooperation with developers.

Q: On-device AI is considered a must for powerful and innovative consumer experiences. What kind of benefits will users enjoy from using it?

The biggest advantage offered by on-device AI is that users’ privacy is maintained, since all processing occurs on the device. Developers, too, will be able to provide services that are much closer to users while remaining free from privacy concerns. For example, a camera app that harnesses facial recognition will not share sensitive data with a server if the AI runs on-device.

Another advantage is that on-device AI models can be used without an Internet connection or any mobile data consumption. For a model that must remain on constant low-power standby, for example, requiring an Internet connection or ongoing data consumption would be a major limitation. On-device AI also improves application experiences: since there is no need to send data to a server and wait for results, response times are reduced.

Thirdly, users can create custom AI directly on their own devices. Previously, personalizing an already-deployed AI model on the device itself was unexplored, as there was no framework to help train a deep-learning model on-device. Moving forward, we expect that NNTrainer, which we are presenting at SDC21, will play a big role in better recognizing a user’s face, their accent and even their pets to automatically create photo albums based on their furry friends – and more.
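
To make the idea of on-device personalization concrete, here is a minimal sketch that adapts a deployed model’s parameters to a user’s own data with plain gradient descent, entirely in local memory and with no server round trip. It is a toy illustration of the concept NNTrainer enables, not NNTrainer’s actual API; the model, data and hyperparameters are all hypothetical.

```python
# Toy on-device personalization: fine-tune a shipped model's parameters
# on the user's own examples with full-batch gradient descent on mean
# squared error. Everything stays in local memory; nothing is uploaded.

def fine_tune(weight: float, bias: float,
              xs: list[float], ys: list[float],
              lr: float = 0.05, epochs: int = 200) -> tuple[float, float]:
    """Adapt (weight, bias) of a linear model y = weight*x + bias
    to the user's (xs, ys) pairs and return the personalized values."""
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = (weight * x + bias) - y   # prediction error on one example
            grad_w += 2 * err * x / n       # d(MSE)/d(weight)
            grad_b += 2 * err / n           # d(MSE)/d(bias)
        weight -= lr * grad_w
        bias -= lr * grad_b
    return weight, bias

# A generic "factory" model (weight=1, bias=0) adapted to one user's
# behavior, which here follows y = 2x + 1:
w, b = fine_tune(1.0, 0.0, xs=[0.0, 1.0, 2.0, 3.0], ys=[1.0, 3.0, 5.0, 7.0])
```

After fine-tuning, `(w, b)` have moved close to the user-specific values `(2, 1)` using only data that never left the device, which is the essence of the personalization described above.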

Q: In the future, what kind of synergies can be expected in this field through cooperation with the developers present at SDC?

Ultimately, we hope to bring users closer to AI through on-device training and personalization. We look forward to developers who attend SDC21’s sessions recognizing and utilizing NNTrainer in order to provide AI that is closer to users. Through such collaboration, NNTrainer, currently in its initial stages, will become an even more mature framework. Since it is being developed as open source, anyone can find the project and contribute directly.
