Sponsored content by Congatec AG

Function toolkit for automated retail checkout systems

Technology fusion offers more than the sum of its parts

congatec, Basler and NXP Semiconductors have developed a function toolkit for deep learning applications in retail. The platform is a proof of concept that uses artificial intelligence (AI) to fully automate the retail checkout process.
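One way to picture such an automated checkout is a short pipeline from camera frames to an itemized total. The sketch below is purely illustrative: the product names, prices and the `recognize` stub are invented for the example, and the actual AI vision step is replaced by a placeholder.

```python
from collections import Counter

# Hypothetical price list; a real system would query the retailer's database.
PRICES = {"apple": 0.40, "banana": 0.25, "milk": 1.10}

def recognize(frames):
    """Placeholder for the AI vision step: in a real platform a trained
    model maps camera frames to product labels without bar or QR codes."""
    return [frame["label"] for frame in frames]  # stand-in for inference

def invoice(frames):
    """Tally the recognized goods into an itemized list and a total."""
    items = Counter(recognize(frames))
    lines = [(name, n, n * PRICES[name]) for name, n in sorted(items.items())]
    total = round(sum(amount for _, _, amount in lines), 2)
    return lines, total

frames = [{"label": "apple"}, {"label": "banana"}, {"label": "apple"}]
lines, total = invoice(frames)
print(lines, total)  # two apples and one banana -> 1.05
```

The interesting engineering is of course entirely inside `recognize`; everything after it is conventional point-of-sale logic, which is why such a vision platform can slot into existing checkout systems.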
The toolkit demonstrates what vision technologies can do in embedded applications and how they can simplify our daily lives. The kit is application-ready, providing everything necessary for training automated checkout systems. It has already been trained to recognize goods on video without the use of bar or QR codes, so goods such as fruit or vegetables, which cannot be identified by a code, can now also be checked out. The kit can further generate a symbolic invoice total. This illustrates that the modular system has all the basic features needed for integration into existing checkout systems that also handle payment functions.

Such vision-based systems open up new perspectives for retail applications, particularly if new products can be added to the range easily. Retailers benefit from lower labor costs and a significantly improved shopping experience: instant checkout, shorter queues and 100% checkout capacity at all times – even when the store is open 24 hours a day.

However, providing such solutions requires preparatory work that OEMs serving the retail market cannot accomplish from a standing start. They therefore need partners, such as the congatec, Basler and NXP team, who collaboratively provide application-ready platforms for the integration of camera and AI technologies at the embedded level. The congatec Basler Vision System with AI from Irda Labs is the first development resulting from the collaboration between Basler and congatec. An AI solution based on sparse modeling is currently in development and is expected to be ready for launch at Embedded World 2020. The effort this involves is not significantly different from the effort required to integrate other peripheral components.
So it shouldn’t really present any major challenges – were it not for the need to integrate additional AI technologies that do not require costly and lengthy training in server farms, but instead get by with just a few images and can even be trained in the embedded system itself. There is also invariably some effort associated with developing application-ready platforms on the basis of ARM technologies, since these must be adapted to application-specific requirements. So regardless of which processor technology is used, OEMs always need to bring the sum of the individual parts to series maturity as smoothly as possible. Ideally, they find a supplier who can provide specific solution platforms that already offer more than the sum of the individual components, allowing them to concentrate fully on new application development.

Heterogeneous solutions offered by processor manufacturers

The challenges begin with the integration of MIPI-CSI based camera technologies, for example. While these are standard for ARM-based platforms, x86 platforms require special integration effort. AMD and Intel also have quite different software support strategies for AI technologies. As with OpenVX/OpenCV, AMD relies on open source solutions such as ROCm and TensorFlow to support the heterogeneous use of embedded computing resources needed for deep learning inference algorithms. Intel, on the other hand, offers customers a distribution of the OpenVINO toolkit that optimizes deep learning inference while also supporting many calls to traditional computer vision algorithms implemented in OpenCV – in other words, a fully integrated package. Ultimately, by supporting FPGAs and the Intel Movidius Neural Compute Stick, Intel aims to present in-house alternatives for inference systems rather than relying only on the expensive GPUs from AMD or Nvidia.
Caption 1: Smart embedded vision platforms with AI-based situational awareness are composed of many small function blocks whose interoperability must be validated.

NXP offers answers for the use of AI as well, with its eIQ Machine Learning Software Development Environment. Alongside the automotive segment, this also targets the industrial environment. It includes inference engines, neural network compilers, vision and sensor solutions, and hardware abstraction layers, providing all the key components required to deploy a wide range of machine learning algorithms. Based on popular open source frameworks that are also integrated into the NXP development environments for MCUXpresso and Yocto, eIQ is available as an early access release for i.MX RT and i.MX.

Embedded computing platforms must match the solution

As these three different AI approaches from the semiconductor manufacturers clearly indicate, OEMs face different implementation requirements depending on the chosen solution path. In any case, the embedded computing hardware must be prepared for whichever software solution is used, and this requires careful selection of the individual hardware components – which is why cooperation between semiconductor manufacturers and embedded computing providers is so crucial. By working with companies such as congatec, which is among the leading providers in this field and has already presented application-ready bundles based on solutions developed in collaboration with semiconductor manufacturers, OEMs can rest assured that the vital homework has already been done. However, AI implementations are only valuable to the degree that they interoperate with the appropriate embedded vision technologies. For this reason, congatec has entered into a cooperation with Basler that aims to offer customers perfectly matched components for embedded vision applications.
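One common way application code copes with three divergent vendor stacks (ROCm/TensorFlow, OpenVINO, eIQ) is to hide them behind a thin hardware abstraction layer, so the accelerator can change without touching checkout logic. The sketch below shows only that pattern; the class names and dummy backends are invented and do not correspond to any vendor API.

```python
from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    """Hypothetical hardware abstraction layer: each vendor stack
    would sit behind an implementation of this interface."""
    @abstractmethod
    def infer(self, image):
        ...

class CpuBackend(InferenceBackend):
    def infer(self, image):
        return {"label": "demo", "device": "cpu"}

class VpuBackend(InferenceBackend):
    def infer(self, image):
        # stand-in for an accelerator such as a Movidius stick or FPGA
        return {"label": "demo", "device": "vpu"}

BACKENDS = {"cpu": CpuBackend, "vpu": VpuBackend}

def load_backend(name: str) -> InferenceBackend:
    """Select the accelerator at deployment time; the calling
    application code stays unchanged."""
    return BACKENDS[name]()

result = load_backend("vpu").infer(image=None)
print(result["device"])
```

The payoff is exactly the scalability argument the article makes: swapping the compute component, or the vendor's inference stack behind it, becomes a configuration choice rather than a rewrite.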
Two very similar application platforms have already emerged from this cooperation: one with NXP technology, the other based on Intel processors.

Caption 2: The vision-based retail deep learning platform from congatec, NXP and Basler automatically recognizes goods and can fully automate the checkout process in the retail sector.

Three different solutions from the same source

The smart embedded image recognition platform based on Intel technology recognizes faces and can analyze them by age and mood. It is based on Basler’s USB 3.0 dart camera module and conga-PA5 Pico-ITX boards with 5th generation Intel Atom, Celeron and Pentium processors. congatec will also integrate the pylon Camera Software Suite as standard software into suitable kits. The NXP solution platform – which will be available from Basler later this year – targets deep learning applications in retail to fully automate the checkout process. It recognizes packaging via an AI inference system and is based on a Basler Embedded Vision Kit featuring an NXP i.MX 8QuadMax SoC on a conga-SMX8 SMARC 2.0 Computer-on-Module from congatec, a SMARC 2.0 carrier board and Basler’s dart BCON for MIPI 13 MP camera module. While the two applications are rather similar, they use highly heterogeneous components whose interoperability must be validated so that the OEM solution can reach series production as smoothly as possible.

In addition, congatec has now also integrated an early AI solution into its Intel Atom-based platforms. It is based on sparse modeling and requires only a small number of images (approximately 50) for training, which can therefore be done in the embedded system itself. Retailers wishing to add new products to their range would require only about 50 different product images – for example, from a shopping basket or a checkout conveyor belt – to train their systems to recognize the products.
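To give a loose intuition for why training from roughly 50 images can work on-device, the sketch below classifies a feature vector by which class's small sample "dictionary" reconstructs it best. This is only a toy analogue: a least-squares residual stands in for a true sparse code, all data is synthetic, and none of this is the actual Irda Labs algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class_samples(center, n=50, dim=16, noise=0.05):
    """Stand-in for ~50 product images per class: feature vectors
    scattered around a class-specific pattern."""
    return center + noise * rng.standard_normal((n, dim))

def residual(x, dictionary):
    """How well the class dictionary reconstructs x. A least-squares
    residual is used here as a simple proxy for a sparse code."""
    coef, *_ = np.linalg.lstsq(dictionary.T, x, rcond=None)
    return np.linalg.norm(x - dictionary.T @ coef)

dim = 16
centers = {"apple": rng.standard_normal(dim),
           "banana": rng.standard_normal(dim)}
# "Training" is just storing a small per-class dictionary of samples --
# cheap enough to run on the embedded system itself, which is the point.
dicts = {k: make_class_samples(c)[:8] for k, c in centers.items()}

probe = centers["apple"] + 0.05 * rng.standard_normal(dim)
pred = min(dicts, key=lambda k: residual(probe, dicts[k]))
print(pred)
```

Because "training" amounts to collecting a handful of samples per product, adding a new item to the range is a data-capture task at the checkout rather than a server-farm retraining job.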
This training could be done directly at the checkout, enabling even retailers with just one checkout terminal to use the system efficiently. Updating the many checkout terminals of large retailers would then simply be a question of smart cloud connectivity.

Partners are key

OEMs using such application-ready solution platforms benefit from significantly reduced development effort, since many functionalities have already been tested and the interoperability of the individual components has been validated. If required, congatec also offers these individual, custom components as a fully developed, series-ready solution platform, including all certifications required for series delivery to the end customer – whether AMD, Intel or NXP based. Customers thus benefit from simplified handling and accelerated design-in of the embedded vision computing component, as well as optimized service and support conditions. At congatec, such projects are often based on Computer-on-Modules, because they make it particularly easy to scale performance in line with requirements and to implement closed-loop engineering strategies. However, it is always possible to fuse module and carrier board into a customized OEM solution, including the development of a customer-specific solution platform with housing and IoT connectivity. In short: a true solution platform portfolio for OEMs.

Author: Zeljko Loncaric is Marketing Engineer at congatec
November 12 2019