5 Examples of AI/ML Innovations That Bring Tangible Results
Machine Learning and Artificial Intelligence are becoming fundamental in many industries, such as digital marketing, quality control, general task automation and business intelligence. Many enterprises are already leveraging these technologies to free up their employees’ time for higher-value tasks. In this article, we will show you five concrete examples of where AI and ML already generate tangible results today.
1. Shelf Inspector – the solution to ensure on-shelf availability
One of the biggest pains in the FMCG world is on-shelf-availability tracking. Taking product photos, counting products, and/or browsing through thousands of pictures every day is extremely labour-intensive and prone to human error. These pains led to the development of the Shelf Inspector application. Using computer vision and machine learning, the application analyses clients’ pictures and detects “objects of interest” with 97% accuracy.
By comparing detections against planograms, we provide out-of-stock alerts and display timelines of visits and trends over time. These features are all presented in customizable Power BI dashboards. In addition, we analyse competing goods and provide metrics such as shelf share. Finally, we also provide price tag analysis, with alerts for incorrect prices, detected promotions, and comparisons with competing products. Additional client-specific metrics can be added as well, thanks to the modularity of the solution.
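To give a simplified picture of the kind of post-detection analytics involved, here is a minimal Python sketch that computes shelf share and raises an out-of-stock alert by comparing hypothetical detector output against a planogram – the product labels, data format, and planogram are illustrative assumptions, not the actual Shelf Inspector internals.

```python
from collections import Counter

# Hypothetical detector output: one label per detected facing on the shelf photo.
detections = ["our_cola", "our_cola", "rival_cola", "our_water", "rival_cola"]

# Hypothetical planogram: products that should be present and their expected facings.
planogram = {"our_cola": 3, "our_water": 2, "our_juice": 1}

counts = Counter(detections)

# Shelf share: fraction of detected facings that belong to our brand.
our_facings = sum(n for label, n in counts.items() if label.startswith("our_"))
shelf_share = our_facings / max(len(detections), 1)
print(f"Shelf share: {shelf_share:.0%}")

# Out-of-stock alert: planogram items with no detected facings at all.
for product, expected in planogram.items():
    if counts.get(product, 0) == 0:
        print(f"ALERT: {product} expected ({expected} facings) but not detected")
```

In the real solution, the detections come from the computer vision models and the results feed the Power BI dashboards mentioned above.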
As picture quality is key to good detection, we have also developed an accompanying mobile app for photo collection, based on feedback from the field merchandisers who are our long-term partners. This gives us control over the quality of incoming photos and therefore lets us deliver the best results.
Shelf Inspector provides a complete service for monitoring how your products are displayed by retail partners – from photo collection through product detection to meaningful KPI insights – all in one package.
2. jobno.one – match the best CV with every job description
Today’s recruitment process is quite time-consuming. Each CV must be opened, read, and checked for a match against currently open positions. There is also a heavy reliance on human memory to keep track of both available candidates and job openings, especially as new candidates or openings are added. Furthermore, candidates may not share their CVs even if they are interested in your company, because they do not see any relevant positions today. Many organisations are therefore losing talented candidates and often have out-of-date CV databases. Finally, recruiters may hold subconscious biases that are reflected in their decision making.
Together with mBlue, we have created jobno.one, a tool that speeds up the processing of received CVs. It ranks them autonomously, filters the relevant candidates, and fills the correct fields straight into an Applicant Tracking System (ATS) or HRIS without any human involvement. We use parsing, comprehensive matching, and multidimensional filtering powered by Machine Learning and Artificial Intelligence to extract keywords from both CVs and job descriptions and find the best match. This allows organisations to save an enormous amount of time during the process, while proactively recommending only relevant vacancies to job seekers – either immediately upon sharing a CV or retrospectively when a new offer becomes available.
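As a rough illustration of the matching idea (not the production jobno.one model), the sketch below ranks hypothetical CV snippets against a job description using TF-IDF keyword vectors and cosine similarity from scikit-learn; the texts and the choice of method are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical job description and CV snippets.
job_description = "Senior Python developer with experience in machine learning and Azure"
cvs = {
    "candidate_a": "Python developer, 5 years of machine learning on Azure and Databricks",
    "candidate_b": "Java backend engineer focused on microservices and Kubernetes",
}

# Turn the job description and CVs into TF-IDF keyword vectors.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description, *cvs.values()])

# Cosine similarity between the job description (row 0) and each CV.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Rank candidates by how well their CV matches the open position.
for name, score in sorted(zip(cvs, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

The same ranking can be run the other way round – scoring all open positions against a newly received CV – which is how relevant vacancies can be recommended proactively.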
3. Invoice Reader – text extraction and classification in one tool
The market leaders in current invoice-reading technology offer only moderate accuracy and require high-quality input data (usually electronically generated invoices or near-perfect scans).
We decided to challenge this niche and built our own solution – the ‘Invoice Reader’. It is an end-to-end, semi-supervised invoice-reading solution that utilizes text detection, OCR, and NLP technologies to extract and classify information from invoices.
We recognize that incoming invoices come in all shapes and forms. We tackled this challenge by building a robust solution that can process photos of printed documents, skewed photos, and even handwritten text, in addition to scanned PDFs.
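For a flavour of the extract-and-classify flow, here is a deliberately simplified sketch with placeholder stages: the text-detection and OCR step is stubbed out, and a rule-based classifier assigns snippets to invoice fields. The field names and rules are illustrative assumptions rather than our actual models.

```python
import re

# Illustrative rules mapping regex patterns to invoice fields (assumptions, not our real models).
FIELD_RULES = {
    "invoice_number": r"Invoice No\.\s*(\S+)",
    "total_amount": r"Total amount:\s*(.+)",
    "due_date": r"Due date:\s*(\d{4}-\d{2}-\d{2})",
}

def detect_and_ocr(image_path):
    # Placeholder for the text-detection + OCR stage; a real pipeline would run
    # detection and recognition models on the image instead of returning fixed snippets.
    return ["Invoice No. 2023-0042", "Total amount: 1 250,00 CZK", "Due date: 2023-05-31"]

def classify_snippets(snippets):
    # Assign each OCR snippet to an invoice field using the simple rules above.
    extracted = {}
    for snippet in snippets:
        for field, pattern in FIELD_RULES.items():
            match = re.search(pattern, snippet)
            if match:
                extracted[field] = match.group(1)
    return extracted

print(classify_snippets(detect_and_ocr("example_invoice.jpg")))
```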
As text extraction, recognition, and classification technologies are constantly improving, we use a highly modular architecture that lets us quickly upgrade components or add new ones. This way, we keep up with state-of-the-art models and keep our customers ahead of the curve.
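One simple way to picture that modularity (our real architecture differs in detail) is an interface behind which OCR engines can be swapped without touching the rest of the pipeline, as in this hypothetical sketch:

```python
from typing import Protocol

class OcrEngine(Protocol):
    def extract_text(self, image_path: str) -> str: ...

class LegacyOcr:
    def extract_text(self, image_path: str) -> str:
        return "text from the older engine"  # placeholder output

class StateOfTheArtOcr:
    def extract_text(self, image_path: str) -> str:
        return "text from the newly released model"  # placeholder output

def process_invoice(image_path: str, ocr: OcrEngine) -> str:
    # The rest of the pipeline depends only on the OcrEngine interface,
    # so upgrading to a better model is a one-line change at the call site.
    return ocr.extract_text(image_path)

print(process_invoice("example_invoice.jpg", StateOfTheArtOcr()))
```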
4. “Smart Data Model” – getting your data ready for ML use cases
As organisations mature and accumulate enough data to build machine learning solutions, we see one recurring problem: when it comes to driving ML solutions, there is a difference between having the right data and being able to use it. This gap arises because businesses store their data in a way dictated by the technology they use and by how the business consumes that data day to day.
However, a model’s data needs are usually different, so a layer is required between the data the business has and the data the model consumes. The business’s data primarily serves its day-to-day insights and contains everything the business needs. To train models, however, only a fraction of this data is useful, and it has to be transformed to the right granularity and state. We call this inter-layer a Smart Data Model (SDM). It contains data that has already been cleaned, parsed, and transformed, ready for consumption by data analysts.
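As a minimal illustration of such an inter-layer step, assuming a hypothetical raw transactions table, the sketch below cleans, types, and aggregates operational records to the granularity an ML use case actually needs:

```python
import pandas as pd

# Hypothetical raw operational data, stored the way the source system happens to emit it.
raw = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C2"],
    "purchase_ts": ["2023-04-01 10:15", "2023-04-03 09:00", "2023-04-02 14:30", None],
    "amount_czk": ["199.90", "49.00", "899.00", "120.00"],
})

# SDM step: drop broken records, enforce types, and aggregate to one row per
# customer per day, i.e. the granularity our ML use cases consume.
sdm = (
    raw.dropna(subset=["purchase_ts"])
       .assign(
           purchase_date=lambda df: pd.to_datetime(df["purchase_ts"]).dt.date,
           amount_czk=lambda df: df["amount_czk"].astype(float),
       )
       .groupby(["customer_id", "purchase_date"], as_index=False)["amount_czk"].sum()
)

print(sdm)
```

In practice such transformations typically run on cloud data platforms, but the principle of a cleaned, analyst-ready layer is the same.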
It is one source of truth for data-driven solutions: when models are reviewed, everybody is looking at the same incoming data. Creating this “one source of truth” helps to solve a situation we have seen all too often – multiple departments maintaining their own data in different data sources. By creating an SDM, the fruits of the work of these non-cooperating departments are offered to data analysts in one place, thus democratizing the data produced by the company.
The SDM also saves work – a company usually has many use cases to be solved with ML, many of which go through very similar data preparation steps. Due to the complexity of these tasks, development often iterates, for example going back to feature selection and choosing different features based on a model prototype review with the customer. The inter-layer must therefore be a flexible data format, such as the SDM. In our experience, the SDM helps our customers accelerate their development, shortens the time it takes to go back and make changes, and makes the review process much easier, with everybody having one source of data to fall back to.
5. Feature Store – move fast when developing new models
Once the SDM has been implemented and data analysts have a one-stop shop for all the data needed to build ML solutions, the next step is a Feature store. At its core, the Feature store is a table where only features are stored. While the SDM still has to keep all the key columns of all the tables and a lot of additional data, the Feature store holds only the features useful for ML models.
So instead of “Customer”, “Purchases”, and “DateOfPurchase” columns, the Feature store might hold columns such as “How many purchases has the customer made in the last 3 days?”, “In which month of the year is the customer most active with our products?”, or “Has the customer bought any other related product?”. Having this data ready makes it possible to develop prototype models quickly – just take all the features and feed them into the model. Consequently, this makes it easy to try out solutions quickly, especially when there are many models you want to build and their performance is not yet critical. Any time you create a new feature, you add it to the Feature store.
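For illustration, here is a minimal pandas sketch with made-up data that turns raw purchase records into the kind of per-customer feature columns described above, ready to be written to a Feature store table:

```python
import pandas as pd

# Hypothetical raw purchases, as they might come from the Smart Data Model.
purchases = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C1"],
    "purchase_date": pd.to_datetime(["2023-04-28", "2023-04-29", "2023-03-15", "2023-02-01"]),
})

snapshot = pd.Timestamp("2023-04-30")
recent = purchases[purchases["purchase_date"] >= snapshot - pd.Timedelta(days=3)]

# Feature columns instead of raw columns: one row per customer,
# each column named after the question it answers.
features = pd.DataFrame({
    "purchases_last_3_days": recent.groupby("customer_id").size(),
    "most_active_month": purchases.groupby("customer_id")["purchase_date"]
                                  .agg(lambda d: d.dt.month.mode()[0]),
}).fillna(0).reset_index()

features["snapshot_date"] = snapshot  # when these feature values were computed
print(features)
```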
For the Feature store to be useful, it is also important to keep the history of feature values. When you create new models and want to compare them to previous iterations, you need the historical data on hand. With a Feature store implemented, the ability to reuse already prepared features makes it much easier to bring new projects to a successful finish.
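Continuing the hypothetical example above, one simple way to keep that history is to store each feature vector together with its snapshot date and retrieve values as of a chosen date when training or comparing models:

```python
import pandas as pd

# Feature store with history: each row is a feature vector valid as of snapshot_date.
feature_store = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C2"],
    "snapshot_date": pd.to_datetime(["2023-03-31", "2023-04-30", "2023-03-31", "2023-04-30"]),
    "purchases_last_3_days": [0, 2, 1, 0],
})

def features_as_of(store, when):
    # Latest snapshot at or before `when`, per customer, so older model runs can be reproduced.
    eligible = store[store["snapshot_date"] <= pd.Timestamp(when)]
    return eligible.sort_values("snapshot_date").groupby("customer_id").tail(1)

print(features_as_of(feature_store, "2023-04-15"))  # returns the 2023-03-31 values
```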
DataSentics is an AI Product Studio focused on having a real impact on organisations through data science and AI. We aim to demystify the hype and black magic surrounding these technologies and provide transparent production-level solutions and products. Backed by more than 100 experienced scientists and engineers, we support clients around the world. We build our products and solutions on cloud-based technologies, such as Microsoft Azure, Amazon Web Services and Databricks.