The omni:us AI platform uses deep learning (NLP and computer vision) to train AI models that support the document digitization process. Document understanding is based on trained AI-Tasks, which include page classification and information extraction. AI-Task training is an independent process that does not require the involvement of data scientists. Trained AI-Tasks are applied to automatically understand newly uploaded documents and return structured data via the omni:us API. Clients can teach new documents to the AI platform, enabling it to process incoming document submissions automatically and extending its understanding to new document types.
This release provides the following new features in omni:us AI Platform v1.2.
Form Enablement – Understanding Forms
The omni:us AI-Platform Console can now process documents with a static layout. Static-layout documents contain fixed regions for handwritten data extraction. With this major enablement, the omni:us AI-Platform Console can seamlessly handle the annotation, training, and extraction of Forms containing user-filled handwritten data, as well as different Form versions. Examples of static-layout documents are European Accident Statement (EAS) Forms, Registration Forms, and the like.
Handwritten Text Recognition (HTR)
Introduced Handwritten Text Recognition as a trainable AI-Task in the omni:us Console to enable the extraction of handwritten information from Forms. The HTR AI-Task is trained with annotated documents in the omni:us Trainer; the trained AI-Task can then be exported to the omni:us Engine to extract handwritten information from incoming Forms.
- You can select the Handwritten Text Recognition Training AI-Task, select the collections containing annotated documents which you want to use for AI training, select the Data Model and start a new training job.
- Once the HTR AI-Task is trained, you can evaluate the performance of the training runs and export the best HTR AI-Task to the omni:us Engine for document processing and handwritten information extraction.
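The training workflow above can be sketched programmatically. Everything below (the helper name, the payload field names, and the task identifiers) is an illustrative assumption, not the documented omni:us API schema; consult the API reference for the actual request format.

```python
import json

def build_training_job(task_type, collection_ids, data_model_id, duration_hours=None):
    """Assemble an illustrative payload for a new training job.

    All field names here are assumptions for the sketch, not the
    real omni:us API schema.
    """
    payload = {
        "aiTask": task_type,
        "collections": list(collection_ids),
        "dataModel": data_model_id,
    }
    if duration_hours is not None:
        payload["maxDurationHours"] = duration_hours
    return payload

# Example: an HTR training job over two annotated collections
# (collection and Data Model names are made up for illustration).
job = build_training_job(
    task_type="handwritten-text-recognition",
    collection_ids=["eas-forms-2019", "registration-forms"],
    data_model_id="accident-statement-v2",
)
print(json.dumps(job, indent=2))
```

A Visual Page Classification job would take the same shape, with the optional training-duration field set in addition.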
Visual Page Classification
Introduced Visual Page Classification as a trainable AI-Task in the omni:us Console to enable classification of pages based on their visual layout. The AI-Tasks can be trained with annotated documents in the omni:us Trainer and later exported to the Engine for document processing.
- You can select the Visual Page Classification Training AI-Task, select the collections that you want to use for training, select the Data Model, select the duration of training execution, and start a new training job.
- Once the relevant AI-Task is trained, you can evaluate the performance of the training runs and export the best AI-Task to the omni:us Engine for document processing and classification.
Introduced the Document Reprocessing feature in the omni:us Trainer – Annotate module and the omni:us Engine – Case Reviewer module to enable the user to reprocess a document whose page classification or page orientation is incorrect.
- On the Reprocess document page, you can rotate the page to your desired orientation, select the correct document class and send the document for reprocessing.
- After reprocessing, the document is added to the processing queue, its status changes to “Processing”, and you can view it on the Document Explorer page. You can annotate or review the reprocessed document once the status changes to “Processed”.
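The “Processing” to “Processed” transition described above can be handled with a simple polling loop. The client class below is a stand-in stub with hypothetical names, not the real omni:us API client:

```python
class FakeDocumentClient:
    """Stand-in for an API client; simulates the status transition a
    reprocessed document goes through (illustrative, not the real API)."""
    def __init__(self):
        self._statuses = iter(["Processing", "Processing", "Processed"])

    def get_status(self, doc_id):
        # Return the next simulated status; settle on "Processed".
        return next(self._statuses, "Processed")

def wait_until_processed(client, doc_id, max_polls=10):
    """Poll until the document leaves the 'Processing' state."""
    for _ in range(max_polls):
        status = client.get_status(doc_id)
        if status != "Processing":
            return status
    return "Processing"  # still queued after max_polls attempts

print(wait_until_processed(FakeDocumentClient(), "doc-42"))  # → Processed
```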
Introduced the Mapping Dictionary module on the omni:us Trainer Console page to enable a Document Annotator to set the data type of specific fields to “Mapping Dictionary”. Before creating a Data Model, you can create Mapping Dictionaries, each of which is a list of allowed values for a field. Raw field predictions may contain typos, and a Mapping Dictionary enables quick auto-correction to the expected text. Typical applications are lists of manufacturer or product names.
- A Mapping Dictionary is created by a Domain Expert before configuring a Data Model. While creating a specific field in the Data Model and indicating its location for data extraction, you can select the field type “Mapping Dictionary” for that field.
- During annotation and after document processing, the field results are restricted to the list of entries contained in the selected Mapping Dictionary.
- In the Case Reviewer, fields that have dictionaries are converted into dropdown fields where the user can quickly select the relevant value from an auto-suggested Mapping Dictionary entry. Consequently, the accuracy of the post-processing results can be significantly improved.
- Mapping Dictionaries allow predicted values to be mapped to an alternative response that can be retrieved via the API. For example, the detected car manufacturer name “BMW” can be mapped to “id 100”, an identifier your downstream systems may require to consume the predicted information.
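As a minimal sketch of how dictionary-based auto-correction and value mapping can work, the following uses Python's standard-library fuzzy matching. The dictionary contents, function name, and similarity cutoff are illustrative assumptions, not the platform's actual implementation.

```python
from difflib import get_close_matches

# Illustrative Mapping Dictionary: allowed values plus the alternative
# response a downstream system consumes (example data only).
MANUFACTURERS = {"BMW": "id 100", "Audi": "id 101", "Volkswagen": "id 102"}

def correct_prediction(raw_value, dictionary, cutoff=0.6):
    """Snap a raw (possibly misspelled) prediction to the closest
    allowed entry and return (corrected_value, mapped_response)."""
    match = get_close_matches(raw_value, dictionary, n=1, cutoff=cutoff)
    if not match:
        return None, None  # no entry close enough; leave for manual review
    return match[0], dictionary[match[0]]

# A typo in the raw OCR output snaps to the allowed entry and its mapping.
print(correct_prediction("Volkswagn", MANUFACTURERS))  # → ('Volkswagen', 'id 102')
```

In the Case Reviewer, the same restriction appears as a dropdown over the allowed entries rather than free-text input.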
Introduced the Template Modeler feature in the omni:us Trainer – Data Modeler module to enable the user to create multiple templates. Although Forms have a static layout, multiple versions of a form are typically in use. The relevant data across these editions is identical, but the visual layout is specific to each version. To accommodate these differences, you can create multiple form templates for the same Data Model using the Template Modeler, one template per form version. After you create the first template, you can create a second template and highlight the relevant information of the Data Model in the newly added template. If different versions of the form contain slightly different fields, we recommend including all relevant fields in the same Data Model.
- You can upload a prototype of the document for which a template will be created and then capture the grouping of fields within the document along with the demarcated bounding boxes so that the AI can learn the locations for handwritten data extraction during AI training.
- After you create the first template, you can create a second template by uploading a prototype of the Forms document with a different layout. You can re-use the design of the first template to create the second template by adding modifications on top of it.
Introduced the Template Alignment capability as a pre-configuration of the omni:us AI Platform. It automatically realigns poorly aligned scanned or photographed documents, reducing extraction errors caused by a mismatch between the AI model's template and the real-world document during data mapping and extraction.
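Conceptually, template alignment estimates a geometric transform between a scanned page and its canonical template before field regions are mapped. A minimal sketch under strong simplifying assumptions: a pure-translation model on tiny binary page masks, scored by brute force. Real alignment typically also corrects rotation, scale, and perspective.

```python
def best_shift(template, scan, max_shift=2):
    """Brute-force the (dy, dx) translation that best overlays `scan`
    onto `template`, scoring by the number of coinciding ink cells."""
    h, w = len(template), len(template[0])
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w and template[y][x] == scan[sy][sx] == 1:
                        score += 1
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# A "scan" whose content sits one cell down and one cell right of the template.
template = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
scan = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(best_shift(template, scan))  # → (1, 1)
```

Once the offset is known, field bounding boxes defined on the template can be shifted by the same amount before extraction.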
Document Viewer within Data Modeler
Introduced the Document Viewer pane within the Data Modeler to simplify Data Model creation by letting users view the Data Model and the uploaded prototype document side by side. The user can create the groups and fields of the Data Model and simultaneously draw bounding boxes around the corresponding regions of interest in the Document Viewer pane.
Browser compatibility support for Microsoft Edge
The omni:us AI Platform is now compatible with Microsoft Edge Version 44.18362.449.0.
Written by Piya Das, omni:us technical writer.