Technology

Putting together a functional neural network with state-of-the-art features at scale is hard. That's why we've done it for you.

Simplicity on top of complexity

One thing we've always noticed while looking across the AI space is that everything is built to be consumed by software developers or experts in the language technology field. By packaging up the latest developments in a processing pipeline and exposing it through a drag-and-drop graphical user interface, we make it possible for anybody to create AI virtual agents with no knowledge of the underlying technology.

A side benefit of this is that we're able to swap out the underlying technology for newer developments and implement the latest research papers without having to worry about breaking API interfaces or existing systems.

As new technologies emerge or improve, they become available immediately and existing systems just get better.



Natural Language Processing

At the foundation of our work is the NLP layer. This layer allows a computer to understand the components of language. We currently rely primarily on spaCy, Spark, NLP.js, and in-house code for this.
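As a minimal sketch of the kind of analysis this layer performs, here is spaCy's public API run over a single utterance. The model name is illustrative only, not a description of what we run in production.

```python
# Minimal sketch of the linguistic analysis performed by the NLP layer.
# Uses spaCy's public API; the model name is an example, not our production setup.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline: tagger, parser, NER
doc = nlp("I want to transfer $100 to my savings account.")

for token in doc:
    # surface form, part of speech, syntactic role, and syntactic head
    print(token.text, token.pos_, token.dep_, token.head.text)
```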

Transformer Pipelines

We use transformer pipelines to extract information from text, such as named entities, features, and sentiment, and to handle tasks like question answering. These pipelines are built on models such as BERT and Wav2Vec2.
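The sketch below shows the same kinds of tasks using off-the-shelf Hugging Face transformers pipelines; the default models and example text are purely illustrative and not our production configuration.

```python
# Illustrative use of off-the-shelf transformer pipelines (Hugging Face `transformers`).
# The default models and example text are illustrative only.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")   # named entity recognition
sentiment = pipeline("sentiment-analysis")             # sentiment classification
qa = pipeline("question-answering")                    # extractive question answering

text = "Maria wants to move $200 from her ACME Bank checking account to savings."

print(ner(text))
print(sentiment(text))
print(qa(question="How much does Maria want to move?", context=text))
```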

Hundreds of Servers

We have a large system built on HashiCorp's technology that provides us with instant scaling, regional stability, multi-datacenter failover, and support for various levels of hosted microservices (again providing the ability to swap out components at will).

Custom Training Pipeline

We have developed in-house training software to help teach the system what is correct and incorrect and to continuously improve the quality of results through systems such as GANs (Generative Adversarial Networks).
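To make the GAN idea concrete, here is a toy adversarial training loop in PyTorch that fits a generator to a simple 1-D distribution. It illustrates the generator/discriminator interplay only; it is not our in-house training software.

```python
# Toy sketch of adversarial training (GAN) on 1-D data, in PyTorch.
# Illustrates the generator/discriminator loop only; not our production pipeline.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator learns to separate real from generated samples.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(fake.mean().item(), fake.std().item())  # should drift toward 3.0 and 0.5
```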

Multiple Language Support

The system has been designed from the ground up for multiple language support. When you design a virtual agent, you simply tick a checkbox for each language you'd like to support, and the definition for each language sits in the same place. There is also automatic transcript translation and TTS (Text to Speech) for testing multiple languages even if you don't speak them.
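A hypothetical sketch of what "the definitions sit in the same place" can look like is shown below; the class and field names are made up for illustration and do not reflect our actual definition format.

```python
# Hypothetical sketch: one agent step holding its per-language definitions side by side.
# Class and field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    name: str
    enabled_languages: set = field(default_factory=set)   # the language "checkboxes"
    prompts: dict = field(default_factory=dict)           # language code -> prompt text

step = AgentStep(
    name="ask_amount",
    enabled_languages={"en", "es"},
    prompts={
        "en": "How much would you like to transfer?",
        "es": "¿Cuánto le gustaría transferir?",
    },
)

def prompt_for(step: AgentStep, lang: str, fallback: str = "en") -> str:
    # Fall back to the default language if a translation is missing or disabled.
    if lang in step.enabled_languages and lang in step.prompts:
        return step.prompts[lang]
    return step.prompts[fallback]

print(prompt_for(step, "es"))
```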

Conversational Healing

While we're not a fan of the term, we do support it. Traditionally in the conversational AI space, this means that something said later in a conversation can change the meaning of what was said earlier. For example, "I want to transfer $100 to my savings account. Actually, on second thoughts, let's make that $200." We believe that if a normal human would understand a sentence then an AI should too. This shouldn't be a "feature".
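A toy illustration of that behaviour: a later utterance simply revises the slot that was filled earlier. The parsing here is faked with a regular expression; in the real system that work is handled by the NLP and transformer layers described above.

```python
# Toy illustration of "healing": a later utterance revises an earlier slot value.
# The regex stands in for real language understanding.
import re

slots = {}

def handle_utterance(text: str) -> None:
    match = re.search(r"\$(\d+)", text)
    if match:
        # Whether it's the first mention or a correction, the latest value wins.
        slots["amount"] = int(match.group(1))

handle_utterance("I want to transfer $100 to my savings account.")
handle_utterance("Actually, on second thoughts, let's make that $200.")
print(slots)  # {'amount': 200}
```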

Named Entity Recognition

Our system is able to detect and tag named entities such as companies, people, etc. and understand their meaning and significance.
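As a small example of what detection and tagging looks like, here is spaCy's entity recognizer over a sample sentence; again, the model name and text are illustrative only.

```python
# Minimal named entity example using spaCy's public API; model name is illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Sarah from ACME Bank called about a transfer to London on Friday.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Sarah PERSON, ACME Bank ORG, London GPE, Friday DATE
```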

Sentiment Analysis

Sentiment analysis lets the system understand how a customer is feeling about their interaction with our AI agents and, where needed, branch to different paths based on this. For example, you may try to calm somebody down for a bit before continuing if they're getting upset.
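Here is a hedged sketch of that branching using an off-the-shelf sentiment pipeline; the threshold, labels, and "calming path" are illustrative, not our production flow.

```python
# Sketch of branching on sentiment with an off-the-shelf transformer pipeline.
# Threshold, labels, and branch names are illustrative only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

def next_step(utterance: str) -> str:
    result = sentiment(utterance)[0]          # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "calming_path"                 # slow down and acknowledge frustration
    return "continue_flow"

print(next_step("This is the third time I've had to explain this!"))
```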

Pluggable Architecture

Because of the microservice design of the system, it is easy for us to plug in new pieces and findings from new research papers without negatively affecting the rest of the system. HashiCorp's Nomad allows us to use multiple rollout strategies so we can expose new features to small subsets of customers at a time.
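The sketch below only mirrors the idea of a canary-style rollout at the application layer, routing a small, stable subset of customers to a new component; in practice Nomad handles this at the deployment level, and the percentage and function names here are made up.

```python
# Toy illustration of canary-style rollout: a small, consistent subset of customers
# sees the new component. Percentage and names are illustrative only.
import hashlib

CANARY_PERCENT = 5  # expose the new component to roughly 5% of customers

def use_new_version(customer_id: str) -> bool:
    # Hash the customer id so the same customer always lands in the same bucket.
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return bucket < CANARY_PERCENT

print(use_new_version("customer-1234"))
```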

API Shim Layer

Interacting with legacy systems can be hard and messy. An AI agent shouldn't have to deal with that. We offload this complexity to a shim layer so that our AI agents always interact with a simple REST API, while in the background the shim layer deals with SOAP, XML, REST, CSV files, FTP, carrier pigeons, and the like.
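As a hypothetical sketch of the shim idea, the endpoint below exposes clean JSON to the agent while talking XML to a legacy backend behind the scenes; the URLs, fields, and framework choice are made up for illustration.

```python
# Hypothetical shim-layer sketch: the agent calls a simple REST/JSON endpoint,
# while the shim talks to the legacy backend (XML over HTTP in this toy example).
# Endpoint names, URLs, and fields are made up.
from flask import Flask, jsonify
import requests
import xml.etree.ElementTree as ET

app = Flask(__name__)

@app.route("/accounts/<account_id>/balance")
def balance(account_id: str):
    # Talk to the messy legacy system behind the scenes...
    legacy = requests.get(f"https://legacy.example.internal/acct?id={account_id}")
    root = ET.fromstring(legacy.text)   # e.g. <account><balance>123.45</balance></account>
    # ...and hand the agent a clean JSON answer.
    return jsonify({"account_id": account_id, "balance": float(root.findtext("balance"))})

if __name__ == "__main__":
    app.run()
```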