Introduction

This tutorial shows the developer how to extend the core Vantiq natural language processing capabilities.
Natural language processing enables customers to interact with Vantiq applications using language with which they are familiar, rather than having to learn the language of the application.
The Vantiq system provides a set of capabilities for interacting with the Vantiq system itself, and is extensible so that application developers can add language appropriate to their application and/or user base.

In this tutorial, we will learn how to add natural language processing to a Vantiq application, and how to extend the base system to include language specific to the application.
Our approach in this tutorial is to move forward by incrementally building a working system.
We will first add the Vantiq-supplied natural language base,
then add the extensions specific to this application.

Throughout this tutorial, we demonstrate the working system by interacting with Vantiq using Slack.
Any client that can interoperate with a Vantiq chatbot source is fine;
here, we have chosen to use Slack.

The application in this tutorial is a simple hospital application.
This application will allow hospital staff to query the Vantiq system for patients based on condition and/or location.
This tutorial imagines a hospital application where patients are admitted and their various test results cataloged.
The natural language system is added to allow queries based on the patients’ conditions:

  • hypertension
  • tachycardia (high heart rate)
  • diabetes mellitus

and their current location (room) in the hospital:

  • emergency room
  • admissions
  • intensive care unit
  • room number

Thus, rather than using query-like language to find this information (e.g. list patients where glucose > 100),
the application user will be able to use a form more natural to medical people (e.g. who’s in ER with diabetes).

All lessons assume the developer has a working knowledge of the Vantiq IDE. It is recommended that a new developer complete the lessons in the Introductory Tutorial before starting the lessons in this tutorial.
In addition, please see the Natural Language Guide for more information about the Vantiq natural language processing system.

Overview


The overall structure of the Vantiq natural language system is depicted in the following diagram.

Vantiq Architecture Diagram

Architecturally, the Conversational Language Understanding (CLU) application is responsible for interpreting user utterances, analyzing them, and returning the appropriate intent and associated entities.
The Vantiq application is responsible for understanding these intents and entities, and executing them to provide the user with the appropriate response.

In this tutorial, we will make use of the chatbot, a compatible chat client, and the CLU application.
We will perform the following actions.

  • Create a CLU Application
  • Create a Vantiq Procedure to execute intents
  • Create a Vantiq Rule to receive utterances, orchestrate their interpretation, and execute the intents
  • Extend our CLU application with some hospital-specific utterances
  • Extend our Vantiq application to process these utterances

We will start with the CLU application.

Set Up the Conversational Language Understanding Application

In the subsequent sections, we will walk through the steps to create and modify our CLU application.
Please see the Conversational Language Understanding documentation for a more detailed explanation and walkthrough.

Create A Cognitive Services Account

CLU applications run in Microsoft’s Azure cloud, and are part of a wider offering known as Cognitive Services.
Information about setting up accounts and creating a service can be found at the Conversational Language Understanding home.
Before proceeding, we will need to have an account set up for our CLU application.

Create the account and Azure Language resource now.

Import the Vantiq Natural Language Subset

Once the account is set up, we will be able to log in and find the list of projects page.

My Apps Before Import

(In this case, this is a new account.
If we have already imported or created other applications,
some applications may be listed here.)

The next step is to import the Vantiq natural language subset (for details, see the Natural Language Guide).
The Vantiq natural language subset contains various intents and entities that allow for simple queries against the Vantiq system.
These include things like

  • Generic Queries
    • list projects
    • list people whose name is fred (where “people” is a type defined in Vantiq)
    • describe people
  • Vantiq Specific language
    • show active collaborations since the eighth of September
  • “Smalltalk”
    • howdy
    • Thanks

A zip file containing the Vantiq NLS can be found here.
Please download and unzip this file, keeping track of its location.

Once downloaded and unzipped, return to the My Apps page depicted above.
Press the Import button, which will bring up an import dialog.

Import Dialog

Press the Choose File button, and select VantiqBase_CLU.json from the files you downloaded and unzipped. Add a project name to the import dialog – here, we’ve used VantiqNatLangTutorial. (If no name is provided, it will default to VantiqBase.)
Once complete, this will place us on the Schema Definition screen of that project.

Schema Definition

Here, we see the intents and entities used by our new CLU application, VantiqNatLangTutorial. Click on Training Jobs.

Train and Deploy

Our next step is to make the core Vantiq capabilities available without alteration.
To do this, we need to train and deploy our CLU application.

So far, we have imported the natural language subset that can understand some commands about the Vantiq system.
However, to make this available, we must train the application and publish it.

Training Your CLU Application

Training is a step where the definition of the natural language subset is processed by the CLU system,
and the resulting training data made ready to publish.
The natural language subset contains a set of entities, intents, and utterances. (These are outlined in the CLU documentation and in the Vantiq Natural Language Guide.)
The training process maps the utterances to intents and entities so that similar utterances presented by CLU application users can be transformed into the appropriate intents and entities.

To train the CLU application, we press the Start a training job button from our project’s Training Jobs page. Here, we’ll need to define the model name to be trained. You will use this name later to invoke your CLU application. We have used VantiqTutorialModel as our model name.

Training Setup

Start the job, and we see the list of recent training jobs. Since this is our first, there is only one such job.

Training Progress

When this is done, we will see informational message(s) near the top of the browser window that provide our status.

Deploying our CLU Application

To provide access, we will deploy the application using the trained model we have just created. Press the Deploying a model item, then the Add deployment button. This will present us with the following choices.

Deploying the model

Here, we selected a name for our deployment, VantiqTutorialDeployment (this, too, will be used to access our application), and the model we’ve just trained. Now press the Deploy button.

Once complete, we will see our list of deployments.

Deployment List

Testing the CLU Application

To test our CLU application, we select the Testing deployments item. From that, we select our deployment and enter some text.

Application Testing

We do this by typing various phrases that our application should understand into the testing area (seen in the previous image with the phrase Type a test utterance & press Enter).
Once we provide an utterance, we will see the result parsed below.
We verify that it is correct (that the correct intent was selected, that the correct entities were identified, etc.).

In our example, we entered list collaborations younger than 3 days, and the results were displayed. Specifically, we have an intent of system.showActive and entities of system.typeName, system.comparator_gt, and datetimeV2. The text is also shown with the entities identified.
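As a rough sketch (field names are illustrative; see the CLU documentation for the authoritative response schema), the parsed result for this utterance resembles:

{
  "result": {
    "prediction": {
      "topIntent": "system.showActive",
      "entities": [
        { "category": "system.typeName", "text": "collaborations" },
        { "category": "system.comparator_gt", "text": "younger than" },
        { "category": "datetimeV2", "text": "3 days" }
      ]
    }
  }
}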

You are encouraged to try other supported (and unsupported) utterances to see how the data is returned.

(Note: Once you begin developing your extensions, you will spend a good deal of time on this page. This is where you “debug” your CLU application.
If and when you get incorrect results, you will refine your language by adding utterances, and return to this page.
We will see more of this in a subsequent section.)

Note that it is necessary to deploy before testing. As a developer, take care not to overwrite a production CLU application until any changes needed are ready.

Using the Deployed Application

To make use of the application we’ve deployed, we will need some facts from our deployment. Specifically, we’ll need to know the trained model and deployment name as well as the URL.
In our case here, our trained model is VantiqTutorialModel, and our deployment is VantiqTutorialDeployment.

To get the URL, return to the Deploying a model item, select our deployment, and press the Get prediction URL button. This will present information including the Azure region information and the URL for accessing the deployment. There is also a CURL invocation (not shown here) that you can use to test from a command line/shell. Make note of the provided URL (or be prepared to come back here to fetch it when creating the CLU source).

Create a Vantiq Application

Our next step is to connect the Vantiq system to the CLU application we have just created.

Create a CLU Source

(For more detailed information about source creation, please see the Source Tutorial and/or the Remote Source Guide.)

Before creating our source, we will want to create a Vantiq secret to hold the access key for our Azure language resource. To find that key, open the project settings and press the eye button beside the string labeled Primary key. Copy the value and create a Vantiq Secret to hold that value. Here, we’ve saved that in a Vantiq Secret named AzureOCPKey.

To create a source in an existing Project for communication with our CLU application, use the Add button to select Source…, and, from there, use New Source. For this tutorial, we’ll add a REMOTE source named cluSource.

We provide the Server URI that was saved from the deployment details.

CLU on Azure uses the Ocp-Apim-Subscription-Key header to provide the subscription key, so we set that under the Headers section in the Request Defaults area. Here, we use the @secrets(AzureOCPKey) notation to allow Vantiq to substitute the value saved in the secret in the header.

CLU Source in Vantiq

Save the CLU Source. The source is now ready to use to provide access to our deployed CLU application.

Note that the Vantiq system interacts with the published CLU application using the POST method (as opposed to GET).

Create a Chatbot

To interact with our Vantiq application, we will need a chatbot defined.
The details regarding how to create a chatbot can be found in the Chatbot Guide.
If a chatbot already exists in the Vantiq namespace, we do not require another.

Once created, we will want to connect our chatbot to a channel. Information regarding how to do that is available at https://azure.microsoft.com/en-us/products/bot-services.
As noted above, examples in this document use Slack.

It is worth noting here that Vantiq’s direct interaction with a chatroom (via the Vantiq Mobile App) requires the chatbot to have a Direct Line Secret Key.
When you set up the chatbot, make sure that you add that channel to your chatbot.
Information about adding the Direct Line channel to the bot can be found in the Azure Bot Services documentation.

For the purposes of this example, the chatbot source we create in Vantiq is named theChatbot.
You are free to name yours whatever you desire, but be aware of any usages and substitute accordingly.

Create a Service to Process Utterances

At this point, we have the following parts

  • CLU application
  • Vantiq Remote Source linked to that CLU application
  • Chatbot
  • Vantiq Chatbot Source linked to our chatbot

Referring to the overall structure diagram provided above and described in the Natural Language Guide, the basic flow is as follows.

  • Natural language utterance is entered via some channel.
  • Utterance makes its way to the Vantiq system via the Chatbot
  • Vantiq Chatbot source delivers the message to the Vantiq runtime
  • Vantiq Rule (defined in the next step) receives the event
  • Vantiq Rule executes the command and responds appropriately.

In this step, we will create a simple procedure that will be used by our rule.

By convention, we are collecting the procedures used in this tutorial in a service named NatLangTutorial. You are, of course, free to use any service name you wish; be aware of the name substitution as required.

Our simple procedure is called processUtterance, and is shown below.

PROCEDURE NatLangTutorial.processUtterance(utterance String,
                    languageService String, cluModel String, cluDeployment String)

// Let's figure out if we can translate these into actions...
var response = "Default Response Value"

try {
    var interpretation = NaturalLanguageCore.interpretConversationalQuery(utterance,
                           languageService, cluModel, cluDeployment)

    log.debug("ProcessUtterance() interpretation: {}", [interpretation.stringify(true)])

    if (interpretation.errorMsg != null) {
        // Then, we had some error.  Let's just dump that as the response and move on
        log.debug("ProcessUtterance(): Found Error: {} -- {}", [interpretation.errorMsg, languageService])
        response = interpretation.errorMsg
    } else if (interpretation.response.intent.startsWith("system.")) {
        log.debug("ProcessUtterance():  Attempting interpretation of intent: {}", [interpretation.response.intent])
        var interpretedString = NaturalLanguageCore.executeSystemIntent(interpretation.response)
        response = interpretedString.response
    } else {
       response = "I don't know how to execute intent: " + interpretation.response.intent
    }
}
catch (error) {
    log.error("ProcessUtterance(): Error Encountered: " + stringify(error))
    response = error.message
}

return response

The utterance, the name of the CLU source, and the names of the model and deployment are passed to this procedure. The model and deployment name are those we created in the
training and deployment steps.
The procedure makes use of some of the procedures described in the Natural Language Guide. It first interprets the utterance via NaturalLanguageCore.interpretConversationalQuery(); if the interpretation succeeds and the intent is a system intent, it executes that intent via NaturalLanguageCore.executeSystemIntent() and returns the response.

The else clause returns a simple “I don’t know how…” message if given a non-system intent.
We will address this below.
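Before wiring this procedure to a rule, it can be sanity-checked by executing it directly. In this sketch, the source, model, and deployment names are the ones created earlier in this tutorial; substitute yours as needed.

var reply = NatLangTutorial.processUtterance("list projects", "cluSource",
                "VantiqTutorialModel", "VantiqTutorialDeployment")
log.info("processUtterance reply: {}", [reply])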

Create Rule to Process Natural Language Requests

With the procedure defined, create a simple rule to link the receipt of the utterance with that procedure, completing the flow of information from the utterance to its execution.
Once this is done, we will have a working Vantiq application that makes use of our CLU application.

To create a rule in an existing Project, use the Add button to select Rule…, and, from there, use New Rule.
Create a rule named ConverseViaRules, with the contents shown below.
Our rule makes use of a few of the natural language procedures defined in the Vantiq Resources section of the Natural Language Guide.

RULE ConverseViaRules
WHEN EVENT OCCURS ON "/sources/theChatbot" AS message where message.channelId != "directline"
// We'll let the chatroom do its own thing -- that's why we're ignoring 'directline' stuff here.

log.debug("ConverseViaRules: message in: {}", [message.stringify(true)])

// Remove formatting or encoding that may have come from the chat channel

var preppedText = NaturalLanguageUtils.prepareText(message.channelId, true, message.text)

// Now, let's go perform the work

var response = NatLangTutorial.processUtterance(preppedText, "cluSource",
            "VantiqTutorialModel", "VantiqTutorialDeployment")

// the following "prepare" is not necessary as publishReponse handles it for you
// message.text = NaturalLanguageUtils.prepareText(message.channelId, false, response)

message.text = response

log.debug("ConverseViaRules: message out: {}", [message.stringify(true)])

NaturalLanguageCore.publishResponse(message, message.text, "theChatbot")

This rule operates as follows.

  • The rule fires on messages received by theChatbot.
  • In this case, we exclude things on the directline channel as those are generally intended for chatrooms in collaborations. (See the Collaboration Guide for more information).
  • The incoming utterance is prepared for processing by prepareText(), which removes various formatting and/or encoding from the incoming utterance. This is described here.
  • These requirements are dependent upon the channel, so the channel ID, taken from the chatbot’s event, is included in the call.
  • We then process the utterance. This uses the procedure defined in the previous section, passing the name of the source, model, and deployment. (If you used different names for these, substitute accordingly.)
  • Once the results of that processing are obtained, we publish the result back to the channel.
  • This is done using the NaturalLanguageCore.publishResponse().

Save the rule, and our Vantiq Application is operable.

Verify the Overall Flow

At this point, before extending anything, we can verify that our application works as expected.
As outlined, the Vantiq natural language set includes a variety of commands.
We can use those to test our overall system.
The following image shows examples from that set, including hi, please describe projects, and show active collaborations since the first of september.
In these examples, the user utters something (types in an utterance), and the system responds with the results (i.e., it interprets the utterance, and executes the intent using the entities derived from the utterance, returning the result).

slack Application with Vantiq Utterances

In the image above, the lines labeled …_local_bot are the responses from Vantiq to the Slack user.

The data necessary for medical personnel to find patients by their conditions can be found in this section, and is available via the Vantiq intent set. (Note that in our tutorial thus far, these queries are not yet runnable, as we have not yet added the Patients type and instances.
If you wish to run them now,
please see the Add Sample Data section.)

slack List Patients

and, with some specifics

slack query Patients

However, this requires our medical personnel to think like Vantiq application people.
That is, it requires (Vantiq) application literacy.

Our goal is to add some people-literate capabilities to our application.
In this case, we want the application to present itself in terms that medical people will understand.

Extend the CLU Application

Once we establish the complete flow, we are ready to enhance our natural language capabilities to accept more hospital-friendly language.
We will start with the CLU application.

(Note that we are not representing that this is the best medical interface.
It is but an example.)

To enhance the CLU application, we must consider the intent set we want to add.
For this tutorial, we would like to be able to ask about our hospital’s patient population in terms that are appropriate for medical personnel. Specifically, we would like to support utterances like these.

  • who has diabetes in the emergency room
  • who’s in admissions
  • who’s in the critical care ward
  • who is sick
  • who has been admitted
  • who has tachycardia in the ER

Recall that architecturally, our CLU application is responsible for interpreting these utterances, analyzing them, and returning the appropriate intent and associated entities.
The Vantiq application is responsible for understanding these intents and entities, and executing them to provide the user with the appropriate response.

Language Design

In this case, all of these different phraseologies can be handled with a single intent.
The intent will query the hospital population for the conditions and locations outlined above.
The entities of interest are the conditions involved and the location queried.
In the natural language to be developed, the user can specify either the patient condition, the patient location, neither, or both.

Add Entities

Although somewhat counterintuitive, we recommend defining the entities first.
The entities, as outlined above, refer to the condition and location queried.

In order to maintain some structure, we will name our intents and entities using the prefix health.
Recall that the intents and entities provided by the system are prefixed with system.. We saw this convention in use in our NatLangTutorial.processUtterance() procedure.

To create an entity, we go back to the Language Studio, navigate to Schema definition, and select the Entities tab.

From there, we will see something like this.

CLU Entities

To create an entity, press Add.
This will present us with an add entity dialogue.

Add Hospital Room

Our entity will be named health.room_special, and its type is List.
Type in the name, and select List for the entity type.
When we press Add Entity, we will be presented with a screen on which to provide the list of values.
This value list has notions of list keys and synonyms.
Which values you put in list keys vs. synonyms depends upon your usage.

List entities specify a set of values that are identified with that entity.
If there is a known list of values, this makes understanding the entities when the IntentSpecification is returned much easier.
However, it works only when there is a known set of values.

Add Hospital Room Values

(Note that to add a list, enter the list key in the box, then tab to the area next to it to add synonyms. This is not particularly obvious.)

Our second entity will also be a List entity.

For purposes of this tutorial, we will pretend that there are only 3 medical conditions of interest: diabetes, tachycardia, and hypertension.
Thus, we create 3 (+ 1 for the general case) List entities.

To create List entities, press Create, add the name, and select List as the entity type.
For names, we will use health.condition_ as the general prefix for this collection of condition lists,
so they are easily recognized as health conditions.

Add List Entity

When we press Done, we will again be presented with a screen on which to provide the lists.
For our case, we will simply add diabetes and some synonyms by selecting Add new list from the List portion of the screen.

Diabetes List

Similarly, we will define entities for other conditions – first with Add new list,
then filling in the lists as shown below.

For health.condition_cardiovascular,

Cardiovascular List

For health.condition_hypertension,

Hypertension List

We will also add a general case for folks who are just generally sick (meaning that the query is not condition specific). Thus, for health.condition_general,

Sick List

We will see the processing differences when we look at the processing required to handle these two types of entities.

With that, our entity list is complete.

Add intents

We now move on to adding our intent.
As noted above, we can get by here with a single intent, one that we will call health.patientsByCondition.

To add an intent, we select Intents from the Schema definition page.

To add an intent, press Add.
This will present you with a simple dialogue; provide the intent name health.patientsByCondition, and press Add Intent.

Add Intent

Once we save this intent, we will be placed in the intent where we can begin to add utterances.

Thus far, we have defined the structure of our intent set with the entity and intent definitions.
Adding the utterances is where this structure is used to interpret the language.
As we add utterances, CLU will attempt to categorize things and offer its interpretation,
as shown in the following images.
In these windows, we can correct that interpretation as required.

Using the set of utterances in Appendix Utterances,
add these utterances now.
Our first example is a simple one.
We will add the utterance who is sick to the health.patientsByCondition intent using the Data labeling area.

Once added, we see it interpreted.

Add Utterance Done

For each utterance added,
check that CLU assigned the correct
entity.
Entities here are shown under the word or words recognized.
You will likely have to select the words to assign to the entities.

In the next image, we add the utterance who’s in the ER with diabetes.

ER Diabetes

In the result, we can see that er was assigned to the health.room_special entity,
and diabetes was assigned to health.condition_diabetes.
List entities handle this work for us. Note that you may have to perform this assignment yourself. Repeated such assignments will improve the model’s accuracy.

Now, repeat this process with other utterances from the appendix.

For example, try the utterance, who’s in room 1234?
When that is added, the 1234 should be recognized as the builtin entity number.

CLU Add Room Number Result

This appears in the Language Studio as number.
We will see more about this when we look at Entity Processing.

Continue adding the utterances in the Appendix.
When this is complete, we will have all utterances entered.

We should then see something like the following.
(Note that the order of the utterances is not important.)

partial utterances

When testing our CLU application reveals weaknesses, we generally add more utterances to train the application more fully.

In building this tutorial, there were quite a few utterances defined before testing was reasonably successful.

Train and Deploy

Once we have our language defined, we train and deploy as shown after our original import.

As part of training, we test that our new utterances are processed as expected.
For example, who’s in ICU with diabetes

Test ICU

Here, we see that our intent is correct (health.patientsByCondition),
and the entities are properly identified.

In practice, we will end up back here often as we perfect our CLU application.

We should note that the set of utterances provided is not the complete set understood.
CLU uses these to construct its understanding of the intent and will interpret other utterances as it understands them.
For example, our collection of utterances includes who’s in the ER but does not include who’s in the ER with hypertension.
CLU will make that generalization.
Sometimes this is good, sometimes not.
This is why we must continue to test our CLU application and refine its understanding.

Once training, testing, and deployment are done, our CLU application is complete.
We can now move on to extending our Vantiq application to understand and use these new entities and intents.

Add Sample Data

To run our health queries, we need some test data.
We produce it with a simple procedure.

The Patients List

For this simple example, we have a very simple data structure. To create a type in an existing Project, use the Add button to select Type…, and, from there, use New Type.
We will define a single type, Patients, that contains the following information.

  • name: String
  • room: String
  • age: Integer
  • bpSystolic: Integer (the patient’s blood pressure’s systolic component – the “upper” number)
  • bpDiastolic: Integer (the patient’s blood pressure’s diastolic component)
  • heartRate: Integer
  • glucose: Integer

The following procedure creates that type and adds some sample data.

PROCEDURE NatLangTutorial.addPatients()

// First, let's see if the type already exists.  If so, we're done.
// Note here that we do not validate that this is the type for this tutorial.
// So if there's already a type with that name but different properties,
// confusion may result.

var ptType = SELECT ONE name from types where name == "Patients"
var result = "notCreated"

if (ptType == null ) {
    log.info("Creating type patients for Natural Language Tutorial")
    CREATE TYPE Patients (  name String,
                            room String,
                            age Integer,
                            bpSystolic Integer,
                            bpDiastolic Integer,
                            heartRate Integer,
                            glucose Integer)
        INDEX UNIQUE name
        WITH naturalKey = ["name"]

    result = "created"
}

var patientSet = []
patientSet.push({name: "fred", room: "ER", age: 50, bpSystolic: 120, bpDiastolic: 79, heartRate: 102, glucose: 99})
    // Fred has tachycardia
patientSet.push({name: "wilma", room: "ICU", age: 49, bpSystolic: 142, bpDiastolic: 95, heartRate: 77, glucose: 82})
    // Wilma has hypertension
patientSet.push({name: "barney", room: "102", age: 50, bpSystolic: 150, bpDiastolic: 102, heartRate: 88, glucose: 129})
    // barney has hypertension and diabetes
patientSet.push({name: "betty", room: "admissions", age: 52, bpSystolic: 120, bpDiastolic: 79, heartRate: 80, glucose: 85})
    // betty is healthy (according to these measurements)
patientSet.push({name: "kermit", room: "2204", age: 62, bpSystolic: 150, bpDiastolic: 110, heartRate: 92, glucose: 76})
    // Kermit has hypertension
patientSet.push({name: "cookie monster", room: "ICU", age: 25, bpSystolic: 95, bpDiastolic: 75, heartRate: 88, glucose: 150})
    // cookie monster has diabetes
patientSet.push({name: "animal", room: "ER", age: 43, bpSystolic: 120, bpDiastolic: 79, heartRate: 150, glucose: 99})
    // Animal has tachycardia

for (p in patientSet) {    
    upsert Patients(name = p.name, room = p.room, age = p.age, bpSystolic = p.bpSystolic,
                bpDiastolic = p.bpDiastolic, heartRate = p.heartRate, glucose = p.glucose)
}

return result

This procedure is designed so it can be run repeatedly to reset data if necessary.
Please execute this procedure now so that we have a context and some data within which to work.
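To confirm the load (an illustrative check using the type defined above), run the procedure and query the result:

var status = NatLangTutorial.addPatients()
var patients = SELECT * FROM Patients
log.info("addPatients returned {}; {} patients present", [status, patients.size()])

With the sample set above, seven patients should be present.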

Now that we have some data and a structure, we can look at how to process it in the natural language context.

Extend the Vantiq Application

From here, extending our Vantiq application is not difficult.

  • Procedures to understand the entities
  • Procedure to execute our new intent
  • Extend our utterance processing for this new intent

Entity Processing

The natural language interpreter returns the intent and the list of entities and their associated values.
To understand the context required for intent execution,
our Vantiq application will have to interpret the entities in the context of the intent.
See the intent specification section of the Natural Language Guide for more details.
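As an illustrative sketch (the authoritative structure is described in the Natural Language Guide), the interpretation for an utterance like who’s in the ER with diabetes carries the intent plus a list of named entities:

{
  "intent": "health.patientsByCondition",
  "entities": [
    { "name": "health.room_special", "value": "er" },
    { "name": "health.condition_diabetes", "value": "diabetes" }
  ]
}

The entity names are what the procedures in the following sections key on.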

Extend the Application: Health Conditions

Disclaimer

Nothing in the conditions, or the interpretation thereof, is to be taken as medical advice, diagnosis, or anything remotely related.

Any condition interpretation is provided solely for the purpose of this tutorial.

In Vantiq terms, we are going to turn the various health.condition entities into query conditions.
That is, we will look at each condition and turn it into a where clause.

We define the procedure determineConditionClause (still in the NatLangTutorial service).

PROCEDURE NatLangTutorial.determineConditionClause(interpretation Object)

var conditionEntity = null
var conditionClause = null
var foundError = null
var result

// First, search the entity list for a condition.  Not present is fine

for (e in interpretation.entities until (conditionEntity != null)) {
    if (e.name.startsWith("health.condition_")) {
        conditionEntity = e.name    // In this case, we don't need the details
    }
}

if (conditionEntity != null) {
    if (conditionEntity == "health.condition_diabetes") {
        conditionClause = {}
        var comparison = {}
        comparison["$gt"] = 99
        conditionClause.glucose = comparison
    } else if (conditionEntity == "health.condition_cardiovascular") {
        conditionClause = {}
        var comparison = {}
        comparison["$gt"] = 100
        conditionClause.heartRate = comparison
    } else if (conditionEntity == "health.condition_hypertension") {
        conditionClause = {}
        var comparisonSys = {}
        // systolic pressure > 139
        comparisonSys["$gt"] = 139
        var comparisonDiastolic = {}
        // diastolic pressure > 89
        comparisonDiastolic["$gt"] = 89
        var systolic = {}
        systolic.bpSystolic = comparisonSys
        var diastolic = {}
        diastolic.bpDiastolic = comparisonDiastolic
        // If either systolic or diastolic are over their limits...
        conditionClause["$or"] = [systolic, diastolic]
    } else if (conditionEntity == "health.condition_general") {
        // In this case, we do nothing -- no condition.  Leaving "if" statement in for documentation
    } else {
        foundError = "Condition " + conditionEntity + " was not recognized\n"
    }
}

result = { error: foundError, clause: conditionClause }

return result

This procedure looks for the various health.condition entities, constructing an appropriate condition.
If no such entity is present, no query condition is constructed.
This produces the same result as the health.condition_general entity:
we simply get a list of all the patients in our hospital.
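The where clauses the procedure builds use the MongoDB-style operator syntax shown above. As a quick illustration of the same mapping (a Python sketch for clarity, not Vantiq code):

```python
def condition_clause(entity_name):
    """Map a health.condition entity name to a where-clause dict, or None."""
    if entity_name == "health.condition_diabetes":
        return {"glucose": {"$gt": 99}}
    if entity_name == "health.condition_cardiovascular":
        return {"heartRate": {"$gt": 100}}
    if entity_name == "health.condition_hypertension":
        # Either pressure being over its limit qualifies
        return {"$or": [{"bpSystolic": {"$gt": 139}},
                        {"bpDiastolic": {"$gt": 89}}]}
    # health.condition_general (or no condition entity): no restriction
    return None

print(condition_clause("health.condition_diabetes"))
# -> {'glucose': {'$gt': 99}}
```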

The value of using List entities here is that, as new terms are needed for, say, hypertension, they can be added to the CLU application; the Vantiq application remains unaware.
Similarly, if this intent set is defined for a different culture (e.g., French, Japanese),
that definition will include the same entities.
The underlying natural language mapping is completely different, but the Vantiq application need not change.

Extend the Application: Hospital Locations

This section involves adding the understanding of hospital locations to our Vantiq application.
We allow these locations to be specified as one of a set of known locations (ER, ICU, admitting office) or by room number.

From the natural language processor, a location may appear in one of two ways:

  • health.room_special entity.
    • This is a list entity that understands various terms for those special rooms in a hospital.
  • number entity
    • This is a CLU built in, and will appear when the utterance contains a number.
      • Things like who’s in room 102 will provide the 102 in a number entity.

Our procedure is defined as follows.

PROCEDURE NatLangTutorial.determineLocationClause(interpretation Object)

log.debug("determineLocationClause() from entities {}", [interpretation.entities])
var foundError = null
var result

// First, search the entity list for a location.  Not present is fine

var locationClause = null
var locationEntity = null
var locationValue = null
for (e in interpretation.entities until (locationEntity != null)) {
    if (e.name == "health.room_special") {
        locationEntity = e.name
        if (["emergency", "emergency room", "er"].contains(e.value.toLowerCase())) {
            locationValue = "ER"
        } else if (["icu", "intensive care", "critical care", "critical care ward"].contains(e.value.toLowerCase())) {
            locationValue = "ICU"
        } else if (["admissions", "admitting office"].contains(e.value.toLowerCase())) {
            locationValue = "admissions"
        } else {
            foundError = "Unrecognized hospital location: " + e.value
        }
    } else if (e.name == "number.number") {
        locationEntity = e.name
        locationValue = e.value
    }

}

if (foundError == null) {
    if (locationEntity != null) {
        // Here, we could have either a room number or a health.room_special.  As it turns out, we treat them the same here.
        locationClause = {}
        locationClause.room = locationValue
    }
}
result = { error: foundError, clause: locationClause }

log.debug("determineLocationClause(): result {}, locationEntity: {}, locationValue: {}", [result, locationEntity, locationValue])
return result

Here, if the number entity is present, we use the value as our query.
If, however, the health.room_special entity is present,
this procedure must examine the value of the entity, and construct a query condition based on the canonical value of that term.
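That canonicalization step is essentially a lookup table. A Python sketch of the same mapping (the synonym lists mirror the VAIL code above):

```python
# Map each recognized surface term to its canonical room value.
CANONICAL_ROOMS = {
    "emergency": "ER", "emergency room": "ER", "er": "ER",
    "icu": "ICU", "intensive care": "ICU",
    "critical care": "ICU", "critical care ward": "ICU",
    "admissions": "admissions", "admitting office": "admissions",
}

def canonical_room(value):
    """Return the canonical room name, or None if the term is unrecognized."""
    return CANONICAL_ROOMS.get(value.lower())

print(canonical_room("Emergency Room"))  # -> ER
```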

In some cases this may be valuable;
in others, it may be easier to provide a number of List entities as was done for the health.condition cases.

In any event, the query is constructed and returned.

Procedures for Custom Intents

To handle our new intent, we will add a new procedure that uses the entity evaluations above to provide our results.

PROCEDURE NatLangTutorial.executeCustomIntent(interpretation Object)

// Here, we'll need to determine which intent we got & interpret the entities appropriately.
// For our simple tutorial, we have only one intent, so we'll just deal with it here.

var response = ""
if (interpretation.intent == "health.patientsByCondition") {   
   // To determine our subfunction, we'll need to look at the entities returned.
   //
   // NOTE: These are samples for the tutorial.  THIS MUST NOT BE INTERPRETED AS MEDICAL ADVICE
   // 
   // In particular:
   //   health.condition_general -- list all Patients
   //   health.condition_diabetes -- list Patients with glucose > 99
   //   health.condition_cardiovascular -- list Patients with a heart rate > 100
   //   health.condition_hypertension -- list Patients with bpSystolic > 139 OR bpDiastolic > 89

   var foundError = null
   var conditionResults = NatLangTutorial.determineConditionClause(interpretation)
   var locationResults  = NatLangTutorial.determineLocationClause(interpretation)
   var qryCondition = null

   if (conditionResults.error) {
       foundError = conditionResults.error
   } else if (locationResults.error) {
       foundError = locationResults.error
   }

    if (foundError == null) {
       // Now, we combine the various conditions we may have

       if (locationResults.clause && conditionResults.clause) {
           qryCondition = {}
           qryCondition["$and"] = [locationResults.clause, conditionResults.clause]
       } else if (locationResults.clause) {
           qryCondition = locationResults.clause
       } else if (conditionResults.clause) {
           qryCondition = conditionResults.clause
       }

       var rowCount = 0
       SELECT * FROM Patients as row WHERE qryCondition {
           // the \u2022 character in the format statement below is a bullet character
           var thisRow = 
              format("\u2022 Name: {0}, Age: {1}, Room: {2}, BP: {3}/{4}, Pulse: {5}, Glucose: {6}\n",
                row.name, row.age, row.room,
                row.bpSystolic, row.bpDiastolic, row.heartRate,
                row.glucose)
           response += thisRow
           rowCount += 1
       }
       response += "\nTotal: " + rowCount + " Patients"
    } else {
        // Had some error interpreting the conditions.  Return that
        response = foundError
    }
} else {
    response = "I don't know how to perform \"" + interpretation.query + "\" (intent: " +
               interpretation.intent + ")"
}

interpretation.response = response
return interpretation

This procedure calls the entity interpreters in the previous sections.
Once it has the query conditions, it merges them together as appropriate.
That is, if the utterance was who’s got diabetes in the ER, then we want a query that looks for a condition of diabetes AND a location of the ER.
If the utterance was who’s got diabetes, there is no location specified.
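The merge rule can be sketched in a few lines (Python, for illustration only):

```python
def merge_clauses(location, condition):
    """Combine optional location and condition clauses into one query condition."""
    if location and condition:
        # Both present: the patient must match both
        return {"$and": [location, condition]}
    # At most one present: use it directly, or None when neither is given
    return location or condition

print(merge_clauses({"room": "ER"}, {"glucose": {"$gt": 99}}))
# -> {'$and': [{'room': 'ER'}, {'glucose': {'$gt': 99}}]}
```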

Once the condition is determined, we run a query over our Patients type, and return the results.

Extend our Utterance Processing

Once we have our custom intent procedure ready, we simply include it in the processUtterance() procedure, replacing the else clause with a call to our executeCustomIntent() procedure.
The completed procedure is shown below.

PROCEDURE NatLangTutorial.processUtterance(utterance String, languageService String)

// Let's figure out if we can translate these into actions...
var response = "Default Response Value"

try {
    var interpretation = NaturalLanguageCore.interpretConversationalQuery(utterance, 
                           languageService, "VantiqTutorialModel", "VantiqTutorialDeployment")
    log.debug("ProcessUtterance() interpretation: {}", [interpretation.stringify(true)])
    if (interpretation.errorMsg != null) {
        // Then, we had some error.  Let's just dump that as the response and move on
        log.debug("ProcessUtterance(): Found Error: {} -- {}", [interpretation.errorMsg, languageService])
        response = interpretation.errorMsg
    } else if (interpretation.response.intent.startsWith("system.")) {
        log.debug("ProcessUtterance():  Attempting interpretation of intent: {}", [interpretation.response.intent])
        var interpretedString = NaturalLanguageCore.executeSystemIntent(interpretation.response)
        response = interpretedString.response
    } else {
        log.debug("ProcessUtterance():  Attempting interpretation of custom intent: {}", [interpretation.response.intent])
        var interpretedString = NatLangTutorial.executeCustomIntent(interpretation.response)
        response = interpretedString.response
    }
}
catch (error) {
    log.error("ProcessUtterance(): Error Encountered: " + stringify(error))
    response = error.message
}

return response

With that procedure altered, both our CLU application and our Vantiq application are complete.

Verify, Test, Retrain, Redeploy, Repeat

Now that our application is complete, we can verify that it works as expected.

We verify our complete application using our external chat channel.
Enter some of the expected commands, and verify that the correct results are returned.

For example:

[Screenshot: Our Health Application]

As language issues (in the CLU application) are found, the solution is usually to return to the
Language Studio and add or amend utterances (see Add Intents) or add further terminology to the List entities.
Once that is done, train and deploy the CLU application again.

As operational issues are found, return to the Vantiq system and make the appropriate corrections.

Once these steps are finished, our natural language addition to the application is complete.

Appendix: Utterances

This section lists the utterances used (at the time of this writing) to train our CLU application – specifically the intent outlined in this tutorial.
The basic function is the ability to ask questions about the hospital population in terms of location and condition.

We support the “official terms” as well as slang for conditions (e.g. tachycardia vs. heart trouble) as well as for locations (e.g. emergency room vs. ER).
Similarly, there are terms in both the noun and adjectival forms (e.g. diabetes mellitus vs. diabetic).
Generally, for conditions, these are covered in the various health.condition_ entities outlined in Add Entities.

Note that there are quite a few utterances here.
This may seem like a lot for such a simple case.
It is, but this is largely what it takes to make a natural language understanding project (in this case, a CLU application) successful.
These are, generally, big data systems.
The more data, the better.
The more utterances provided, and the more variations of similar phrasings,
the more likely the system is to recognize the desired intents and to generalize
that understanding to similar phraseologies.
Fundamentally, the more examples, the better.

  1. who’s diabetic
  2. who has diabetes
  3. who’s in room 1234
  4. who’s in the er
  5. who’s hypertensive
  6. cardiac patients in icu?
  7. who has hbp
  8. who has cardiovascular issues
  9. who has hypertension
  10. whos got heart trouble
  11. whos hypertensive
  12. whos diabetic
  13. who is diabetic
  14. who is hypertensive
  15. admit hbp er
  16. admits heart condition icu
  17. admit hypertension
  18. admit tachycardia
  19. admit diabetes
  20. admit er
  21. admitted critical care
  22. admitted er
  23. admits hbp
  24. admits er
  25. admits
  26. admitted sick
  27. admit sick
  28. admits with tachycardia
  29. admits with hypertension
  30. patients with diabetes
  31. patients in er
  32. who’s sick

Trademarks

  • Microsoft, Azure, and Microsoft Teams are trademarks of the Microsoft group of companies.