Match unknown words


    Usually, your agent uses annotations to define the sentence structure. But achieving a well-defined and consistent structure can be difficult, if not impossible. One way to solve this problem is to use the “Any” option.

    “Any” option to the rescue

    When annotating a formulation, a special option called “Any” is available. It allows the agent to match anything at that position in the sentence and bind it to the associated JavaScript variable in the solution.
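    For instance, in the solution of the “Address Tutorial” agent, the text captured by the annotation can be read from its JavaScript variable. The snippet below is only a sketch: the variable name (“route_names”) and the simulated value are assumptions for illustration, and the exact names depend on how you annotated the formulation.

    // Hypothetical solution snippet. We simulate the value the platform
    // would inject for the "route_names" annotation; with the "Any"
    // option enabled, this can be any text, e.g. "Louise Weiss".
    const route_names = "Louise Weiss";

    if (route_names) {
      console.log("Matched road name: " + route_names);
    }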

    You already saw how an agent can match an unexpected road name for the “Address Tutorial” agent that we created in the “Getting started” tutorial. Let’s dig a little deeper with this example.

    Without the “Any” option

    First, remove the “Any” option from the “route_names” annotation if it is already set. Do so on every formulation in the “address” interpretation. The associated entities list should contain only two entities: “Champs Elysées” and “Rivoli”.

    Try the sentence “12 avenue Rivoli 75019 Paris” in the console and note the matching interpretation’s score. Since it is a perfect match, the score is 1.0.

    Console perfect match

    Change the road name to something that is not part of the entities list, such as “12 avenue Louise Weiss 75019 Paris”, and try again in the console. Nothing matches.

    Console nothing match

    With the “Any” option

    Now re-enable the “Any” option on the “route_names” annotation. Retry the previous sentence with the unknown road name, in this example “12 avenue Louise Weiss 75019 Paris”.

    The interpretation matches because, even though part of the sentence is still unrecognized, the NLP engine is now forced to fill in the blank. Note that the score is lower when this happens.

    Console any match

    If you type the sentence with the known road name, here “12 avenue Rivoli 75019 Paris”, the score is 1.0 again, even with the “Any” option still enabled.

    Console any perfect match

    What happens is that the annotated interpretation or list of entities is tried first; if nothing matches, the “Any” option is used as a fallback and still collects the corresponding text. In that case, the associated score is lower than when the “Any” option is not triggered.
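    Conceptually, this fallback behaves like the sketch below. This is not the engine’s actual implementation: the entity list, the penalty value, and the function name are chosen arbitrarily, and only illustrate why an “Any” match scores lower than an exact entity match.

    // Conceptual sketch only, not the real NLP engine.
    const knownRoutes = ["Champs Elysées", "Rivoli"];
    const ANY_PENALTY = 0.25; // arbitrary illustration value

    function matchRouteName(word, anyEnabled) {
      if (knownRoutes.includes(word)) {
        return { value: word, score: 1.0 };               // exact entity match
      }
      if (anyEnabled) {
        return { value: word, score: 1.0 - ANY_PENALTY }; // "Any" fallback, lower score
      }
      return null;                                        // no match at all
    }

    matchRouteName("Rivoli", true);        // { value: "Rivoli", score: 1.0 }
    matchRouteName("Louise Weiss", true);  // { value: "Louise Weiss", score: 0.75 }
    matchRouteName("Louise Weiss", false); // null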
