# Smart Conversations callback information

When using the Smart Conversations functionality, Machine Learning and Artificial Intelligence analyses are delivered through specific callbacks on the Conversation API.

Note: You can configure, at most, five webhooks per Conversation API app. For more information on other Conversation API webhook triggers and callbacks, click [here](/docs/conversation/callbacks).

To take advantage of Smart Conversations functionality, you must configure your solution to subscribe to the `SMART_CONVERSATION` webhook or the `MESSAGE_INBOUND_SMART_CONVERSATION_REDACTION` webhook.

## Message Inbound Smart Conversation Redaction trigger

The `MESSAGE_INBOUND_SMART_CONVERSATION_REDACTION` webhook uses machine learning models to analyze messages from end-users, allowing the system to detect offensive content (in both text and images). This can be used to assist in content filtering and moderation, effectively allowing you to block undesired messages before they reach their intended destination. Additionally, this webhook redacts known PII to avoid the propagation of sensitive data.

You can use this trigger instead of the `MESSAGE_INBOUND` trigger. It delivers a payload with a `message_redaction` field instead of a `message` field, allowing you to easily differentiate the callbacks:

MESSAGE_INBOUND example

```json
{
  "message": {
    "contact_message": "..."
  }
}
```

MESSAGE_INBOUND_SMART_CONVERSATION_REDACTION example

```json
{
  "message_redaction": {
    "contact_message": "..."
  }
}
```

There are two redaction types:

- If **offensive content** (for example, sexual, aggressive, or drug related content) is detected, the message text is replaced with `{Message masked due to inappropriate content}`. The whole message is masked because the problematic content tends not to be limited to a single word or token.
- If **personally identifiable information** (PII) is detected, the sensitive words or tokens are labeled and masked.
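Because the two triggers differ only in the top-level field name, a webhook receiver can branch on which key is present. The following is a minimal sketch (the handler function itself is hypothetical; the field names are taken from the examples above):

```python
# Hypothetical receiver sketch: an inbound callback payload carries either a
# "message" field (MESSAGE_INBOUND) or a "message_redaction" field
# (MESSAGE_INBOUND_SMART_CONVERSATION_REDACTION), never both.
def classify_inbound(payload: dict) -> str:
    """Return the trigger that produced this inbound callback payload."""
    if "message_redaction" in payload:
        return "MESSAGE_INBOUND_SMART_CONVERSATION_REDACTION"
    if "message" in payload:
        return "MESSAGE_INBOUND"
    raise ValueError("not an inbound message callback")
```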
You can activate each one of the redaction types under the Smart Conversations section for your App. The redaction information is also available in the `SMART_CONVERSATION` callback. For more information on what is considered PII and offensive content as well as the list of PII masking labels, see the documentation on [the PII results array](#the-ml_pii_result-array) and [the offensive analysis array](#the-ml_offensive_analysis_result-array).

## Smart Conversation trigger

The `SMART_CONVERSATION` webhook allows you to subscribe to notifications that provide machine learning analyses of inbound messages from end-users on the underlying channels. You can leverage this knowledge in multiple creative ways, including building analytic dashboards and reports, integrating with your products or third-party systems, automating tasks, and more.

In addition to including message identification information, these notifications can deliver the following services and analyses:

| Feature | Description | Information and sample requirements | More information |
| --- | --- | --- | --- |
| Sentiment Analysis | Provides an assessment of the likelihood that the emotional tone of a message is positive, negative, or neutral. For information on how this analysis is represented in a callback, see the description of [the ml_sentiment_result array](/docs/smart-conversations/callbacks/#the-ml_sentiment_result-array). | There is no minimum sample set requirement. | |
| Natural Language Understanding (NLU) | Provides an assessment of the likelihood that the message corresponds with a specific set of intents. For example, the likelihood that a message is a greeting, a request for information, or an expression of satisfaction or dissatisfaction. For information on how this analysis is represented in a callback, see the description of [the ml_nlu_result array](/docs/smart-conversations/callbacks/#the-ml_nlu_result-array). | There is no minimum sample set requirement. However, we recommend you have [at least 50](https://community.sinch.com/t5/Sinch-AI/Does-NLU-always-extract-the-data-correctly/ta-p/9662) samples of expressions for each intent to achieve accurate results. | More contextual information regarding Natural Language Understanding can be found on our [Community site](https://community.sinch.com/t5/Sinch-AI/tkb-p/SinchAI/label-name/%20NLU). |
| Image Comprehension Engine | Provides an analysis of images included in the received message. This includes the identification of probable document types in the image, optical character extraction, and the assignment of values to probable fields identified on the image. For information on how this analysis is represented in a callback, see the description of [the ml_image_recognition_result array](/docs/smart-conversations/callbacks/#the-ml_image_recognition_result-array). | For Document Image Classification (DIC), a data set of at least 100 images for each document class is required. For Document Field Classification (DFC), the regex patterns for each field to be extracted must be provided. For more information, click [here](https://community.sinch.com/t5/Sinch-AI/How-does-IRIS-work/ta-p/9609). | More contextual information regarding the Image Comprehension Engine, which is also called IRIS, can be found on our [Community site](https://community.sinch.com/t5/Sinch-AI/tkb-p/SinchAI/label-name/IRIS). |
| PII Masking | Provides an analysis of content included in the received message to identify and mask sections of text that correspond to any representation of information that discloses the identity of an individual. Name, phone number, national ID, email, and other sensitive pieces of information are considered Personally Identifiable Information (PII) and will be masked. For information on how this analysis is represented in a callback, see the description of [the ml_pii_result array](/docs/smart-conversations/callbacks/#the-ml_pii_result-array). | There is no minimum sample set requirement. The recognized PII types are listed below the table. You can enable or disable each one to configure how PII should be identified during redaction. | |
| Offensive Content Analysis | Provides an assessment of the likelihood that the analyzed text or image message contains offensive content (for example, explicit images, hate speech, offensive language, etc.). For information on how this analysis is represented in a callback, see the description of [the ml_offensive_analysis_result array](/docs/smart-conversations/callbacks/#the-ml_offensive_analysis_result-array). | There is no minimum sample set requirement. | |
| Transcription: Speech to Text | Provides a transcription of the analyzed audio. For information on how this analysis is represented in a callback, see the description of [the ml_speechto_text_result array](/docs/smart-conversations/callbacks/#the-ml_speechto_text_result-array). | There is no minimum sample set requirement. | |

By default, the following common PII are supported and recognized:

- `AMOUNT_OF_MONEY`: Any monetary value. Example: $5.99
- `CARD_NUMBER`: Credit/Debit card number. Example: 1234-5678-9123-4567
- `DATE`: Any date written in common formats. Example: 01/01/1990
- `DRIVER_NUMBER`: Driver's license number. Example: AB123456
- `EMAIL`: Email addresses. Example: bob.sinch@example.com
- `GENDER`: A person's gender. Example: Male
- `NATIONAL_ID`: Common national IDs for different countries, such as SSNs in the USA and CPF in Brazil. Example: 000-00-0000
- `ORDINAL`: Ordinal number. Example: 3rd
- `PASSPORT_NUMBER`: Passport number.
- `PHONE_NUMBER`: MSISDN or phone/mobile number. Example: (123) 456 7899
- `TIME`: Any time written in a standard format. Example: 10:00
- `URL`: Uniform Resource Locator. Example: www.bobsinch.com
- `VISA_NUMBER`: The visa permit number, also known as the visa foil number of the visa document.
- `ZIPCODE`: Postal (ZIP) code. Example: 473121-829
- `NUMBER`: Any number that doesn't match one of the previous patterns. Example: 5
- `PERSON`: Full name, first name, or last name. Example: Bob Sinch
- `LOCATION`: Home address, country, state, or city. Example: USA

This information allows you to further customize your solution with automated responses. For example, you can create a chatbot to respond to customers differently based on the intent of the customer message.
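The intent-based routing just described might look like the following sketch. The intent names follow the NLU examples in this document; the reply table and function are illustrative only:

```python
# Hypothetical routing sketch: map the top-scoring intent from an
# ml_nlu_result entry to a canned reply, with a fallback for intents
# the bot doesn't handle.
REPLIES = {
    "chitchat.greeting": "Hello! How can we help?",
    "chitchat.bye": "Goodbye!",
}

def route_by_intent(nlu_entry: dict, fallback: str = "Let me connect you to an agent.") -> str:
    """Return a reply for the most probable intent of a message."""
    return REPLIES.get(nlu_entry["intent"], fallback)
```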
Additionally, you could program the bot to connect a customer with a human operator in the event that the sentiment of a received message crosses a pre-defined negative threshold.

## Smart Conversations callbacks

Each callback dispatched by the Conversation API has a JSON payload with the following top-level properties:

| Field | Type | Description |
| --- | --- | --- |
| `project_id` | string | The project ID of the app which has subscribed for the callback. |
| `app_id` | string | The ID of the subscribed app. |
| `accepted_time` | ISO 8601 timestamp | Timestamp marking when the channel callback was accepted/received by the Conversation API. |
| `event_time` | ISO 8601 timestamp | Timestamp of the event as provided by the underlying channels. |
| `message_metadata` | string | Metadata associated with the conversation. |

The Smart Conversations callback is used to deliver machine learning analyses about received messages. The details are given in a top-level `smart_conversation_notification` field. It's a JSON object with the following properties:

| Field | Type | Description |
| --- | --- | --- |
| `contact_id` | string | The unique ID of the contact that sent the message. |
| `channel_identity` | string | The channel-specific identifier for the contact. |
| `channel` | string | The channel on which the message was sent. |
| `message_id` | string | The unique ID of the corresponding message. |
| `conversation_id` | string | The unique ID of the corresponding conversation. |
| `analysis_results` | object | The analysis provided by the Smart Conversations machine learning engine(s). The contents of the object are determined by the functionalities that are enabled for your solution. |

Each `analysis_results` object contains the results of the analyses you've enabled for your solution.
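Since `analysis_results` only carries the arrays for the analyses you have enabled, a consumer typically checks which keys are present before dispatching. A minimal sketch, using the field names documented above:

```python
# Sketch: given a parsed Smart Conversations callback, report which
# analysis arrays the notification carries (e.g. "ml_sentiment_result").
def present_analyses(callback: dict) -> list[str]:
    notification = callback.get("smart_conversation_notification", {})
    return sorted(notification.get("analysis_results", {}))
```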
For example, if you have enabled sentiment analysis and NLU analysis, you may get a callback similar to the one below:

```json
{
  "app_id": "01FW3DP26MEN4JKSME44JDXWC4",
  "accepted_time": "2022-07-15T14:31:52.458350165Z",
  "event_time": "2022-07-15T14:31:52Z",
  "project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
  "smart_conversation_notification": {
    "contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
    "channel_identity": "alphanumeric_identity",
    "channel": "TELEGRAM",
    "message_id": "01G814BT8NKT7VYQ7FA58MWJ10",
    "conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
    "analysis_results": {
      "ml_sentiment_result": [
        {
          "message": "Run sentiment & NLU analysis",
          "sentiment": "neutral",
          "score": 0.97966236,
          "results": [
            { "sentiment": "negative", "score": 0.0039568725 },
            { "sentiment": "neutral", "score": 0.97966236 },
            { "sentiment": "positive", "score": 0.016380679 }
          ]
        }
      ],
      "ml_nlu_result": [
        {
          "message": "Run sentiment & NLU analysis",
          "intent": "general.yes_or_agreed",
          "score": 0.6248218,
          "results": [
            { "intent": "general.yes_or_agreed", "score": 0.6248218 },
            { "intent": "chitchat.bye", "score": 0.2360245 },
            { "intent": "chitchat.how_are_you", "score": 0.06233201 },
            { "intent": "chitchat.greeting", "score": 0.03595746 },
            { "intent": "chitchat.thank_you", "score": 0.028020523 },
            { "intent": "general.i_dont_know", "score": 0.012405818 },
            { "intent": "general.no", "score": 0.00026780643 },
            { "intent": "chitchat.who_are_you", "score": 0.00017008775 }
          ]
        }
      ]
    }
  },
  "message_metadata": ""
}
```

The `analysis_results` are represented as a JSON object with the following properties:

| Field | Type | Description |
| --- | --- | --- |
| `ml_sentiment_result` | array | An array that contains the analyses of the sentiments of the corresponding messages. |
| `ml_nlu_result` | array | An array that contains the analyses of the intentions of, and entities within, the corresponding messages. |
| `ml_image_recognition_result` | array | An array that contains the image recognition analyses of the images identified in the corresponding messages. |

The `ml_sentiment_result`, `ml_nlu_result`, and `ml_image_recognition_result` arrays are described below.

### The ml_sentiment_result array

The `ml_sentiment_result` array may be included in your Smart Conversations callback. An example of a Smart Conversations callback payload that includes the `ml_sentiment_result` array is below:

```json
{
  "app_id": "01FW3DP26MEN4JKSME44JDXWC4",
  "accepted_time": "2022-07-15T14:27:16.528875627Z",
  "event_time": "2022-07-15T14:27:15Z",
  "project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
  "smart_conversation_notification": {
    "contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
    "channel_identity": "alphanumeric_identity",
    "channel": "TELEGRAM",
    "message_id": "01G8143CS9ZJ62H1487GZB7Q2C",
    "conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
    "analysis_results": {
      "ml_sentiment_result": [
        {
          "message": "run sentiment analysis",
          "sentiment": "neutral",
          "score": 0.9774604,
          "results": [
            { "sentiment": "negative", "score": 0.0030293926 },
            { "sentiment": "neutral", "score": 0.9774604 },
            { "sentiment": "positive", "score": 0.019510288 }
          ]
        }
      ]
    }
  },
  "message_metadata": ""
}
```

Each `ml_sentiment_result` is an array of JSON objects of the following structure:

| Field | Type | Description |
| --- | --- | --- |
| `message` | string | The message text that was analyzed. |
| `sentiment` | string | The most probable sentiment of the analyzed text. One of `positive`, `negative`, or `neutral`. |
| `score` | float | The likelihood that the assigned sentiment represents the emotional context of the analyzed text. `1` is the maximum value, representing the highest likelihood that the message text matches the sentiment, and `0` is the minimum value, representing the lowest likelihood that the message text matches the sentiment. |
| `results` | array | An array of JSON objects made up of `sentiment` and `score` pairs, where the `score` represents the likelihood that the message communicates the corresponding `sentiment`. |

Each JSON object in the `results` array is made up of `sentiment` and `score` fields, which are described below:

| Field | Type | Description |
| --- | --- | --- |
| `sentiment` | string | A potential sentiment of the analyzed text. One of `positive`, `negative`, or `neutral`. |
| `score` | float | The likelihood that the corresponding sentiment represents the emotional context of the analyzed text. `1` is the maximum value, representing the highest likelihood that the message text matches the sentiment, and `0` is the minimum value, representing the lowest likelihood that the message text matches the sentiment. |

### The ml_nlu_result array

The `ml_nlu_result` array may be included with your Smart Conversations callback. An example of a Smart Conversations callback payload that includes the `ml_nlu_result` array is below:

```json
{
  "app_id": "01FW3DP26MEN4JKSME44JDXWC4",
  "accepted_time": "2022-07-15T14:29:22.935294279Z",
  "event_time": "2022-07-15T14:29:22Z",
  "project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
  "smart_conversation_notification": {
    "contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
    "channel_identity": "alphanumeric_identity",
    "channel": "TELEGRAM",
    "message_id": "01G814786076SGDNHSMB67M3XN",
    "conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
    "analysis_results": {
      "ml_nlu_result": [
        {
          "message": "run nlu analysis",
          "intent": "chitchat.greeting",
          "score": 0.5713836,
          "results": [
            { "intent": "chitchat.greeting", "score": 0.5713836 },
            { "intent": "general.yes_or_agreed", "score": 0.19936033 },
            { "intent": "chitchat.bye", "score": 0.17034538 },
            { "intent": "chitchat.how_are_you", "score": 0.029416502 },
            { "intent": "chitchat.thank_you", "score": 0.027005624 },
            { "intent": "general.i_dont_know", "score": 0.0020965587 },
            { "intent": "chitchat.who_are_you", "score": 0.00020547185 },
            { "intent": "general.no", "score": 0.00018652831 }
          ]
        }
      ]
    }
  },
  "message_metadata": ""
}
```

Each `ml_nlu_result` is an array of JSON objects of the following structure:

| Field | Type | Description |
| --- | --- | --- |
| `message` | string | The message text that was analyzed. |
| `intent` | string | The most probable intent of the analyzed text. For example, `chitchat.greeting`, `chitchat.bye`, `chitchat.compliment`, `chitchat.how_are_you`, or `general.yes_or_agreed`. |
| `score` | float | The likelihood that the assigned intent represents the purpose of the analyzed text. `1` is the maximum value, representing the highest likelihood that the message text matches the intent, and `0` is the minimum value, representing the lowest likelihood that the message text matches the intent. |
| `results` | array | An array of JSON objects made up of `intent` and `score` pairs, where the `score` represents the likelihood that the message has the corresponding `intent`. |

Each JSON object in the `results` array is made up of `intent` and `score` fields, which are described below:

| Field | Type | Description |
| --- | --- | --- |
| `intent` | string | A potential intent of the analyzed text. For example, `chitchat.greeting`, `chitchat.bye`, `chitchat.compliment`, `chitchat.how_are_you`, or `general.yes_or_agreed`. |
| `score` | float | The likelihood that the corresponding intent represents the purpose of the analyzed text. `1` is the maximum value, representing the highest likelihood that the message text matches the intent, and `0` is the minimum value, representing the lowest likelihood that the message text matches the intent. |

### The ml_image_recognition_result array

The `ml_image_recognition_result` array may be included with your Smart Conversations callback.
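Once an image-recognition result arrives, a common first step is to collect the extracted text. The following sketch joins the OCR fragments of one `ml_image_recognition_result` entry, using the field names documented in this section:

```python
# Sketch: gather the OCR text from one ml_image_recognition_result entry.
# Each object in the "result" array is one analyzed section of the image;
# its "data" array holds the strings extracted from that section.
def ocr_text(entry: dict) -> list[str]:
    sections = entry["optical_character_recognition"]["result"]
    return [" ".join(section["data"]) for section in sections]
```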
An example of a Smart Conversations callback payload that includes the `ml_image_recognition_result` array is below:

```json
{
  "app_id": "01FW3DP26MEN4JKSME44JDXWC4",
  "accepted_time": "2022-07-15T14:30:18.741258673Z",
  "event_time": "2022-07-15T14:30:17Z",
  "project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
  "smart_conversation_notification": {
    "contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
    "channel_identity": "alphanumeric_identity",
    "channel": "TELEGRAM",
    "message_id": "01G8148YQRMAWCABMFWR9EAQFR",
    "conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
    "analysis_results": {
      "ml_image_recognition_result": [
        {
          "url": "image_url_example",
          "document_image_classification": {
            "doc_type": "test_document",
            "confidence": 1
          },
          "optical_character_recognition": {
            "result": [
              {
                "data": [
                  "Characters extracted from one section of an image."
                ]
              },
              {
                "data": [
                  "Characters",
                  "extracted from",
                  "another section of",
                  "the image."
                ]
              }
            ]
          },
          "document_field_classification": {
            "result": {
              "date": {
                "data": [
                  "DD/MM/YYYY"
                ]
              },
              "zipcode": {
                "data": [
                  "112 18",
                  "30301"
                ]
              }
            }
          }
        }
      ]
    }
  },
  "message_metadata": ""
}
```

Each object in the `ml_image_recognition_result` array represents an image identified in the processed message. Each JSON object has the following structure:

| Field | Type | Description |
| --- | --- | --- |
| `url` | string | The URL of the image that was processed. |
| `document_image_classification` | object | An object that identifies a document type within the image, along with a confidence level for that document type. |
| `optical_character_recognition` | object | An object containing a `result` array that reports the machine learning engine's character extraction results. |
| `document_field_classification` | object | An object containing a `result` object that reports on all identified fields, as well as the values assigned to those fields. |

The `document_image_classification` object is described below:

| Field | Type | Description |
| --- | --- | --- |
| `doc_type` | string | The document type that the analyzed image most likely contains. |
| `confidence` | float | The likelihood that the analyzed image contains the assigned document type. `1` is the maximum value, representing the highest likelihood that the analyzed image contains the assigned document type, and `0` is the minimum value, representing the lowest likelihood that the analyzed image contains the assigned document type. |

The `optical_character_recognition` object contains a `result` array. Each object in the `result` array represents a portion of an image that underwent optical character recognition. The objects in this `result` array contain `data` arrays that are described below:

| Field | Type | Description |
| --- | --- | --- |
| `data` | array | The `data` array contains the string(s) identified in one section of an analyzed image. |

The `document_field_classification` object contains a `result` object. Each object under the `result` object represents a field that was identified and populated on the analyzed image. These objects contain `data` arrays that are described below:

| Field | Type | Description |
| --- | --- | --- |
| `data` | array | The `data` array contains the string(s) assigned to the corresponding document field. |

### The ml_pii_result array

The `ml_pii_result` array may be included in your Smart Conversations callback.
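A consumer of PII results often needs to know which masks were applied. The following sketch extracts the mask labels from a redacted `masked` string; the brace-delimited label format follows the masking examples in this document:

```python
import re

# Sketch: extract the PII mask labels that appear in a redacted "masked"
# string. Labels are uppercase tokens in braces, e.g. {PERSON}, {EMAIL}.
def mask_labels(masked: str) -> list[str]:
    return re.findall(r"\{([A-Z_]+)\}", masked)
```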
An example of a Smart Conversations callback payload that includes the `ml_pii_result` array is below:

```json
{
  "app_id": "01FW3DP26MEN4JKSME44JDXWC4",
  "accepted_time": "2022-07-15T14:27:16.528875627Z",
  "event_time": "2022-07-15T14:27:15Z",
  "project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
  "smart_conversation_notification": {
    "contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
    "channel_identity": "alphanumeric_identity",
    "channel": "TELEGRAM",
    "message_id": "01G8143CS9ZJ62H1487GZB7Q2C",
    "conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
    "analysis_results": {
      "ml_pii_result": [
        {
          "message": "Hi! My name is John and I am a male person. I have a website with my projects that you can see here www.my-projects.com. This website has been live since 26/3/2022. The 3rd project is the best, it only cost me $5.99! If there are any questions, contact me any time after 9:00 AM. Here is my phone (123) 456 7899 and my email contact@example.com. Visa (0000 0000 0000 0000)",
          "masked": "Hi! My name is {PERSON} and I am a {GENDER} person. I have a website with my projects that you can see here {URL}. This website has been live since {DATE}. The {ORDINAL} project is the best, it only cost me {AMOUNT_OF_MONEY}! If there are any questions, contact me any time after {TIME} AM. Here is my phone number {PHONE_NUMBER} and my email {EMAIL}. Visa ({CARD_NUMBER})"
        }
      ]
    }
  },
  "message_metadata": ""
}
```

Each object in the `ml_pii_result` array is a JSON object that has the following structure:

| Field | Type | Description |
| --- | --- | --- |
| `message` | string | The message text that was analyzed. |
| `masked` | string | The redacted message text in which sensitive information was replaced with appropriate masks. A `MISC` mask is applied to a term that has been identified as PII, but with low confidence regarding which type of mask to assign. |

### The ml_offensive_analysis_result array

The `ml_offensive_analysis_result` array may be included in your Smart Conversations callback.
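Offensive-content results lend themselves to a simple moderation gate. The sketch below blocks a message when the evaluation is `UNSAFE` and the score clears a threshold; the `0.5` default is an illustrative choice, not an API-defined value:

```python
# Sketch: moderation gate over one ml_offensive_analysis_result entry.
# "evaluation" is either "SAFE" or "UNSAFE"; "score" is the likelihood
# that the evaluation is correct.
def should_block(entry: dict, threshold: float = 0.5) -> bool:
    return entry["evaluation"] == "UNSAFE" and entry["score"] >= threshold
```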
An example of a Smart Conversations callback payload that includes the `ml_offensive_analysis_result` array is below:

```json
{
  "app_id": "01FW3DP26MEN4JKSME44JDXWC4",
  "accepted_time": "2022-07-15T14:27:16.528875627Z",
  "event_time": "2022-07-15T14:27:15Z",
  "project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
  "smart_conversation_notification": {
    "contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
    "channel_identity": "alphanumeric_identity",
    "channel": "TELEGRAM",
    "message_id": "01G8143CS9ZJ62H1487GZB7Q2C",
    "conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
    "analysis_results": {
      "ml_offensive_analysis_result": [
        {
          "message": "My bloody phone number is (123) 456 7899",
          "evaluation": "UNSAFE",
          "score": 0.5250608921051025
        }
      ]
    }
  },
  "message_metadata": ""
}
```

Each `ml_offensive_analysis_result` is an array of JSON objects of the following structure:

| Field | Type | Description |
| --- | --- | --- |
| `message` | string | Either the message text or the URL of the image that was analyzed. |
| `evaluation` | string | A label, either `SAFE` or `UNSAFE`, that classifies the analyzed content. |
| `score` | float | The likelihood that the assigned `evaluation` represents the analyzed `message` correctly. `1` is the maximum value, representing the highest likelihood that the content of the `message` matches the `evaluation`, and `0` is the minimum value, representing the lowest likelihood that the content of the `message` matches the `evaluation`. |

### The ml_speechto_text_result array

The `ml_speechto_text_result` array may be included in your Smart Conversations callback.
An example of an `ml_speechto_text_result` array is below:

```json
{
  "ml_speechto_text_result": [
    {
      "url": "url_to_YOUR_audio_file.mp3",
      "results": [
        {
          "transcript": "transcript of the first audio section",
          "confidence": 0.8100824356079102
        },
        {
          "transcript": " transcript of the second audio section",
          "confidence": 0.920749843120575
        }
      ]
    }
  ]
}
```

Each object in the `ml_speechto_text_result` array represents an audio file and contains a `results` array. Each JSON object within that `results` array represents a transcribed section of the audio file and has the following structure:

| Field | Type | Description |
| --- | --- | --- |
| `transcript` | string | A transcript of the corresponding section of the analyzed audio file. |
| `confidence` | float | The likelihood that the analyzed audio aligns with the provided `transcript`. `1` is the maximum value, representing the highest likelihood that the analyzed audio matches the `transcript`, and `0` is the minimum value, representing the lowest likelihood that the analyzed audio matches the transcript. |
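To consume a transcription result, the per-section transcripts can be joined into one string. The following sketch does that and keeps the lowest section confidence as a conservative overall confidence (an illustrative policy; field names follow the example above):

```python
# Sketch: flatten one ml_speechto_text_result entry into a single transcript.
# The overall confidence is the minimum over all sections, i.e. the weakest
# link in the transcription.
def flatten_transcription(entry: dict) -> tuple[str, float]:
    sections = entry["results"]
    text = " ".join(s["transcript"].strip() for s in sections)
    confidence = min(s["confidence"] for s in sections)
    return text, confidence
```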