Navu is committed to employing AI in a safe and ethical way on behalf of our clients. To that end, Navu has carefully considered the guardrails needed in our application.
The primary use of AI in Navu is to provide a conversational interface through which web visitors can interact with the company, grounded in information accessible through the company’s website. Clients may also identify non-web content to supplement the material available to the AI. Navu additionally uses AI for a number of ancillary tasks such as summarizing pages and categorizing responses. In all cases, the information provided to the AI comes from client-selected content or from the AI conversations themselves. The output of the conversations is shared only with the specific web visitors and authorized company representatives.
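As a concrete illustration, an ancillary task like response categorization can be handled by a single, non-streamed model call. The sketch below is an example under stated assumptions, not Navu’s actual implementation: the model choice, category list, and prompt wording are all hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical topic list; a real deployment would use a client-specific taxonomy.
CATEGORIES = ["pricing", "product", "support", "company", "other"]

def categorize_response(question: str, answer: str) -> str:
    """Ask the model to file one question/response pair under a single topic."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Classify the Q/A pair into exactly one of: "
                        + ", ".join(CATEGORIES)
                        + ". Reply with the category name only."},
            {"role": "user", "content": f"Q: {question}\nA: {answer}"},
        ],
    )
    return completion.choices[0].message.content.strip()
```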
Navu is implemented using the OpenAI and Gemini APIs. Navu creates unique instructions and a dedicated reference content set for each client, and creates a separate thread for each visitor conversation, allowing no interaction between visitors. In order to offer a good user experience, Navu streams responses directly from the OpenAI APIs into the visitor’s browser.
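The following sketch shows what a per-visitor, streamed exchange against the OpenAI API could look like. It is a minimal illustration under stated assumptions: the instruction text, model name, and helper names are hypothetical, and the Gemini path is omitted.

```python
from openai import OpenAI

client = OpenAI()

# Assumed stand-in for the per-client instructions described above.
SYSTEM_INSTRUCTIONS = (
    "Answer only from the provided website content and cite the pages you use. "
    "If you are not highly confident, say you cannot answer. "
    "Respond in the language of the question, translating site content as needed."
)

# One message history per visitor; histories are never shared between visitors.
visitor_histories: dict[str, list[dict]] = {}

def stream_reply(visitor_id: str, question: str, site_context: str):
    """Stream a grounded answer chunk by chunk, e.g. into the visitor's browser."""
    history = visitor_histories.setdefault(
        visitor_id, [{"role": "system", "content": SYSTEM_INSTRUCTIONS}]
    )
    history.append({"role": "user",
                    "content": f"Website content:\n{site_context}\n\nQuestion: {question}"})
    stream = client.chat.completions.create(
        model="gpt-4o", messages=history, stream=True  # assumed model
    )
    answer = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        answer += delta
        yield delta  # forward each chunk to the browser as it arrives
    history.append({"role": "assistant", "content": answer})
```

Note that the assumed system instructions above also encode two of the review points discussed later: refusing to answer without high certainty, and responding in the language of the question.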
The first and most important guardrail is leveraging the extensive guardrails built into the underlying AI systems and models.
Second, Navu does not permit the underlying models to train on any of the customer data that Navu provides via their APIs. No visitor-provided information can “leak” into the LLMs via this path.
Third, Navu has built compartmentalization into the product. With each customer having a separate AI instance and each visitor a separate thread, there is no opportunity for leakage between customers or even between different web visitors.
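Conceptually, this compartmentalization means every piece of conversational state is reached only through a (customer, visitor) key, so no lookup can cross either boundary. A toy illustration with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerInstance:
    """One isolated AI configuration per customer (hypothetical structure)."""
    instructions: str
    content_index: list[str]  # only the client-selected content for this customer
    threads: dict[str, list[dict]] = field(default_factory=dict)  # keyed by visitor id

# There is no shared pool of threads or content: state can only be reached
# through a specific customer instance and then a specific visitor id.
instances: dict[str, CustomerInstance] = {}

def get_thread(customer_id: str, visitor_id: str) -> list[dict]:
    instance = instances[customer_id]
    return instance.threads.setdefault(visitor_id, [])
```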
Fourth, Navu makes all of the raw communication data available for review and post-processing by the customer. Navu provides a downloadable CSV containing each question and response, linked back to the full journey context in the Navu portal. The questions are organized into topics, and each response is tagged if it is considered non-responsive. Using this mechanism, Navu has a process for reviewing this material for opportunities for continuous improvement. Navu looks for the following and encourages our customers to do the same (a minimal sketch of such a review pass follows the list):
- Harmful, biased, or toxic responses: To date there has not been a single example of this, presumably due to the OpenAI filters already in place.
- Hallucination: There have been rare examples of hallucination, which were corrected through careful adjustments to the instructions; the instructions now err on the side of refusing to answer without high certainty.
- Citations: Every response to the web visitor includes citations to the relevant website content. Navu audits how many citations are included with each response and looks for cases where citations were missing.
- Privacy: Since the assistant has no access to private company information, there is little risk of leaking company data, and because of compartmentalization, no risk of leakage from any other visitor. If a visitor chooses to disclose PII in the conversation, that information is stored securely and is accessible only to authorized customer company personnel. The disclaimer in the sidebar warns users accordingly and includes a specific reference to the company’s privacy policy.
- Language: The assistant is designed to respond in different languages depending on the situation. The AI is instructed to respond in the language of the question, and it may translate website content in order to answer.
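To make the review pass referenced above concrete, here is a minimal sketch of post-processing the exported CSV for two of the checks: non-responsive answers and missing citations. The column names (question, response, topic, non_responsive, citations) are assumptions; consult the actual export for the real headers.

```python
import csv
from collections import Counter

def review_export(path: str) -> None:
    """Scan a Navu CSV export for non-responsive answers and missing citations."""
    topics: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            topics[row["topic"]] += 1
            if row["non_responsive"].strip().lower() == "true":
                print(f"Non-responsive: {row['question']!r}")
            if not row["citations"] or int(row["citations"]) == 0:
                print(f"Missing citations: {row['question']!r}")
    print("Questions per topic:", dict(topics))

# Example usage:
# review_export("navu_export.csv")
```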