Navu AI Guardrails and Protections

Navu is committed to employing AI in a safe and ethical way on behalf of our clients.  To that end, we have carefully considered the guardrails needed in our application.


The primary use of AI in Navu is to provide a chat-based interface through which web visitors can ask questions about a company, answered from the public information on that company's website.  We also use AI for a number of ancillary tasks such as summarizing pages and categorizing responses.  In all cases, the only sources of information are the company's public web content and the content of the AI conversations themselves.  The output of the conversations is shared only with the specific web visitors and authorized company representatives.

Navu is implemented using OpenAI’s APIs.  We employ a combination of the assistants, vector storage, file storage, embedding, and chat completion APIs.  We do not host or train our own models.  Today we use a combination of the gpt-4o and gpt-4o-mini models for different parts of our solution.  The primary API is the Assistants API, which handles retrieval-augmented generation (RAG) to manage the context and response generation for the chats.  We create a separate assistant and vector store for each of our customers, with nothing shared between customers, and a unique set of instructions for each assistant.  We create a separate thread for each visitor conversation, so there is no interaction between visitors.  To offer a good user experience, we stream responses directly from the OpenAI APIs into the visitor’s browser.
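The streaming relay described above can be sketched in a few lines.  This is an illustration only: `model_chunks` stands in for the token deltas streamed back by the OpenAI API, and the server-sent-events framing is one common way to forward them to a browser, not necessarily the exact transport Navu uses.

```python
from typing import Iterable, Iterator

def relay_stream(model_chunks: Iterable[str]) -> Iterator[str]:
    """Relay model output to the visitor's browser as server-sent events.

    Each chunk from the model stream is forwarded immediately, so the
    visitor sees the answer appear as it is generated rather than
    waiting for the full response.
    """
    for chunk in model_chunks:
        # One SSE "data:" frame per chunk, flushed straight to the browser.
        yield f"data: {chunk}\n\n"
    # Conventional end-of-stream sentinel so the client knows to stop reading.
    yield "data: [DONE]\n\n"

# A fake stream standing in for a real API response:
frames = list(relay_stream(["Navu ", "answers ", "from your site."]))
```

Because the relay is a generator, nothing is buffered server-side; latency to the first visible word is just the model's time-to-first-token.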

Our first guardrail is also our most important: we take advantage of the extensive guardrails built into the underlying OpenAI systems and models.

Second, we do not permit OpenAI to train on any of the customer data that we provide via their API.  No visitor-provided information can “leak” into LLMs via this path.

Third, we build in compartmentalization.  Because each customer has a separate assistant (and vector store) and each visitor has a separate thread, there is no opportunity for leakage between customers, or even between different web visitors.
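The compartmentalization scheme can be modeled in plain Python.  This is a minimal sketch, not Navu's implementation: the class and identifier names are illustrative, and the string IDs stand in for the assistant, vector store, and thread IDs the OpenAI API would return.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerScope:
    """Per-customer resources.  Nothing here is shared across customers."""
    assistant_id: str
    vector_store_id: str
    threads: dict = field(default_factory=dict)  # visitor_id -> thread_id

class Registry:
    """Maps customers to their isolated scopes and visitors to their threads."""

    def __init__(self) -> None:
        self._customers = {}  # customer_id -> CustomerScope

    def add_customer(self, customer_id: str, assistant_id: str, vector_store_id: str) -> None:
        self._customers[customer_id] = CustomerScope(assistant_id, vector_store_id)

    def thread_for(self, customer_id: str, visitor_id: str) -> str:
        # A thread is created inside exactly one customer's scope; a visitor
        # ID on customer A can never resolve into customer B's resources.
        scope = self._customers[customer_id]
        return scope.threads.setdefault(
            visitor_id, f"thread_{customer_id}_{visitor_id}"
        )

registry = Registry()
registry.add_customer("acme", "asst_A", "vs_A")
registry.add_customer("globex", "asst_B", "vs_B")

# The same visitor ID yields distinct threads under distinct customers.
t_acme = registry.thread_for("acme", "visitor1")
t_globex = registry.thread_for("globex", "visitor1")
```

The key property is that every lookup is keyed by customer first, so cross-customer (and cross-visitor) access paths simply do not exist in the data model.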

Fourth, we make all of the raw communication data available for review and post-processing by the customer.  We provide a downloadable CSV containing each question and response, linked back to the full journey context in the Navu portal.  Questions are organized into topics, and each response is tagged if it is considered non-responsive.  Using this data, we run a review process looking for opportunities for continuous improvement.  We look for the following, and encourage our customers to do the same:

  • Harmful, biased, or toxic responses:  To date we have not found a single example of this, presumably due to the OpenAI filters already in place.
  • Hallucination:  We have seen rare examples of hallucination, which we were able to correct through careful adjustments to our instructions; the instructions now err on the side of refusing to answer when certainty is low.
  • Citations:  Every response to the web visitor includes citations to the relevant website content.  In our auditing we report on how many citations are included with each response and look for cases where citations are missing.  We expect those cases to consist primarily of non-responsive answers (e.g., “Sorry.  There is no relevant content available on the website to answer this question.”).
  • Regulatory compliance:  Since all responses are based on content in the company website, we are unlikely to stray beyond the bounds of regulatory limits.  Nevertheless, in sensitive sectors (such as healthcare) we are particularly vigilant.
  • Privacy:  Since the assistant has no access to private company information, there is little risk of leaking company information; because of compartmentalization, there is likewise no leakage from any other visitor.  If a visitor chooses to disclose PII in the conversation, this information is stored securely and is accessible only to authorized customer company personnel.  A specific reference to the company’s privacy policy is included in the disclaimer on the sidebar, warning users accordingly.
  • Language:  The assistant is designed to respond in different languages depending on the situation.  We instruct the assistant to respond in the language of the question, translating website content as needed to answer it.  This works well most of the time; on rare occasions the assistant chooses the wrong language, and this is an area we are actively improving.
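The citation audit over the downloadable CSV can be sketched as follows.  The column names (`question`, `response`, `citation_count`, `non_responsive`) are illustrative assumptions, not Navu's actual export schema; the point is the check itself: every zero-citation row should correspond to a non-responsive answer.

```python
import csv
import io

# Hypothetical two-row export; real files come from the Navu portal.
SAMPLE_CSV = """question,response,citation_count,non_responsive
What do you sell?,We offer industrial widgets [1].,1,no
Who is your CEO?,"Sorry.  There is no relevant content available on the website to answer this question.",0,yes
"""

def audit_citations(csv_text: str):
    """Return (question, was_tagged_non_responsive) for rows with no citations.

    Zero-citation rows that are NOT tagged non-responsive are the
    interesting cases: answers given without grounding in the website.
    """
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if int(row["citation_count"]) == 0:
            flagged.append((row["question"], row["non_responsive"] == "yes"))
    return flagged

flagged = audit_citations(SAMPLE_CSV)
```

In this sample the only uncited row is the non-responsive one, which is the expected healthy pattern; any `False` in the second tuple position would warrant manual review.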

Trying Navu is as easy as 1, 2, 3!

Step 1: Tell us your email and domain

Step 2: We send you a link to demo your custom Sidebar

Step 3: Publish your Sidebar and see the results free for 14 days!

Get Started