
Ascend AI trust and safety

This page describes how Ascend handles data in connection with Otto and other AI-powered product features in the Ascend platform. It is intended for security, legal, and compliance reviewers.

For setup guides (API keys, providers), see AI providers and the Otto overview.


Purpose and scope

In scope

  • Otto — chat, inline code completion, error explanation, suggested prompts, pilot summaries, commit-message generation, custom agents, and related AI-assisted workflows in the Ascend web application.
  • Configuration of AI providers and models in your Instance.

Out of scope for this page

  • Terms of service, privacy policy, and data processing addenda between you and Ascend — those govern your overall use of the service; this page is a technical and product summary, not a contract.

Terms used here

Customer Data — Data you provide or generate in Ascend, including code, configs, chat content, and metadata about your projects.

Customer-managed provider — A third-party LLM or cloud AI service accessed using your credentials (API keys, service accounts, or IAM roles) stored in your vault.

Ascend-managed provider — LLM access provided through Ascend’s cloud relationships (for example, Ascend-managed Bedrock or Foundry), where Ascend’s infrastructure calls the provider on your behalf.

At a glance

  • Ascend does not use your Customer Data to train, retrain, or fine-tune proprietary foundation models operated by Ascend. Otto uses third-party foundation models via API.
  • Your Otto conversations and context are not exposed to other Ascend customers.
  • If you use a customer-managed provider, prompts and responses are processed under your account with that vendor, subject to your agreement and settings with them.
  • If you use an Ascend-managed provider, Ascend engages subprocessors under contractual terms that address appropriate use of data (including restrictions on use of your data to train models for general third-party consumption, as set out in Ascend’s agreements with those providers).
  • Retention has two parts: (1) Ascend stores Otto thread history for the life of the Instance unless Ascend’s product or operations policy changes; (2) LLM providers apply their own retention, logging, and observability rules to API traffic — review their documentation (for example OpenAI API data controls) especially when using BYOK.

How Otto works

  1. You interact with Otto in the Ascend UI (chat, completions, attachments, and tools).
  2. The Instance API assembles context (for example: open files, project metadata, errors, your message) and sends it to the LLM provider you have enabled.
  3. The provider returns a response; Ascend streams or displays it and may persist thread history for continuity and auditability.

Customer-managed vs Ascend-managed providers

Customer-managed — You configure OpenAI, Azure OpenAI, Google Vertex AI, Google AI Studio, AWS Bedrock (via your IAM role), or Microsoft Foundry using secrets and endpoints in your vault. Ascend’s services retrieve credentials at runtime and call the provider on your behalf. Data processing and retention are primarily governed by your relationship with that vendor.

Ascend-managed — Ascend operates the integration to certain providers from Ascend’s cloud accounts. Your prompts and responses still belong to your Instance; subprocessors and Ascend’s commitments are defined in your agreement with Ascend and applicable subprocessor / DPA documentation.

Supported providers and models are configured under Settings → AI & models in your Instance. See AI providers.

What data can be sent to an LLM?

Depending on what you open, attach, and ask, context sent to the model may include:

  • Chat messages and file attachments you provide
  • Code and configuration from open editors (for example SQL, Python, YAML)
  • Project and pipeline metadata (names, paths, errors, build/run context)
  • Outputs from Otto tools — for example pipeline or component details returned when Otto inspects your project, or rows and schemas from SQL or other queries Otto runs against your connections (when you allow those actions). Anything returned as a tool result can be sent to the LLM in follow-on turns.
  • User profile fields used for personalization (for example name, email)
  • Connection names or similar non-secret identifiers

Credential values (passwords, API secret values) are not sent to the model when collected through secure UI flows that write only to your vault; Otto may receive secret names or references for configuration tasks.

There is no automated PII redaction in prompts. Do not paste data you are not permitted to send to your chosen provider.
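As a rough illustration of the inclusion and exclusion rules above, the sketch below shows a hypothetical context payload (every field name here is invented for this example) in which secret values never appear — only connection names and vault references — alongside a simple sanity check of that property:

```python
# Hypothetical illustration of what a prompt context payload might
# include vs. exclude; field names are invented, not an Ascend schema.

context_payload = {
    "chat_message": "Help me tune this query",
    "open_editor": {"path": "transforms/orders.sql", "language": "sql"},
    "project_metadata": {"pipeline": "daily_orders", "last_error": "timeout"},
    "user_profile": {"name": "Ada", "email": "ada@example.com"},
    "connection": {"name": "warehouse_prod"},        # non-secret identifier
    "secret_ref": "vault://secrets/warehouse_prod",  # reference, not a value
}

def contains_secret_values(payload: dict) -> bool:
    """Sanity check: secret *values* (passwords, API keys) should never
    appear in the payload -- only names or vault references.  The marker
    strings are illustrative patterns, not a real redaction mechanism."""
    flat = str(payload)
    return any(marker in flat for marker in ("password=", "sk-", "AKIA"))

assert not contains_secret_values(context_payload)
```

Note that this check is deliberately naive: as the text above says, there is no automated PII redaction in prompts, so responsibility for what enters the payload rests with the user.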


What Ascend stores vs what stays under your control

Typically under your control or in your environment:

  • Data in your warehouse / data plane and credentials in your vault backends (GCP Secret Manager, Azure Key Vault, AWS Secrets Manager, etc.)
  • BYOK API traffic to the LLM vendor’s account
  • Pipeline source code in Git as you configure it

Processed or stored by Ascend:

  • Otto thread history and messages (durable event stream and local cache within the Instance footprint)
  • Attachments and blobs Otto stores for replay and analysis within Instance storage
  • Product telemetry (for example frontend performance and error reporting) and feature-flag evaluation — separate from LLM inference
  • Operational traces (metadata such as identifiers, token usage, latency) where Ascend or you configure observability backends

Pipeline execution vs. Otto: The pipeline runtime that executes your Flows on schedule or in response to events does not send warehouse table data or transformation results to LLMs as part of that execution path. Otto is separate: it can cause pipeline- or warehouse-related information to reach the LLM when it uses tools (such as fetching component or pipeline details) or when it runs queries against your data connections and the results are passed into the conversation. You can turn off those and other Otto tools (and related capabilities) if you want to prevent that class of exposure.


Model training and improvement

  • Ascend does not operate its own general-purpose foundation model trained on Customer Data.
  • Otto does not use your pipelines, SQL, or warehouse data to train Ascend-owned models.
  • Third-party providers may offer enterprise options (for example zero data retention, modified abuse monitoring) or default API logging; those are controlled by your contract and project settings with the vendor when you use customer-managed access.
  • Ascend may run internal quality evaluations of Otto using synthetic or controlled scenarios; that is separate from training a customer-facing foundation model on your production data.

Subprocessors and third-party services

When you enable AI features, LLM providers you select may process Customer Data as subprocessors of Ascend (for Ascend-managed routes) or directly as your vendors (for BYOK).

Examples of LLM-related vendors (depending on your configuration):

  • OpenAI — Chat, completions, and related APIs
  • Anthropic — Claude family models and related APIs (including when accessed through Amazon Bedrock or Microsoft Foundry)
  • Amazon Web Services (Bedrock) — Hosted models (Claude, others)
  • Google (Vertex AI, AI Studio) — Gemini and related APIs
  • Microsoft (Azure OpenAI, Foundry) — Hosted models (OpenAI, Claude, and others)

Other services involved in operating the product (not LLM inference) may include, for example, feature-flag, error-reporting, and frontend observability providers. For authoritative lists and updates, rely on Ascend’s subprocessor disclosure and your DPA, not this page alone.


Data retention and deletion

Otto conversations (Ascend)

  • Thread history is intended to remain available for the life of the Instance unless Ascend changes its product retention policy and notifies customers as required by agreement.
  • You can remove threads from the Otto UI; underlying durable records may persist in Ascend’s event and backup systems unless a hard-delete or retention program applies and is described in product or contract terms. Ask your Ascend contact if you need a specific retention or deletion commitment for compliance.

LLM providers

  • Providers retain API payloads, logs, traces, or dashboard data according to their policies. For OpenAI API usage, see Data controls in the OpenAI platform. Other vendors publish equivalent documentation.

Observability

  • If Ascend or you export traces to systems such as OTLP backends, retention follows those systems’ policies.

Security and credentials

  • AI provider secrets are referenced from your configured vault; Ascend stores vault references rather than raw provider API keys.
  • Traffic between Ascend and LLM APIs uses TLS.
  • Follow least privilege for IAM roles used with Bedrock or other cloud AI services.
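The vault-reference pattern in the first bullet can be sketched as follows. The in-memory vault and `resolve_secret` function are hypothetical stand-ins for a real backend such as GCP Secret Manager, Azure Key Vault, or AWS Secrets Manager; the point is that configuration holds only a reference, and the value is resolved at call time.

```python
# Hypothetical sketch: provider config stores a vault *reference*,
# never the raw API key; the value is fetched only when needed.

FAKE_VAULT = {"projects/demo/secrets/llm-api-key": "sk-redacted-example"}

def resolve_secret(reference: str) -> str:
    """Look up a secret value by reference at runtime.  In production
    this would call the configured vault backend over TLS."""
    return FAKE_VAULT[reference]

provider_config = {
    "provider": "openai",
    "api_key_ref": "projects/demo/secrets/llm-api-key",  # reference only
}

# The raw key exists in memory only for the duration of the call.
api_key = resolve_secret(provider_config["api_key_ref"])
```

Because the stored configuration never contains the key itself, rotating or revoking credentials happens entirely in your vault, without touching Ascend settings.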

Customer controls

  • Instance administrators enable or disable models and providers and set defaults in Settings → AI & models.
  • You can standardize on BYOK so inference billing and data-handling terms flow through your cloud or AI vendor account.
  • If you do not configure a provider, Otto’s model-backed features may be unavailable until configuration is complete.
  • Otto tools (including those that read pipeline structure or run warehouse queries) can be disabled or left unused so that class of data is not retrieved for the model. Specific toggles and approval flows depend on product settings (for example query execution mode).

Acceptable use and limitations

  • You are responsible for lawful use of AI features and for reviewing model outputs before relying on them in production (especially SQL, infrastructure changes, or security-sensitive code).
  • AI features can hallucinate or misunderstand context; they augment, not replace, human judgment.

  • Input and output you obtain through Otto remain subject to your agreement with Ascend and applicable law. Ascend does not claim ownership of your pipeline code or your prompts solely because Otto assisted in creating them — confirm details in your Master Subscription Agreement or order form.
  • This page does not amend your contract. In case of conflict, the signed agreement prevails.

Frequently asked questions

Data governance

Does Ascend use my data to train AI models?
Ascend does not use your Customer Data to train proprietary foundation models. Otto calls third-party models via API; those vendors’ training and logging practices are governed by their terms and your configuration (especially under BYOK).

Is my data visible to other Ascend customers?
No. Otto context and threads are scoped to your Instance.

Does pipeline or warehouse data get sent to LLMs?
Scheduled and event-driven pipeline execution does not send warehouse data to LLMs. Otto can still cause pipeline details, component definitions, or query results (including warehouse rows) to be sent to the LLM when it uses tools that fetch that information or when it runs queries you allow. Editor contents and chat you provide are also in scope. You can disable Otto tools and related capabilities to limit exposure.

Providers and BYOK

Who is data controller / who do I contract with?
For BYOK, your relationship with the LLM vendor is primary for that traffic. For Ascend-managed inference, Ascend’s subprocessor and DPA framework applies as described in your Ascend agreement.

Retention and deletion

How long are chats kept?
Ascend retains Otto history for the life of the Instance unless superseded by a published policy or your enterprise agreement. UI deletion may not immediately purge all durable copies.

How long do OpenAI / Anthropic / Google / Microsoft keep API data?
See each vendor’s documentation (for OpenAI, start with its “your data” page).

Contact

Who can answer follow-up questions?
Use your Ascend account team or the Support page (Slack and email). For contractual definitions (subprocessors, DPA, HIPAA, etc.), request the current legal packet from Ascend.