Version: 3.0.0

📢 What's new

🗓️ Week of 2025-05-19

🚀 Features

  • 🤖🐍 New Otto capabilities:
    • Otto can perform multiple checks on project files
      • Linting (and fixing) of YAML files
      • Connection testing (and error fixing)
    • Otto can list and explore your Connections to help create read Components
    • Otto can now run Flows, as well as individual Components, wait for completion, and address any errors.
  • Expanded Component dependency support to all Component types.
    • You can now add any set of Components as non-data graph dependencies to any other Component.
  • Git status and coloring added to the tab bar and file browser.
  • Configurable retry logic for all Component types.
  • Ability to pause all Automations in a Deployment.
Reminder

If you want help trying our latest features, try asking Otto!

🌟 Improvements

  • Reduced column widths on the runs table, which now remembers your chosen widths.
  • Streamlined user experience across Automation forms.
  • Clarified Component build errors to pinpoint the problematic Component and enhanced error handling to catch a wider range of exceptions.

🛠️ Bug fixes

  • Ensured reconnection when retrying after a Databricks (DBX) Invalid session error.
  • Fixed file refresh to update cached and open files.
  • Resolved issue with the repo save button being disabled when no updates were made.
  • Fixed intermittent error when listing branches.
  • Fixed zoom functionality for individual nodes when viewing the expanded Application Component graph tab.
  • Avoided default catalog when creating a connection to Databricks.
  • Corrected incorrect state rendered in the UI when saving, building, and running in quick succession.
  • Fixed MySQL engine parameter for SSL=True case causing an exception.
  • Used fully qualified name for merge table references.
  • Fixed build failures caused by undetected out-of-memory (OOM) conditions.
  • Empty files no longer error on project build.

Agentic Data Engineering

Ascend is the industry's first Agentic Data Engineering platform, empowering teams to build and manage data pipelines faster, more safely, and at scale. With Ascend's platform, engineers benefit from the assistance of context-aware AI agents that deeply understand their data pipelines.

Meet Otto, the intelligent data engineering Agent designed to eliminate repetitive tasks, accelerate innovation, and enable faster development cycles.

Integration with Ascend platform

Otto works seamlessly with other aspects of the Ascend platform:

  • Chat with your stack: Engage in natural language conversations with Otto about your entire data infrastructure. Ask questions about data lineage, Component configurations, or pipeline performance, and receive contextual answers that incorporate knowledge of your specific environment.

  • In-line code suggestions: Receive intelligent recommendations as you write SQL, Python, or YAML. Otto analyzes your code patterns, data structures, and pipeline context to suggest optimized transformations, efficient joins, and best practices for your specific data plane.

  • Background agents: Leverage autonomous agents that continuously monitor your data pipelines, detect anomalies, and proactively suggest optimizations. These agents work silently in the background, identifying performance bottlenecks, data quality issues, and optimization opportunities without manual intervention.

  • Custom agents (coming soon): Create specialized AI assistants tailored to your organization's unique needs. Configure agents with specific business logic, data domain expertise, and compliance requirements to automate complex tasks across your data engineering workflows.

By understanding the relationships between these elements, Otto provides contextual assistance that considers your entire data engineering environment.

With Otto, you can:

  • Understand data lineage across your entire pipeline with column-level tracing
  • Transform Components between frameworks with automatic code migration
  • Implement robust data quality tests with intelligent recommendations

Discover these capabilities and many more!

➡️ Ready to Agentify your data engineering experience? Schedule a demo to see Ascend in action.

Ascend Gen3

☁️ Gen3 is a ground-up rebuild of the Ascend platform, designed to give you more control, greater scalability, and deeper visibility across your data workflows. It's everything you already love about Ascend – now faster, more flexible, and more extensible.

  • Ascend's new Intelligence Core combines metadata, automation, and AI in a layered architecture, empowering all teams to build pipelines faster and significantly compress processing times.

  • Git-native workflows bring version control, collaboration, and CI/CD alignment to all teams through our Flex Code architecture, empowering both low-code users and developers to contribute.

  • Observability features expose detailed pipeline metadata so teams have deeper visibility into their system to diagnose problems quickly, reduce manual investigation, and optimize system behavior.

  • Modular architecture empowers data and analytics teams to manage increasingly large and complex pipelines with improved performance and maintainability.

  • Standardized plugins and extension points enable data platform teams to customize and automate workflows more easily.

➡️ Ready to explore? Join the Gen3 public preview to get early access.

Ascend Gen3 Demo

🚀 Features

Explore the latest enhancements across our platform, from improved system architectures to optimized project management. This section highlights major new functionalities designed to boost performance and flexibility.

Systems & architecture

Explore the foundational improvements in our system's architecture, designed to enhance collaboration, resource management, and cloud efficiency.

  • Version control-first design – Collaborate and track changes with Git-native workflows
  • Project-based organization – Organize and manage resources with intuitive, project-centric workflows
  • Optimized cloud footprint – Reduce infrastructure usage with a centralized UI and a lightweight, scalable backend
  • Event-driven core – Trigger custom workflows using system-generated events
  • Native Git integration – Automate CI/CD pipelines with built-in support for your Git provider

Project & resource management

Effortlessly create, share, configure, and deploy projects with streamlined processes, allowing you to spend less time on administration and more on engineering innovation.

  • Project structure – Organize and manage your data projects with improved structure and clarity
  • Environments – Configure and maintain development, staging, and production environments with software development best practices
  • Parameterized everything – Reuse and adapt pipelines with flexible, comprehensive parameterization
  • Deployments – Roll out pipelines consistently across environments with simplified deployment workflows
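The "parameterized everything" idea above can be sketched with plain template substitution: one pipeline definition, rendered per environment. This is a minimal illustration; the parameter names and the YAML-ish template are hypothetical, not Ascend's actual spec.

```python
from string import Template

# Illustrative sketch only: one pipeline definition reused across
# environments by substituting parameters at deploy time.
template = Template(
    "connection: ${warehouse}\n"
    "schema: ${env}_analytics\n"
    "full_refresh: ${full_refresh}\n"
)

def render(env_params):
    """Render the pipeline definition for one environment."""
    return template.substitute(env_params)

dev = render({"warehouse": "dev_wh", "env": "dev", "full_refresh": "true"})
prod = render({"warehouse": "prod_wh", "env": "prod", "full_refresh": "false"})
print(dev)
```

The same template yields a dev and a prod configuration that differ only in the injected parameters.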

Industry-leading security

Protect your data and resources with enterprise-grade security features, ensuring comprehensive access control and secrets management across your organization.

  • Enterprise-grade authentication – Secure instances, projects, and pipelines with OpenID Connect (OIDC)
  • Centralized vault system – Manage secrets, credentials, and sensitive configurations securely across your entire platform

Builder experience

Discover how our builder experience enhancements simplify Component creation and improve user interaction with a modern interface.

  • Simplified Component spec - Write Components with less boilerplate and more intuitive syntax
  • Components:
    • Partition strategies - Flexible data partitioning for optimal performance
    • Data & job deduplication - Intelligent handling of duplicate data and operations
    • Incremental Components - Process only new or changed data efficiently
    • Views - Create and manage virtual tables efficiently
    • Generic tasks - Support for versatile task types, including SQL and Python scripts for complex operations
  • Data applications - Build complex data transformations from simpler reusable building blocks and templates
  • Testing support - Test Components easily with built-in sample datasets
  • Modern interface - Navigate an intuitive UI designed for improved productivity
  • Dark mode - Switch between light and dark themes with enhanced visual comfort and accessibility
  • Navigation - Access projects, Components, and resources through streamlined menus
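The "Incremental Components" bullet above describes processing only new or changed data. A common way to implement that is a high-watermark pattern, sketched below with hypothetical stand-ins for the component and its state store (this is not Ascend's actual API):

```python
# Minimal high-watermark sketch of incremental processing: only rows with
# updated_at greater than the last stored watermark are handled.

def run_incremental(rows, state):
    """Process only rows newer than the stored watermark, then advance it."""
    watermark = state.get("watermark", 0)
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    if new_rows:
        state["watermark"] = max(r["updated_at"] for r in new_rows)
    return new_rows

state = {}
rows = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}]
first = run_incremental(rows, state)   # first run: everything is "new"
rows.append({"id": 3, "updated_at": 30})
second = run_incremental(rows, state)  # second run: only the new row
print(len(first), len(second))
```

On the first run both rows are processed; on the second, only the row added after the watermark advanced.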

Navigation and building experience

Data integration

Our data integration improvements ensure seamless connectivity and performance across major platforms, enhancing your data processing capabilities.

Performance improvements across all data planes:

  • 70% reduction in infrastructure costs*
  • 4x faster ingestion speed*
  • 2x faster runtime execution*
  • 10x increase in concurrent processing*

Data planes: Enhanced connectivity and performance across major platforms

Snowflake - Full platform integration including:

  • SQL support
  • Snowpark for advanced data processing
  • Complete access to Snowflake Cortex capabilities

BigQuery - Comprehensive SQL support including:

  • BigQuery SQL integration
  • Built-in support for BigQuery AI features

Databricks - Complete lakehouse integration featuring:

  • SQL and PySpark support
  • Full access to AI/ML models in data pipelines
  • Support for both SQL warehouses and clusters
  • Unified compute management

* Performance metrics based on comparative analysis between Gen2 and Gen3 platforms

Data quality

Enhance your data quality management with automated checks and customizable validation rules, ensuring data integrity across your projects.

  • Automated quality gates - Validate data within Components, including read and write Components
  • Reusable rule library - Create and share standardized data quality rules across your organization
  • Python-based validation - Write custom data quality checks using familiar Python syntax
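To make the "Python-based validation" idea above concrete, here is a minimal sketch of two reusable rules run against sample rows. The rule functions, result shape, and data are illustrative assumptions, not Ascend's actual rule library:

```python
# Illustrative sketch of reusable, Python-based data quality rules.
# The function names and result format are hypothetical, not Ascend's API.

def not_null(rows, column):
    """Fail if any row has a null value in the given column."""
    bad = [i for i, row in enumerate(rows) if row.get(column) is None]
    return {"rule": f"not_null({column})", "passed": not bad, "failing_rows": bad}

def in_range(rows, column, lo, hi):
    """Fail if any non-null value falls outside [lo, hi]."""
    bad = [i for i, row in enumerate(rows)
           if row.get(column) is not None and not (lo <= row[column] <= hi)]
    return {"rule": f"in_range({column})", "passed": not bad, "failing_rows": bad}

rows = [
    {"order_id": 1, "amount": 42.0},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": -5.0},
]

results = [not_null(rows, "amount"), in_range(rows, "amount", 0, 10_000)]
for r in results:
    print(r["rule"], "PASS" if r["passed"] else f"FAIL on rows {r['failing_rows']}")
```

Because each rule is an ordinary function returning a structured result, rules like these can be collected into a shared library and applied across projects.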

Flow management

Optimize your data flows with advanced planning and execution capabilities, supporting high-frequency and concurrent processing.

  • Gen3 flow planner & optimizer – Improve pipeline performance with intelligent planning and execution
  • Flow runs – Manage and monitor individual pipeline executions with enhanced controls
  • Concurrent & high-frequency flow runs – Execute flows in parallel and at higher frequencies
  • Semantic partition protection – Preserve computed results across code changes to avoid unnecessary reprocessing
  • Optional smart backfills – Backfill data flexibly with advanced control over reprocessing
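One way to reason about semantic partition protection, as described above, is fingerprinting: a partition is recomputed only when the fingerprint of its inputs changes. The scheme below is an illustrative sketch, not Ascend's internal mechanism:

```python
import hashlib

# Illustrative sketch: recompute a partition only when the combined
# fingerprint of its code and data changes. Not Ascend's internal scheme.

def fingerprint(code: str, partition_data: str) -> str:
    return hashlib.sha256((code + "\x00" + partition_data).encode()).hexdigest()

def plan_backfill(code, partitions, cache):
    """Return partition keys whose fingerprints differ from the cache."""
    stale = []
    for key, data in partitions.items():
        fp = fingerprint(code, data)
        if cache.get(key) != fp:
            stale.append(key)
            cache[key] = fp
    return stale

cache = {}
partitions = {"2025-05-01": "a", "2025-05-02": "b"}
first_run = plan_backfill("SELECT 1", partitions, cache)   # first run: all stale
partitions["2025-05-02"] = "b2"                            # one partition's data changes
second_run = plan_backfill("SELECT 1", partitions, cache)  # only that partition reruns
print(first_run, second_run)
```

Unchanged partitions keep their cached results, so only the partition whose inputs changed is scheduled for recomputation.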

Automation

Leverage our automation features to create dynamic workflows triggered by real-time events, enhancing operational efficiency.

  • Event-driven extensibility - Automate workflows dynamically based on real-time platform events and triggers
  • Customizable event triggers - Create custom automation triggers, including sensors and events
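The event-driven triggers above follow a familiar publish/subscribe shape. Here is a minimal sketch; the event names, decorator, and handlers are hypothetical, not Ascend's actual trigger API:

```python
# Tiny event-dispatcher sketch for event-driven automation.
# Event names and handlers are illustrative, not Ascend's trigger API.

handlers = {}

def on(event_type):
    """Register a handler function for a platform event type."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type, payload):
    """Invoke every handler registered for the event, collecting results."""
    return [fn(payload) for fn in handlers.get(event_type, [])]

@on("flow.run.failed")
def notify(payload):
    return f"alerting on-call: flow {payload['flow']} failed"

@on("flow.run.failed")
def retry(payload):
    return f"scheduling retry for flow {payload['flow']}"

result = emit("flow.run.failed", {"flow": "daily_sales"})
print(result)
```

A single emitted event fans out to every registered handler, which is what lets one platform event drive several automated reactions.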

Observability

Gain comprehensive insights into your data operations with real-time and historical observability, ensuring full transparency and control.

  • Unified metadata stream & repository - Centralize and track metadata across all pipelines
  • Real-time & historical monitoring - Access metadata on pipeline runs and performance history, including:
    • Live monitoring of active pipeline runs
    • Full execution history with smart commit summaries
    • Performance analytics and trend analysis
    • Complete troubleshooting visibility

AI-powered assistant (Otto 🐍)

Experience the power of AI with Otto, our assistant that helps you create, optimize, and document your data pipelines effortlessly.

  • Component creation & editing - Generate new pipeline Components or modify existing ones with natural language
  • Smart updates & recommendations - Receive intelligent suggestions for pipeline optimization, performance improvements, and descriptive commit messages
  • Automated documentation - Automatically generate and maintain comprehensive documentation for pipelines and Components

🌟 Improvements

This section highlights enhancements in Component functionality, connector improvements, and overall system optimization to boost performance and usability.

Component improvements

Data types

  • Timestamp TZ/NTZ - Enhanced timezone handling and support
  • Variant - Flexible data type for semi-structured data
  • JSON - Native JSON data type support

Connector improvements

Read connectors

  • Automatic schema field detection - Intelligent schema inference for all connectors
  • Customizable schema options - Flexible schema configuration options
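Automatic schema field detection, as described above, can be sketched by sampling records and inferring a type per field, widening on conflicts. The type names and widening rules below are illustrative assumptions, not the connectors' actual inference logic:

```python
# Illustrative schema-inference sketch: sample records, infer one type per
# field, widen integer->float on mixed numerics and fall back to string.

TYPE_NAMES = {bool: "boolean", int: "integer", float: "float", str: "string"}

def infer_schema(records):
    schema = {}
    for rec in records:
        for field, value in rec.items():
            if value is None:
                continue  # nulls carry no type information
            t = TYPE_NAMES.get(type(value), "string")
            if field not in schema:
                schema[field] = t
            elif schema[field] != t:
                if {schema[field], t} == {"integer", "float"}:
                    schema[field] = "float"   # widen mixed numerics
                else:
                    schema[field] = "string"  # widen anything else
    return schema

records = [
    {"id": 1, "price": 9.99, "name": "widget"},
    {"id": 2, "price": 10, "name": None},
]
result = infer_schema(records)
print(result)
```

Note how `price` widens to float because the sample mixes integers and floats, while the null `name` in the second record does not disturb the inferred string type.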

Files

  • Advanced filtering - Filter files by last modified, created at, and custom combinations
  • Archive support - Native support for zip & tar archives
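Filtering files by last-modified time, as described above, can be sketched with the standard library. The function name, glob pattern, and cutoff are illustrative assumptions, not the connector's actual configuration:

```python
import os
import tempfile
import time
from pathlib import Path

# Illustrative sketch of read-connector file filtering: keep only files
# matching a glob that were modified after a cutoff timestamp.

def list_new_files(root, pattern="*.csv", modified_after=0.0):
    """Return paths under root matching pattern, modified after the cutoff."""
    return sorted(
        p for p in Path(root).rglob(pattern)
        if p.stat().st_mtime > modified_after
    )

with tempfile.TemporaryDirectory() as d:
    old = Path(d, "old.csv")
    old.write_text("stale")
    os.utime(old, (0, 0))            # pretend it was last modified in 1970
    new = Path(d, "fresh.csv")
    new.write_text("recent")
    cutoff = time.time() - 3600      # keep files from the last hour
    names = [p.name for p in list_new_files(d, "*.csv", cutoff)]
print(names)
```

Combining a modified-after cutoff with a pattern like this is the essence of the "custom combinations" of filters mentioned above.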

Warehouses

  • Enhanced data ingestion - Moving beyond single-table limitations, you can now use SQL queries to filter columns, rows, and join datasets at the source
  • Multi-table support - Easily ingest multiple tables in a single Component without complex query writing

Databases

  • Replication strategies - Advanced options for data replication
  • Materialization strategies - Flexible approaches to data materialization

🗣️ Community updates

Discover ready-to-use examples, comprehensive documentation, and resources created by and for the Ascend community to accelerate your development journey.

Sample projects

๐Ÿ Otto's Expeditions - Ready-to-use examples for:

Documentation

New content structure

💬 Terminology changes

We've updated some terms to better reflect their functionality:

  • Dataflow ➡️ Flow
  • Partitioned Component ➡️ Smart table

🔮 Coming soon

Stay tuned for upcoming innovations, including enhanced AI tools and comprehensive documentation improvements that will streamline your workflow.

✨ Expanded AI capabilities

  • Coding copilot - Intelligent code suggestions and completions
  • Agentic data engineering - Automated pipeline creation and optimization
  • AI-assisted migration - Use Otto to migrate from legacy data tooling like dbt or Airflow with customizable AI agents

๐Ÿข Enterpriseโ€‹

  • Organization management - Hierarchical team structures with flexible resource sharing and access controls
  • Enterprise identity & access - Fine-grained access and permission controls
  • External vault integration - Connect to your organization's existing secret management system