Version: 3.0.0

📢 What's new

reminder

If you want help trying our latest features, try asking Otto, your intelligent data engineering Agent!

๐Ÿ—“๏ธ Week of 2025-06-09โ€‹

๐Ÿ› ๏ธ Bug fixesโ€‹

  • Switching Projects in your Workspace settings now clears out old run history, so you won't see ghost runs from your previous Project haunting your current workspace.
  • The status bar on the Explore tab was showing outdated information and wouldn't refresh properly - now it updates in real-time so you always see current pipeline status.
  • Workspace settings URLs were broken and wouldn't take you to the right page - now clicking these links gets you exactly where you need to go.
  • Fixed a bug where the size of a Workspace or Deployment was not being respected - now your size settings stick and your Deployments run with the resources you specified.
  • "Flow runner not found" errors were vague and hard to debug - now error messages include detailed pod status information so you can quickly identify and fix issues such as OOM, timeout, unexpected states, API/connection errors, and more.
  • Flow runners were staying active even when idle, wasting your compute resources - now they automatically shut down when not in use, keeping your costs down and resources available.
  • Git operations could hang indefinitely when repositories became unresponsive - now they timeout automatically so you're never left hanging when a repo goes offline.
  • Application parameters weren't compatible with Incremental logic in SQL transforms, breaking dynamic data processing - now they work together seamlessly so your Incremental pipelines can use parameterized logic.
  • Partition filter and partition value analyzers would interfere with each other during concurrent backfills, causing data processing conflicts - now they run independently so multiple backfills can process different partitions simultaneously without issues.
  • Text parser would crash when given string file paths in Arrow and pandas reads, failing your data ingestion - now it handles string paths gracefully so your file loading works reliably regardless of format (path vs. file object).
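
The string-path fix above comes down to normalizing the two kinds of input a reader can receive: a path or an open file object. A minimal sketch of that defensive pattern (a hypothetical helper for illustration, not Ascend's actual parser code):

```python
import io
import os


def read_text_source(source):
    """Read text from either a filesystem path or an open file object.

    Illustrative sketch of path-vs-file-object normalization; not
    Ascend's actual text parser.
    """
    if isinstance(source, (str, os.PathLike)):
        # String (or Path) input: open the file ourselves.
        with open(source, "r", encoding="utf-8") as f:
            return f.read()
    # Anything else is assumed to be a readable file-like object.
    return source.read()
```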

๐Ÿ—“๏ธ Week of 2025-06-02โ€‹

🚀 Features

  • ๐Ÿค–๐Ÿ Otto can now pull information about SQL linting via SQLFluff, ensuring your queries are as polished as a shiny new penny.
  • Command-K search now lets you zip straight to settings pages, making navigation as swift as a cheetah on roller skates.

🌟 Improvements

  • The UI now boasts improved rendering of errors and warnings, making troubleshooting as smooth as a freshly paved road.
  • Polars DataFrames are now supported as a valid input/output format in Python transforms, adding more flexibility to your data wrangling toolkit.
  • Parquet processing now normalizes column names to avoid any pesky case conflicts.
  • Automation sensors now come with timezone support, making scheduling as precise as a Swiss watch.
  • Automation failed status is now shown when there is no associated flow run, keeping you informed at all times.
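
The Parquet column-name normalization mentioned above boils down to treating `ID` and `id` as the same column. A minimal sketch of the idea (illustrative only; not Ascend's actual reader logic):

```python
def normalize_column_names(names):
    """Lowercase column names and suffix case-only duplicates.

    Illustrative sketch of case-conflict normalization, not Ascend's
    actual Parquet processing.
    """
    seen = {}
    result = []
    for name in names:
        key = name.lower()
        count = seen.get(key, 0)
        # First occurrence keeps the lowercased name; later case-only
        # duplicates get a numeric suffix so no two columns collide.
        result.append(key if count == 0 else f"{key}_{count}")
        seen[key] = count + 1
    return result
```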

๐Ÿ› ๏ธ Bug fixesโ€‹

  • Column case sensitivity for Python Read Components on Snowflake is now handled consistently, ensuring your data stays reliable.
  • Profile name, project UUID, and path are now properly persisted into build info, keeping your records accurate and complete.
  • Transformations with upstream Incremental Read Components now resolve to the correct table name, keeping your data pipelines on track.
  • Unicode and emoji are now handled gracefully, ensuring your text displays beautifully.
  • Component/Run state no longer carries over from current builds into historical ones, keeping your build history clean and accurate.
  • Timestamps across the UI now include timezone information, ensuring your time references are always precise.
  • Record columns can be resized in the Explore view, making data management a breeze.
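
The timezone fix above reflects a general rule: render timestamps with an explicit zone rather than bare wall-clock time. A small sketch of that convention (assuming naive timestamps are UTC, which is an assumption for illustration, not Ascend's documented behavior):

```python
from datetime import datetime, timezone


def format_timestamp(ts: datetime) -> str:
    """Format a timestamp with an explicit timezone label.

    Naive datetimes are assumed to be UTC here — an illustrative
    convention, not necessarily the one the Ascend UI uses.
    """
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)
    return ts.strftime("%Y-%m-%d %H:%M:%S %Z")
```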

๐Ÿ—“๏ธ Week of 2025-05-26โ€‹

🚀 Features

  • Your code editor just got smarter! INI, TOML, and SQLFluff files now shine with beautiful syntax highlighting.
  • AWS Managed Vault joins the party as your Instance or Environment Vault, giving you more flexibility in secret management.
  • ๐Ÿค–๐Ÿ New Otto capabilities:
    • Otto's validation powers have expanded! Otto can now validate YAML files and test your Connections to catch any issues.
    • Otto's gotten smarter about which tools to bring to the party! He'll automatically have the perfect toolkit ready for whatever environment you're working in (Workspace vs. Deployment).

🌟 Improvements

  • Python interfaces in Simple Applications are now streamlined, making it a breeze to migrate your Python Components from regular Flows.
  • Connection test errors got a helpful sidekick: a shiny new copy button that makes sharing error details effortless.

๐Ÿ› ๏ธ Bug fixesโ€‹

  • Multi-line comments in SQL and SQL Jinja files now parse correctly and consistently.
  • The great .yaml vs .yml debate is over! Both file extensions now work seamlessly without compatibility hiccups.
  • Error stack traces were getting a bit verbose and impacting Flow Runs, so we optimized them to be more concise and actionable.
  • PostgreSQL connections now handle SSL verification reliably and gracefully manage empty port configurations.
  • Clickable error links in Component cards within Deployments are back in action, ready to guide you straight to the problem.
  • Arrow keys now work perfectly when renaming files, making the experience feel smooth and polished.
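
The `.yaml`/`.yml` fix comes down to treating both spellings as YAML everywhere files are matched. The pattern is simple (hypothetical helper, shown for illustration):

```python
from pathlib import Path

# Both spellings of the YAML extension are treated identically.
YAML_EXTENSIONS = {".yaml", ".yml"}


def is_yaml_file(path: str) -> bool:
    """Return True for .yaml and .yml files, case-insensitively."""
    return Path(path).suffix.lower() in YAML_EXTENSIONS
```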

๐Ÿ—“๏ธ Week of 2025-05-19โ€‹

🚀 Features

  • ๐Ÿค–๐Ÿ Otto's skill set just got a major upgrade! Otto can now:
    • Perform comprehensive project file health checks, including YAML linting and intelligent fixes
    • Test your Connections and help resolve any issues that arise
    • List and explore all your Connections to help you create Read Components with ease
    • Run entire Flows or individual Components, monitor completion, and help troubleshoot any errors
  • Component dependencies broke free from their limitations! You can now add any set of Components as non-data graph dependencies to any other Component, giving you ultimate flexibility.
  • Your file browser and tab bar now sport colorful Git status indicators, so you'll always know what's been changed, added, or needs attention.
  • Retry logic is now configurable for all Component types, because sometimes persistence pays off.
  • Deployment Automations can now take a well-deserved break with the new pause-all functionality.
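
Configurable retry logic like the kind described above generally follows one pattern: attempt, catch, back off, repeat up to a limit. A generic sketch (the function and parameter names are illustrative, not Ascend's Component configuration API):

```python
import time


def run_with_retries(fn, max_attempts=3, backoff_seconds=0.0):
    """Call fn(), retrying on any exception up to max_attempts times.

    Illustrative retry wrapper; Ascend's actual retry settings live on
    the Component, not in user code like this.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # Out of attempts: surface the last error.
            # Linear backoff between attempts (0 disables sleeping).
            time.sleep(backoff_seconds * attempt)
```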

🌟 Improvements

  • The Runs table got a makeover with smarter column widths that actually remember your preferences.
  • Automation forms now flow more smoothly, creating a streamlined experience that just feels right.
  • Component build errors became much more helpful, pinpointing exactly which Component is causing trouble while catching a wider range of exceptions.

๐Ÿ› ๏ธ Bug fixesโ€‹

  • Databricks connections now gracefully reconnect when encountering "Database Invalid session" errors, maintaining stable connectivity.
  • File refresh now works as expected, properly updating both cached and open files.
  • The repo save button now correctly reflects the current state, staying enabled only when appropriate.
  • Branch listing operations are now more reliable and consistent.
  • Zooming in on individual nodes in the expanded Application Component graph tab works like a charm again.
  • Databricks Connections now handle catalog references more intelligently, preventing unexpected behavior.
  • The UI now maintains proper state consistency during rapid save, build, and run operations.
  • MySQL connections with SSL=True now work correctly without throwing exceptions.
  • Table references now use their full, proper names in merge operations.
  • Build failures caused by out-of-memory conditions are now properly detected and handled gracefully.
  • Empty files no longer cause project builds to fail unexpectedly.

Agentic Data Engineering

Ascend is the industry's first Agentic Data Engineering platform, empowering teams to build and manage data pipelines faster, more safely, and at scale. With Ascend's platform, engineers benefit from the assistance of context-aware AI agents that deeply understand their data pipelines.

Meet Otto, the intelligent data engineering Agent designed to eliminate repetitive tasks, accelerate innovation, and enable faster development cycles.

Integration with Ascend platform

Otto works seamlessly with other aspects of the Ascend platform:

  • Chat with your stack: Engage in natural language conversations with Otto about your entire data infrastructure. Ask questions about data lineage, Component configurations, or pipeline performance, and receive contextual answers that incorporate knowledge of your specific environment.

  • In-line code suggestions: Receive intelligent recommendations as you write SQL, Python, or YAML. Otto analyzes your code patterns, data structures, and pipeline context to suggest optimized transformations, efficient joins, and best practices for your specific data plane.

  • Background agents: Leverage autonomous agents that continuously monitor your data pipelines, detect anomalies, and proactively suggest optimizations. These agents work silently in the background, identifying performance bottlenecks, data quality issues, and optimization opportunities without manual intervention.

  • Custom agents (coming soon): Create specialized AI assistants tailored to your organization's unique needs. Configure agents with specific business logic, data domain expertise, and compliance requirements to automate complex tasks across your data engineering workflows.

By understanding the relationships between these elements, Otto provides contextual assistance that considers your entire data engineering environment.

With Otto, you can:

  • Understand data lineage across your entire pipeline with column-level tracing
  • Transform Components between frameworks with automatic code migration
  • Implement robust data quality tests with intelligent recommendations

Discover these capabilities and many more!

โžก๏ธ Ready to Agentify your data engineering experience? Schedule a demo to see Ascend in action.

Ascend Gen3

โ˜๏ธ Gen3 is a ground-up rebuild of the Ascend platform, designed to give you more control, greater scalability, and deeper visibility across your data workflows. It's everything you already love about Ascend โ€“ now faster, more flexible, and more extensible.

  • Ascend's new Intelligence Core combines metadata, automation, and AI in a layered architecture, empowering all teams to build pipelines faster and significantly compress processing times.

  • Git-native workflows bring version control, collaboration, and CI/CD alignment to all teams through our Flex Code architecture, empowering both low-code users and developers to contribute.

  • Observability features expose detailed pipeline metadata so teams have deeper visibility into their system to diagnose problems quickly, reduce manual investigation, and optimize system behavior.

  • Modular architecture empowers data and analytics teams to manage increasingly large and complex pipelines with improved performance and maintainability.

  • Standardized plugins and extension points enable data platform teams to customize and automate workflows more easily.

โžก๏ธ Ready to explore? Join the Gen3 public preview to get early access.

Ascend Gen3 Demo

🚀 Features

Explore the latest enhancements across our platform, from improved system architectures to optimized project management. This section highlights major new functionalities designed to boost performance and flexibility.

Systems & architecture

Explore the foundational improvements in our system's architecture, designed to enhance collaboration, resource management, and cloud efficiency.

  • Version control-first design – Collaborate and track changes with Git-native workflows
  • Project-based organization – Organize and manage resources with intuitive, project-centric workflows
  • Optimized cloud footprint – Reduce infrastructure usage with a centralized UI and a lightweight, scalable backend
  • Event-driven core – Trigger custom workflows using system-generated events
  • Native Git integration – Automate CI/CD pipelines with built-in support for your Git provider

Project & resource management

Effortlessly create, share, configure, and deploy projects with streamlined processes, allowing you to spend less time on administration and more on engineering innovation.

  • Project structure – Organize and manage your data projects with improved structure and clarity
  • Environments – Configure and maintain development, staging, and production environments with software development best practices
  • Parameterized everything – Reuse and adapt pipelines with flexible, comprehensive parameterization
  • Deployments – Roll out pipelines consistently across environments with simplified deployment workflows

Industry-leading security

Protect your data and resources with enterprise-grade security features, ensuring comprehensive access control and secrets management across your organization.

  • Enterprise-grade authentication – Secure instances, projects, and pipelines with OpenID Connect (OIDC)
  • Centralized vault system – Manage secrets, credentials, and sensitive configurations securely across your entire platform

Builder experience

Discover how our builder experience enhancements simplify Component creation and improve user interaction with a modern interface.

  • Simplified Component spec - Write Components with less boilerplate and more intuitive syntax
  • Components:
    • Partition strategies - Flexible data partitioning for optimal performance
    • Data & job deduplication - Intelligent handling of duplicate data and operations
    • Incremental Components - Process only new or changed data efficiently
    • Views - Create and manage virtual tables efficiently
    • Generic tasks - Support for versatile task types, including SQL and Python scripts for complex operations
  • Data applications - Build complex data transformations from simpler reusable building blocks and templates
  • Testing support - Test Components easily with built-in sample datasets
  • Modern interface - Navigate an intuitive UI designed for improved productivity
  • Dark mode - Switch between light and dark themes with enhanced visual comfort and accessibility
  • Navigation - Access projects, Components, and resources through streamlined menus

Navigation and building experience
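
Incremental Components, listed above, rest on one core idea: compute only the partitions you haven't processed yet. Stripped of all platform machinery, the planning step looks roughly like this (illustrative sketch, not Ascend internals — the platform tracks processed state itself rather than taking it as an argument):

```python
def plan_incremental_work(all_partitions, already_processed):
    """Return the partitions still needing work, preserving input order.

    Minimal sketch of incremental planning for illustration only.
    """
    done = set(already_processed)
    return [p for p in all_partitions if p not in done]
```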

Data integration

Our data integration improvements ensure seamless connectivity and performance across major platforms, enhancing your data processing capabilities.

Performance improvements across all data planes:

  • 70% reduction in infrastructure costs*
  • 4x faster ingestion speed*
  • 2x faster runtime execution*
  • 10x increase in concurrent processing*

Data planes: Enhanced connectivity and performance across major platforms.

Snowflake - Full platform integration including:

  • SQL support
  • Snowpark for advanced data processing
  • Complete access to Snowflake Cortex capabilities

BigQuery - Comprehensive SQL support including:

  • BigQuery SQL integration
  • Built-in support for BigQuery AI features

Databricks - Complete lakehouse integration featuring:

  • SQL and PySpark support
  • Full access to AI/ML models in data pipelines
  • Support for both SQL warehouses and clusters
  • Unified compute management

* Performance metrics based on comparative analysis between Gen2 and Gen3 platforms

Data quality

Enhance your data quality management with automated checks and customizable validation rules, ensuring data integrity across your projects.

  • Automated quality gates - Validate data within Components, including read and write Components
  • Reusable rule library - Create and share standardized data quality rules across your organization
  • Python-based validation - Write custom data quality checks using familiar Python syntax
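
To give a flavor of the Python-based validation described above: a custom check is just a function over rows that reports pass/fail. A minimal sketch (the check signature is hypothetical, not Ascend's rule-library API):

```python
def check_no_nulls(rows, column):
    """Fail if any row has a null in the given column.

    Hypothetical data-quality check for illustration; Ascend's actual
    rule library defines its own interfaces.
    """
    failing = [i for i, row in enumerate(rows) if row.get(column) is None]
    return {"passed": not failing, "failing_rows": failing}
```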

Flow management

Optimize your data flows with advanced planning and execution capabilities, supporting high-frequency and concurrent processing.

  • Gen3 flow planner & optimizer – Improve pipeline performance with intelligent planning and execution
  • Flow runs – Manage and monitor individual pipeline executions with enhanced controls
  • Concurrent & high-frequency flow runs – Execute flows in parallel and at higher frequencies
  • Semantic partition protection – Preserve computed results across code changes to avoid unnecessary reprocessing
  • Optional Smart Backfills – Backfill data flexibly with advanced control over reprocessing

Automation

Leverage our automation features to create dynamic workflows triggered by real-time events, enhancing operational efficiency.

  • Event-driven extensibility - Automate workflows dynamically based on real-time platform events and triggers
  • Customizable event triggers - Create custom automation triggers, including sensors and events

Observability

Gain comprehensive insights into your data operations with real-time and historical observability, ensuring full transparency and control.

  • Unified metadata stream & repository - Centralize and track metadata across all pipelines
  • Real-time & historical monitoring - Access metadata on pipeline runs and performance history, including:
    • Live monitoring of active pipeline runs
    • Full execution history with smart commit summaries
    • Performance analytics and trend analysis
    • Complete troubleshooting visibility

AI-powered assistant (Otto 🐍)

Experience the power of AI with Otto, our assistant that helps you create, optimize, and document your data pipelines effortlessly.

  • Component creation & editing - Generate new pipeline Components or modify existing ones with natural language
  • Smart updates & recommendations - Receive intelligent suggestions for pipeline optimization, performance improvements, and descriptive commit messages
  • Automated documentation - Automatically generate and maintain comprehensive documentation for pipelines and Components

🌟 Improvements

This section highlights enhancements in Component functionality, connector improvements, and overall system optimization to boost performance and usability.

Component improvements

Data types

  • Timestamp TZ/NTZ - Enhanced timezone handling and support
  • Variant - Flexible data type for semi-structured data
  • JSON - Native JSON data type support

Connector improvements

Read connectors

  • Automatic schema field detection - Intelligent schema inference for all connectors
  • Customizable schema options - Flexible schema configuration options

Files

  • Advanced filtering - Filter files by last modified, created at, and custom combinations
  • Archive support - Native support for zip & tar archives
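
The "last modified" filtering above is the familiar mtime comparison, expressed declaratively in the connector. The underlying idea, as a sketch (illustrative; Ascend's file connectors configure this for you rather than requiring user Python):

```python
import os


def files_modified_since(paths, cutoff_epoch):
    """Keep only files whose last-modified time is at or after the cutoff."""
    return [p for p in paths if os.path.getmtime(p) >= cutoff_epoch]
```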

Warehouses

  • Enhanced data ingestion - Moving beyond single-table limitations, you can now use SQL queries to filter columns, rows, and join datasets at the source
  • Multi-table support - Easily ingest multiple tables in a single Component without complex query writing
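
The "filter at the source" idea above is easiest to see with plain SQL: instead of pulling a whole table and trimming it downstream, the ingestion query itself selects columns, filters rows, and joins. A self-contained illustration using SQLite as a stand-in for a real warehouse (the tables and query are invented for this example):

```python
import sqlite3

# In-memory stand-in for a source warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 10, 99.0), (2, 11, 5.0);
    INSERT INTO customers VALUES (10, 'EU'), (11, 'US');
""")

# Ingest only EU orders, only the columns we need, joined at the source.
eu_orders = conn.execute("""
    SELECT o.id, o.total
    FROM orders AS o
    JOIN customers AS c ON o.customer_id = c.id
    WHERE c.region = 'EU'
""").fetchall()
```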

Databases

  • Replication strategies - Advanced options for data replication
  • Materialization strategies - Flexible approaches to data materialization

๐Ÿ—ฃ๏ธ Community updatesโ€‹

Discover ready-to-use examples, comprehensive documentation, and resources created by and for the Ascend community to accelerate your development journey.

Sample projects

๐Ÿ Otto's Expeditions - Ready-to-use examples for:

Documentation

New content structure

💬 Terminology changes

We've updated some terms to better reflect their functionality:

  • Dataflow ➡️ Flow
  • Partitioned Component ➡️ Smart table

🔮 Coming soon

Stay tuned for upcoming innovations, including enhanced AI tools and comprehensive documentation improvements that will streamline your workflow.

✨ Expanded AI capabilities

  • Coding copilot - Intelligent code suggestions and completions
  • Agentic data engineering - Automated pipeline creation and optimization
  • AI-assisted migration - Use Otto to migrate from legacy data tooling like dbt or Airflow with customizable AI agents

๐Ÿข Enterpriseโ€‹

  • Organization management - Hierarchical team structures with flexible resource sharing and access controls
  • Enterprise identity & access - Fine-grained access and permission controls
  • External vault integration - Connect to your organization's existing secret management system