📢 What's new
If you want help trying our latest features, ask Otto, your intelligent data engineering agent!
🗓️ Week of 2025-10-13
🚀 Features
- Discover Ascend's capabilities with our new product overview and welcome video, designed to help you get started with confidence.
- Flow configurations now support Flow runner size overrides, letting you specify custom resource allocation per Flow instead of inheriting from the Deployment or Workspace default settings.
- Flow and Component organization gets a major upgrade with nested subdirectories, complete with collapsible directory views and summary statistics.
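A per-Flow runner size override might be expressed along these lines; the CPU/memory/disk dimensions are mentioned in these notes, but the key names and structure below are illustrative assumptions, not confirmed configuration:

```yaml
# Hypothetical Flow configuration sketch: key names are assumptions
flow:
  name: my_flow
  runner:
    size:            # overrides the Deployment/Workspace default allocation
      cpu: "2"
      memory: 8Gi
      disk: 20Gi
```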

📈 Improvements
- PostgreSQL authentication gets a security boost by switching to `pgpass` files with automatic IAM token refresh, keeping your credentials fresh and your database connections stable.
- Smart Schema is now supported on the Databricks Data Plane, bringing intelligent schema management to more of your data pipelines.
- DuckDB has been upgraded to version 1.4.1, bringing the latest performance enhancements and features to your data processing.
🛠️ Bug fixes
- Before, concurrent DuckDB operations could cause segmentation faults due to missing thread locks, with errors like `Flow runner process for my_flow/fr-0199d9de-81cb-79f3-96ca-40b9035368aa exited with code -11. Fatal Python error: Segmentation fault`. Now, proper locking protects DDL operations, query execution, and batch insertions, keeping your database operations rock solid.
- Previously, Smart Schema Components with NULL partitioning versions could fail during concurrent backfills with `Invalid partitioning version: None` errors. Now, NULL values are gracefully handled as pre-existing Components, preventing workflow interruptions.
- Before, DuckLake Connections on S3 would fail after one hour due to expired AWS credentials. Now, DuckLake automatically refreshes AWS credentials for S3 Connections every hour through a new credential management system that uses AWS profiles with background-thread refresh, ensuring long-running operations continue without interruption.
- Before, the UI had multiple memory leaks from Monaco editors, polling handlers, stacked window event listeners, and prolonged use of editor tabs in Workspaces. Now, proper cleanup ensures your browser stays responsive during long editing sessions.
- Before, paused Workspaces would time out trying to load Git status, causing unnecessary 503 errors. Now, paused Workspaces show a friendly message instead of attempting the doomed status check.

🗓️ Week of 2025-10-06
🚀 Features
- 🤖✨ New Otto capabilities:
- Otto can now send automated email alerts with optional AI-powered explanations, helping you understand what went wrong and what action to take when your data pipelines hit a snag.
- Otto's conversation limits expand from 25 to 50 maximum turns per request, reducing scenarios where complex queries run out of interaction capacity before generating complete answers.
📈 Improvements
- DuckDB now defaults to `max_combined_sql_statements=1` when using DuckLake, delivering better performance and more efficient resource utilization.
- This change prevents memory issues and out-of-memory (OOM) errors that occurred when combining multiple SQL queries simultaneously, while also addressing CPU usage inefficiencies where processing was limited to 1-2 cores regardless of available resources.
- Build performance soars with up to 67% faster builds for large projects through global Jinja2 template caching, enhanced file I/O optimization, and smarter threading that eliminates unnecessary overhead.
- Otto's Bedrock integration now supports prompt caching for Anthropic models, dramatically reducing costs and latency by reusing common system prompts and user context across requests.
- Flow runner resource allocation becomes more flexible with configurable size overrides, letting you fine-tune CPU, memory, and disk allocation per runner while maintaining consistent resource assignment.
🛠️ Bug fixes
- Before, MCP tool call responses sometimes failed due to serialization issues with complex data types like URLs. Now, proper JSON serialization handles all data types correctly, preventing runtime conversion errors.
- Fixed a segmentation fault in DuckLake by implementing a safer check for the existence of the partitioning version column.
🗓️ Week of 2025-09-29
🚀 Features
- 🤖✨ New Otto capabilities:
- Claude Sonnet 4.5 model is now the default LLM powering Otto.
- A delightful guided tour will welcome you to Ascend on your very first visit, showing you all the amazing features waiting to be discovered.

📈 Improvements
- File upload memory management gets smarter with single-threaded conversion to parquet, creating natural back-pressure to prevent out-of-memory (OOM) crashes.
- DuckDB DDL operations are now properly locked to prevent race conditions when multiple operations run simultaneously with `task-threads > 1`.
- The Ascend documentation site got a major glow up! ✨ We've sprinkled in some glassmorphic magic, radial gradients, and a whole bunch of other delightful design treats.

🛠️ Bug fixes
- Before, BigQuery Read Components with column casting would throw syntax errors like `Syntax error: Expected "<" but got ")"` due to missing fully qualified names. Now, these operations use proper table references and work smoothly.
- Before, BigQuery Flows with concurrent components writing to the same table would fail with `Transaction is aborted due to concurrent update` errors due to BigQuery's recent stricter enforcement of concurrent DML operations. Now, these operations include automatic retry logic with exponential backoff to handle concurrent update errors gracefully and ensure data processing continues without interruption.
- Before, `COALESCE` expressions in partitioned Data Plane operations could cause SQL type inference issues due to missing explicit type casting for NULL values, potentially leading to errors with strict type-checking database engines like DuckDB 1.4. Now, all `COALESCE` expressions use explicit `CAST(NULL AS {type})` syntax, ensuring consistent and reliable query execution across different database engines.
- Before, adding columns to existing tables on Databricks would fail with `[PARSE_SYNTAX_ERROR] Syntax error at or near 'EXISTS'` errors. Now, column addition works correctly across all Data Planes with proper backfill handling.
🗓️ Week of 2025-09-22
🚀 Features
- 🤖✨ New Otto capabilities:
- Otto can now use LLMs across providers with Instance-level model management settings, allowing you to enable or disable specific models and set default models across providers. Configure your settings under `AI & Models`.
- Otto works with Jinja macros, parameters, Automations, Tasks, and SSH Tunnels with updated rules that focus on agent-specific guidance and lower token usage.
- Smart Schema evolution arrives as the new default schema change strategy for Read Components with object storage and Local File Connections on all Data Planes! Your data partitions now store their own innate schemas and intelligently reconcile type differences without copying data every time the output schema changes.
- This is not a breaking change. Existing Components with these Connection types continue to use the `full` strategy (the previous default); for new Components, the `smart` strategy is the default.
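As a rough illustration, opting a Read Component into one strategy or the other might look like the sketch below. The strategy names (`smart`, `full`) come from this note, but the field name and its placement are assumptions, not confirmed configuration:

```yaml
# Hypothetical Read Component sketch: the strategy field's name and
# location are assumptions; only the values `smart`/`full` come from the note above
component:
  read:
    connection: my_s3          # illustrative Connection name
    strategy: smart            # new default; set to `full` to keep the previous behavior
```

Check the schema evolution documentation for the exact field name before relying on this.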
- Workspace auto-snooze keeps your infrastructure costs in check by automatically pausing inactive Workspaces after a configurable timeout (5-120 minutes), with smart detection of ongoing Flow runs to avoid interrupting active work.
- Auto-snooze applies to all new Workspaces with a default timeout of 10 minutes.
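A Workspace auto-snooze setting might be sketched as follows; the key name is a hypothetical placeholder, while the default and the 5-120 minute range come from the notes above:

```yaml
# Hypothetical Workspace settings sketch: key name is an assumption
workspace:
  auto_snooze_minutes: 10   # configurable between 5 and 120; ongoing Flow runs block snoozing
```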
📈 Improvements
- Focus on a Component in the build panel to jump to its place in the Flow Graph, or on a Flow to jump to its place in the Super Graph.

- Data Plane Connections now appear in your Flow Connections list, making it easier to access the Connections powering your current Flow. Additionally, you can view the number of Connections being used in a given Flow within the build info panel.
- Lists throughout your Ascend Instance now sort alphabetically for a more organized experience, including Deployment, Project, Git branch, Profile, and Environment selectors.
🛠️ Bug fixes
- PostgreSQL Read Components now preserve array and JSON column types instead of converting them to VARCHAR strings. Plus, a new `arrays_as_json` parameter lets you handle non-mappable array data with ease.
- Before, incremental merger resets would fail with malformed SQL syntax errors like `Parser Error: syntax error at or near '(' DROP TABLE IF EXISTS DuckDbTable(table='postgres__tracking_merge_snapshot_metadata'...)`. Now, qualified table names generate proper `DROP` statements across Data Planes, keeping your data transformations humming.
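The `arrays_as_json` parameter name comes from the note above, but where it lives in a Read Component is an assumption; a sketch might look like:

```yaml
# Hypothetical PostgreSQL Read Component sketch: parameter placement
# and surrounding keys are illustrative assumptions
component:
  read:
    connection: my_postgres     # illustrative Connection name
    arrays_as_json: true        # serialize non-mappable array columns as JSON strings
```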
🗓️ Week of 2025-09-15
⚠️ Breaking changes
- GCS and ABFS Connections now use more stable metadata fields (`md5Hash`, `crc32c` for GCS; `content_md5`, `etag` for ABFS) for fingerprinting, preventing unnecessary re-ingests when storage tiers change. This change requires re-ingesting existing data to adopt the new fingerprinting method.
📈 Improvements
- Deployment run pages now link directly to individual Flows instead of the Super Graph when exploring builds and errors, making troubleshooting more focused and efficient.
- Flex Code Flow creation forms are streamlined to include only essential fields (name, description, Data Plane Connection, parameters, and defaults), making Flow creation faster and more efficient.
- The BigQuery Data Plane replaced custom UDF logic with native `IF` functions for better performance and cleaner datasets.
- DuckLake on S3 now uses `httpfs` as the default Connection method, providing improved performance and stability.
- Local file caching is now supported for DuckLake when working with S3, GCS, and Azure Blob Storage, which can significantly improve data access speeds.
🛠️ Bug fixes
- Before, DuckDB's `SAFE_STRING` macro creation could fail in DuckLake environments, hitting expression depth limits with `Max expression depth limit of 1000` exceeded errors. Now, a robust Python UDF fallback ensures consistent data fingerprinting even when the SQL macro throws a tantrum.
- Fixed a critical issue where long-running DuckLake jobs would lose connection to the metadata database, improving job reliability.
- Resolved a bug that prevented DuckDB-based ingestion from working with multiple PostgreSQL Components simultaneously.
🗓️ Week of 2025-09-08
🚀 Features
- Drag files directly from your computer into the file tree to upload them instantly - no need to connect to object storage to work with your own data.
📈 Improvements
- The Instance status indicator has moved to the right side of the header next to your user menu for a cleaner, more intuitive layout.
🛠️ Bug fixes
- Fixed the `ascend view connection sample` command failing with a `pyarrow.lib.ArrowInvalid` error when PostgreSQL tables contain range type columns (date ranges, time ranges, etc.). The command now converts range types to readable string representations for successful data sampling.
- Incremental reads now properly honor start values during backfill phases, ensuring your data pipelines only process the records they should and preventing duplicate or incorrect data processing.
🗓️ Week of 2025-09-01
🚀 Features
- Database Connections just learned some new authentication moves! MySQL and PostgreSQL databases hosted by AWS can now authenticate using AWS IAM roles.
🛠️ Bug fixes
- Before, Databricks SQL connector compatibility broke due to a renamed API, with the following error: `AttributeError: module 'databricks.sql.utils' has no attribute 'ResultSetQueueFactory'. Did you mean: 'ThriftResultSetQueueFactory'?`. Now, the system gracefully handles both API versions with fallback logic, keeping your Databricks pipelines humming along.
- Before, `partition_template` functionality wasn't working in blob Write Components, causing data to land in folders with auto-generated UUID names instead of your provided template. Now, datetime templates are properly parsed and interpolated, so your data ends up exactly where you expect it.
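A blob Write Component using `partition_template` might be sketched as below, modeled on the S3 Write Component example elsewhere in these notes; the template value and surrounding keys are illustrative assumptions:

```yaml
# Hypothetical blob Write Component sketch: datetime tokens in
# partition_template are interpolated per partition instead of UUID folder names
component:
  write:
    connection: write_s3
    input:
      name: my_component
      flow: my_flow
    s3:
      path: /landing/
      formatter: parquet
      partition_template: "year=%Y/month=%m/day=%d"   # illustrative template
```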
🗓️ Week of 2025-08-25
🚀 Features
- 🤖✨ New Otto capabilities: Otto can now be used as an Action in Automations to intelligently respond to Flow events, analyze complex data patterns, and execute contextual actions across multiple platforms. This breakthrough capability empowers you to create sophisticated Event-driven workflows where Otto actively monitors pipeline health, dynamically adapts responses based on Flow run history, and delivers intelligent Slack notifications.
📈 Improvements
- Application Components can now access Flow parameters during build time, enabling conditional Component generation based on Profile and Flow parameter values. This lets Applications make build-time decisions using parameters that were previously only available at runtime, supporting advanced use cases where Component generation must be dynamically controlled by Flow configuration.
- The enhanced Automation configuration UI prevents layout issues on small screens, and Sensor, Event, and Action cards now have delete buttons for a better user experience.
🗓️ Week of 2025-08-18
🚀 Features
- Users can now create Ascend-managed Repositories directly within the platform, eliminating the need for external Git repository setup and enabling immediate Project development.
⚠️ Breaking changes
- Fixed Oracle Connection configuration validation to prevent runtime failures. Model-level validation now ensures that mutually exclusive Oracle Connection parameters (`dsn`, `database`, `service_name`) are handled correctly, providing clear error messages when conflicting parameters are supplied instead of failing at runtime.
- This breaking change requires users with existing Oracle Connections using both `database` and `service_name` fields to choose one Connection method.
🛠️ Bug fixes
- Fixed a landing table duplicate data issue in incremental persist by implementing de-duplication logic to handle potential duplicate records during incremental reads, ensuring pipeline self-healing and preventing `UPDATE/MERGE must match at most one source row for each target row` errors.
🗓️ Week of 2025-08-11
🚀 Features
- Ascend now supports DuckDB via DuckLake as a Data Plane! Follow our guide to get started.
📈 Improvements
- Database incremental reads now process up to 50% faster through concurrent download and upload operations using threading, reducing processing time for large datasets while also preventing potential disk space errors during data operations.
🛠️ Bug fixes
- Before, users encountered `Failed to list resources` errors when browsing Connection data that contained variable placeholders (like `${variable_name}`) in file paths. Now, users can seamlessly browse and explore Connection data in the explorer interface.
- Before, incremental merge queries failed with syntax errors when column names contained special characters or reserved keywords (like `INTERVAL` in BigQuery). Now, column names are properly quoted in merge queries to prevent runtime errors.
- Before, users experienced job failures and silent schema inconsistencies when running Snowflake DML operations concurrently or when data tables had metadata drift. Now, jobs are protected from Snowflake concurrency limits and schema inconsistencies are caught with clear error messages at the data storage and management layer for partitioned components.
🗓️ Week of 2025-08-04
📈 Improvements
- Building on last week's settings enhancements, the interface now features a unified design with consistent card styling for improved usability and visual cohesion.
🛠️ Bug fixes
- Fixed a memory issue in PostgreSQL Read Component incremental strategies. PostgreSQL Read Components on any Data Plane using incremental strategies (merge and SCD Type 2) were attempting to load entire tables instead of empty tables during initial runs, which could cause out-of-memory errors. The fix ensures that predicates used in min/max queries are properly applied to data-fetching queries, preventing unnecessary full table loads.
🗓️ Week of 2025-07-28
🚀 Features
- Your Flows can now write to PostgreSQL with style! The new PostgreSQL Write Component establishes support for multiple write strategies: snapshot, full, partitioned, and incremental.
📈 Improvements
- Settings got a makeover! You can now toggle between light and dark mode directly from General settings.
🛠️ Bug fixes
- Fixed an issue where Incremental Read Components with the full-refresh flag enabled would return empty results despite showing success status. Previously, the full-refresh operation only reset the output table but not the metadata table, causing the Component to incorrectly assume all data was already processed. Now, full-refresh properly resets both output and metadata tables, ensuring you get the complete dataset as expected.
🗓️ Week of 2025-07-21
🚀 Features
- 🤖✨ New Otto capabilities:
- Otto now provides smart code completion suggestions when you're developing in the UI, powered by context from rules and files.
- Otto can now be configured with an Ascend-native Automation to send activity summary emails! This feature keeps you in the loop about all the behind-the-scenes action on your Flow runs without manual monitoring. See our how-to guide for detailed instructions.
- The Files panel just got smarter! You can now drag and drop existing files directly into place, making file management faster and more intuitive.
- Hold down the option/alt key while dragging files to create copies on the fly.
- The Files panel right-click menu now includes a convenient `Duplicate` option, letting you clone files with just two clicks.
⚠️ Breaking changes
- The default Write Component strategy changed from Smart (partitioned) to Simple to align with other areas of the product. Although Smart Components are more scalable, they're also more difficult to get started with and can be slower for small, simple datasets.
- If you were relying on the previous Smart Component default behavior, you can restore it by explicitly setting `strategy: partitioned` in your Write Component configuration. See our write strategies documentation for more details.
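Restoring the previous default might look like the sketch below. The `strategy: partitioned` setting comes from the note above; the surrounding Write Component keys are modeled on the S3 example elsewhere in these notes and should be treated as illustrative:

```yaml
# Write Component sketch restoring the previous Smart (partitioned) default
component:
  write:
    connection: write_s3        # illustrative Connection name
    input:
      name: my_component
      flow: my_flow
    strategy: partitioned       # restores the pre-change Smart behavior
```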
🛠️ Bug fixes
- Previously, the SSH public key input box was appearing for all secret fields, not just SSH-specific ones, causing confusion in forms. Now, the SSH public key input knows its place and appears only for SSH secret fields where it belongs!
- Before, date indicators in the Flow runs timeline view were jumpy and unstable due to CSS issues. Now, we've ensured a smooth and consistent timeline experience with stable positioning.
- Fixed BigQuery Transform queries that failed when referencing newly created columns in the same query (e.g., `SELECT '1' AS foo, foo AS bar`). These queries now work by automatically moving column references to a separate CTE.
🗓️ Week of 2025-07-07
🚀 Features
- 🤖✨ New Otto capabilities:
- Otto now supports custom agents, letting you create unique personalities and behaviors tailored to your data workflows. Define your agents in markdown, customize their traits, and even select their preferred language for a personalized touch.
- Tool inclusion for agents supports powerful wildcards: use `[category].*` for all tools of a specific type, or `*` for everything, streamlining agent setup.
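A wildcard tool list for an agent might be sketched as below. Only the wildcard syntax (`[category].*` and `*`) comes from the note above; the agent structure, field names, and the `flows` category are illustrative assumptions:

```yaml
# Hypothetical agent definition sketch
agent:
  name: pipeline_helper
  tools:
    - "flows.*"   # all tools in a hypothetical "flows" category
    - "*"         # or simply grant everything
```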
- Otto introduces custom rules, enabling you to add special instructions for chat interactions. Your Otto, your rules!
- Otto now connects to external MCP servers (used in Slack Otto Automations) via simple configuration files, expanding your agents' capabilities.
- Otto's configuration is more flexible with a new `otto.yaml` file in a dedicated `otto` directory, allowing you to configure agents and MCP server access in one centralized place for easy management.
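An `otto/otto.yaml` might be laid out roughly as follows. The file name and directory come from the note above, but every key and value in this sketch is an illustrative assumption rather than documented configuration:

```yaml
# Hypothetical otto/otto.yaml sketch: structure and key names are assumptions
agents:
  - file: agents/pipeline_helper.md   # illustrative markdown agent definition
mcp_servers:
  - name: slack                        # illustrative external MCP server entry
    config: mcp/slack.yaml
```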
📈 Improvements
- Write Components for blob storage (ABFS, GCS, S3) are now more adaptable: you can configure the chunk size using the `part_file_rows` field for change blobs to optimize performance based on your data needs and avoid out-of-memory errors. This example shows a partitioned Write Component with a chunk size of 1000 rows:

```yaml
component:
  write:
    connection: write_s3
    input:
      name: my_component
      flow: my_flow
    s3:
      path: /some_other_dir/
      formatter: json
      part_file_rows: 1000
```
🛠️ Bug fixes
- Previously, database Read Components (PostgreSQL, MySQL, Microsoft SQL Server, Oracle) converted SQL null values to the literal string 'None' when loading query results into PyArrow arrays, which could cause issues in downstream data processing (e.g., treating 'None' as a string instead of a true null). Now, null values are properly preserved in your data.
- This change affects all pipelines using database Read Components across all Data Planes and applies to any data type that can be null in your source tables.
- If you previously had to add cleanup steps to handle 'None' string values resulting from this issue, you can now safely remove those from your pipeline.