Version: 3.0.0

Create a Smart Snowpark Transform

This guide shows you how to build a Smart Snowpark Transform that uses intelligent partitioning to process only relevant subsets of data, dramatically improving performance for queries based on partitioned fields.

Snowpark is Snowflake's developer framework that enables processing data directly where it's stored using familiar programming languages like Python.

Snowflake only

Snowpark is available only on Ascend Instances running on Snowflake. Check out our Quickstart to set up a Snowflake instance.

Prerequisites

Create a Transform

You can create a Transform in two ways: through the form UI or directly in the Files panel.

  1. Double-click the Flow where you want to add your Transform
  2. Right-click on an existing component (typically a Read component or another Transform) that will provide input data
  3. Select Create Downstream → Transform
  4. Complete the form with these details:
    • Select your Flow
    • Enter a descriptive name for your Transform (e.g., sales_aggregation)
    • Choose the appropriate file type for your Transform logic

Create your Smart Snowpark Transform​

Follow these steps to create a Smart Snowpark Transform:

  1. Import required packages:

    • Ascend resources (Snowpark, ref)
    • Snowpark objects (DataFrame, Session)
  2. Apply the @snowpark() decorator with smart settings:

    • Specify your inputs using refs
    • Add reshape="map" to the input ref to enable Smart partitioning
    • Use event_time for time-series processing, such as smart backfills or time-range runs.
    • Include cluster_by to optimize table setup, ideally aligning it with the partitioning strategy when columns match.
  3. Define your transform function:

    • Your function can access partitioning information through context.partitioning
    • Implement your data processing logic
    • Return a DataFrame with the processed data
What makes a Transform "Smart"

Adding the reshape="map" parameter to your input ref is what turns a regular Snowpark Transform into a Smart Transform. It lets Ascend track which partitions (based on your cluster_by fields) contain which data, so only the partitions affected by new or changed input need to be reprocessed.
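To build intuition for what a map reshape does, here is a minimal pure-Python sketch (illustrative only — Ascend performs this inside Snowflake, and the helper names here are hypothetical, not part of the Ascend API): rows are grouped by a partition key, and the transform function runs once per partition rather than once over the whole table.

```python
from collections import defaultdict


def run_as_map(rows, partition_key, transform_fn):
    """Hypothetical illustration: group rows by the partition key, then
    apply the transform to each partition independently, the way a
    map reshape conceptually works."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[row[partition_key]].append(row)
    # Each partition is processed in isolation; on incremental runs,
    # partitions whose input data is unchanged could be skipped entirely.
    results = []
    for _key, part in partitions.items():
        results.extend(transform_fn(part))
    return results


rides = [
    {"cab_type": "yellow", "fare": 12.5},
    {"cab_type": "green", "fare": 8.0},
    {"cab_type": "yellow", "fare": 20.0},
]

# Identity transform: each partition passes through unchanged
out = run_as_map(rides, "cab_type", lambda part: part)
```

Because each partition is handled independently, a query filtered on the partition key only ever touches the relevant slice of data.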

Example

Here's an example of a Smart Snowpark Transform:

smart.py
from snowflake.snowpark import DataFrame as SnowparkDataFrame
from ascend.resources import ref, snowpark


@snowpark(
    inputs=[
        ref("cab_rides", reshape="map"),
    ],
    event_time="pickup_datetime",
    cluster_by=["cab_type"],
)
def cab_rides_smart_map_snowpark(cab_rides: SnowparkDataFrame, context):
    return cab_rides
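The example above is an identity transform: it returns its input unchanged. A more realistic sketch might aggregate within each partition — for instance, averaging fares per cab type. The column name fare_amount below is an assumption about the input schema, used purely for illustration:

```python
from snowflake.snowpark import DataFrame as SnowparkDataFrame
from snowflake.snowpark.functions import avg, col
from ascend.resources import ref, snowpark


@snowpark(
    inputs=[
        ref("cab_rides", reshape="map"),
    ],
    event_time="pickup_datetime",
    cluster_by=["cab_type"],
)
def cab_rides_avg_fare(cab_rides: SnowparkDataFrame, context):
    # With reshape="map", this runs over each partition independently,
    # so each cab_type slice is aggregated in isolation.
    return cab_rides.group_by("cab_type").agg(
        avg(col("fare_amount")).alias("avg_fare")
    )
```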

Check out our reference guide for complete parameter options, advanced configurations, and additional examples.

🎉 Congratulations! You've successfully created a Smart Snowpark Transform in Ascend.