Lab 1: Agentic Analysis
Track 2: Agentic Analytics · Day 2 breakout lab
In this lab you'll use Otto to build a comprehensive analysis from raw data sources. The business case: GreenTech Manufacturing faces a critical sustainability challenge across its five UK production facilities. The company consumes over 42 million kWh annually running energy-intensive operations, and with UK carbon offsets priced at £50/ton CO2, that footprint is becoming a significant financial burden.
The key insight: the carbon intensity of the UK's electricity grid swings dramatically throughout the day. Your mission is to build an intelligent forecasting system that learns from 30 days of historical weather and carbon data to predict low-carbon windows up to 7 days in advance, then identifies which flexible operations can be strategically shifted from high-carbon hours into those windows.
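To get a feel for the stakes, run some rough numbers (illustrative figures, not lab data): UK grid intensity can swing from around 300 gCO2/kWh at the evening peak to under 100 gCO2/kWh on a windy night. Shifting just 10% of the 42 million kWh annual load from a 300 g window to a 100 g window avoids 4.2 million kWh × 0.2 kg/kWh = 840 tonnes of CO2, worth roughly £42,000 a year at £50/ton.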
You're not writing code — Otto generates it for you. Your job is to ask good questions, inspect what Otto produces, and redirect when it's not quite right. Think of Otto as an analyst who works at the speed of thought — your job is to direct the work, not execute it.
Before you start
- Complete Hands-On Lab: Getting Agentic on Day 1
- You should have an Ascend account and an existing Project
- No SQL experience required — though it helps if you want to go deeper
The plan
- Build the full carbon + operations optimization pipeline with a single prompt
- Verify the pipeline runs successfully end-to-end
The dataset
You're working with five data sources that together tell the story of your manufacturing energy footprint:
| Source | What it provides |
|---|---|
| UK Carbon Intensity API | Real-time and historical carbon intensity of the UK electricity grid |
| Open-Meteo Weather API | Historical and forecast weather data for predictive modeling |
| Facilities (CSV) | 5 UK manufacturing sites with annual energy consumption and location |
| Machines (CSV) | 125 machines with energy draw, type, and whether they can be rescheduled |
| Production Schedule (CSV) | Weekly shift patterns showing when each machine runs |
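Both APIs are public and need no API key, so you can poke at them directly. Here's a minimal sketch of the kind of requests Otto's Read Components issue (the endpoints are real; the coordinates and date range are illustrative):

```python
import requests

# UK Carbon Intensity API: half-hourly gCO2/kWh between two timestamps
carbon = requests.get(
    "https://api.carbonintensity.org.uk/intensity/2024-01-01T00:00Z/2024-01-07T00:00Z"
).json()["data"]

# Open-Meteo historical archive: hourly weather for one site
# (illustrative coordinates near Birmingham)
weather = requests.get(
    "https://archive-api.open-meteo.com/v1/archive",
    params={
        "latitude": 52.48,
        "longitude": -1.90,
        "start_date": "2024-01-01",
        "end_date": "2024-01-07",
        "hourly": "temperature_2m,wind_speed_10m",
    },
).json()["hourly"]

print(len(carbon), "carbon readings;", len(weather["time"]), "weather hours")
```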
Step 1: Build the data pipeline
A data pipeline reads data from one or more sources, transforms it, and delivers the results to a destination. Pipelines are a core concept in data engineering and the foundation of most data products. Here, you're building one that reads the 5 sources above and uses SQL and Python code to transform and analyze the data.
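If that's abstract, the whole idea fits in a few lines. A toy read-transform-write pipeline in pandas (file and column names are illustrative, not the lab's actual schema):

```python
import pandas as pd

# Read: pull data from two sources (illustrative file names)
machines = pd.read_csv("machines.csv")
schedule = pd.read_csv("production_schedule.csv")

# Transform: join the sources and derive a new column
hourly = schedule.merge(machines, on="machine_id")
hourly["kwh"] = hourly["energy_draw_kw"] * hourly["shift_hours"]

# Write: land the result in a single destination
hourly.to_csv("hourly_load.csv", index=False)
```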
You're going to give Otto a single prompt, and it will build the entire optimization pipeline — combining your operations data with live weather and carbon intensity data to find the scheduling windows that save the most.
Open Otto with Ctrl + I (or Cmd + I on Mac), start a new thread, and paste the following prompt:
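Your lab handout has the exact prompt; if you don't have it handy, an illustrative version that covers the same ground looks like this:

```
Build a pipeline for GreenTech Manufacturing that:
1. Ingests 30 days of historical UK grid carbon intensity from the UK
   Carbon Intensity API, plus matching historical weather from Open-Meteo
   for each of our 5 facilities.
2. Reads the facilities, machines, and production schedule CSVs.
3. Pulls a 7-day weather forecast and builds a predictive model for grid
   carbon intensity over the next 7 days.
4. Identifies which schedulable machines can shift from high-carbon hours
   to low-carbon windows, and calculates the CO2 and cost savings at £50/ton.
Run the flow and fix any errors until it succeeds end-to-end.
```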
Watch Otto work
Sit back and watch. Otto will:
- Create Python Read Components for weather and carbon intensity APIs
- Build a weather forecast component for the next 7 days
- Write SQL transforms to join all five data sources
- Build a predictive model for carbon intensity (a simple sketch of one follows this list)
- Calculate optimal scheduling windows for every machine
- Run the flow and iterate through any failures
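The predictive model doesn't have to be exotic. Here's a deliberately simple sketch, assuming hourly history and forecast tables with illustrative file and column names (Otto's generated code will be more involved):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# 30 days of joined history: hourly weather + grid carbon intensity
# (file and column names are illustrative)
history = pd.read_csv("weather_carbon_history.csv")
features = ["temperature_2m", "wind_speed_10m", "hour_of_day"]

model = LinearRegression()
model.fit(history[features], history["carbon_intensity"])

# Score the 7-day weather forecast and flag the cleanest hours
forecast = pd.read_csv("weather_forecast_7d.csv")
forecast["predicted_intensity"] = model.predict(forecast[features])
low_carbon_windows = forecast.nsmallest(24, "predicted_intensity")
```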
Otto isn't generating code in isolation — it's looking at the actual data coming back from the APIs and cross-referencing it with your operations data. If something fails, Otto reads the actual error message and fixes the issue based on what it observed.
This is a complex pipeline with five data sources and a predictive model. Otto will likely hit errors and fix them over five or more iterations. Each round, Otto reads the actual error logs and adjusts its approach. If Otto is still stuck after 15 minutes, try giving it a specific hint about the error you see in the flow run logs.
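A useful hint quotes the failure back to Otto. For example (illustrative):

```
The carbon intensity read component is failing with a KeyError on "data".
Look at the raw JSON the API actually returns and fix the parsing.
```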
Step 2: Verify the pipeline
Once the flow runs successfully, take a minute to spot-check the results before moving on to Lab 2. Ask Otto:
```
Give me a quick summary of the optimization results. How many machines
can we reschedule? What are the projected savings?
```
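If you'd rather check the numbers yourself, a quick pandas spot-check works too. This assumes the flow writes its recommendations somewhere you can read them; the file and column names below are illustrative:

```python
import pandas as pd

# Illustrative output names; use whatever your flow actually produced
recs = pd.read_csv("scheduling_recommendations.csv")

print(f"{recs['machine_id'].nunique()} machines can be rescheduled")
print(f"projected annual savings: £{recs['annual_savings_gbp'].sum():,.0f}")
```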
[TODO: screenshot of a successful flow run in the Ascend UI]
You just did in 20 minutes what would typically take days — connecting to two live APIs, ingesting 30 days of historical data, building a predictive model, cross-referencing with your operations data, and producing scheduling recommendations with cost savings. Otto handled the Python, SQL, and API calls. You directed the analysis. That's the division of labor that scales.
You've completed Lab 1!
You should now have:
- Built the carbon + operations optimization pipeline from a single prompt
- Verified the pipeline runs end-to-end
- Spot-checked the optimization results
Need help? Ask a bootcamp instructor or reach out in the Ascend Community Slack.
Next steps
Continue to Lab 2: Verifying Agent Output to explore the data, verify the optimization logic, and build visualizations you'd be confident presenting to an operations VP.
Questions?
Reach out to your bootcamp instructors or support@ascend.io.