
Durable Skills for the Agentic Era

The pipeline ran overnight without you. The revenue-by-channel flow you built in the last module completed on schedule — the agent generated the SQL against your code standards file, you reviewed the output, caught one edge case in the revenue filter, and approved. From there it ran on its own. You — a data engineer — directed, verified, corrected course once, and shipped. Total decision time: under an hour. Hands-on execution time: zero.

New here?

This module follows Your First Agentic Pipeline, where you built a working data pipeline alongside an AI agent. If you're arriving fresh, skip back for the hands-on context — or read on; the framework below stands on its own.

And then the question surfaces: if the agent can do that, what am I actually for?

It's a fair question. It deserves an honest answer — not "don't worry, engineers will always be needed" (reassuring but useless), and not "your job is disappearing" (dramatic but wrong). The actual answer is more specific and more actionable: some of what you do today will be automated. The parts that remain — and compound in value — are identifiable. The engineers who understand this and build deliberately toward those skills will have more leverage, not less.

What you'll walk away with

Name the five durable skills that compound most in the agentic era, rate yourself on each, and commit to one concrete action to start closing your biggest gap.

The honest answer

The automation story in data engineering is real. Teams that have deployed agentic systems describe agents increasingly handling triage for schema drift (unexpected changes to the structure of source data tables), connector maintenance, pipeline failure investigation, and routine transformation updates — work that is shifting from human-executed to human-supervised. These reports are anecdotal patterns from early deployments, not a systematic survey, but the direction is consistent. That work isn't going away — it's shifting.

What isn't shifting: the judgment about what to build and why, the understanding of what the data actually means in the business context, the ability to evaluate whether the agent's output is correct, and the system design decisions that determine whether the whole thing works reliably. These are harder to automate than writing SQL, and they become more valuable as agents do more of the execution work.

Demand for AI fluency grew 7× in two years in US job postings, per McKinsey/MGI research. Workers with strong AI skills are commanding a 56% wage premium in some markets, per the PwC Global AI Jobs Barometer. Gartner projects that generative AI will require 80% of the engineering workforce to upskill through 2027. The trend is clear — and it's not that engineers become unnecessary. It's that the skills that matter most are shifting.

Quick check

In one sentence: which of the five durable skills do you already use when you review agent output or specify a pipeline — and which feels least natural today?

The engineers who thrive aren't the ones who hand everything to agents. They're the ones who know how to design systems where agents and humans each do what they're actually good at — and who can tell the difference between output that's right and output that looks right.

Five skills that compound

These are the skills that increase in value as agents handle more execution work. Each one is learnable and practicable now — you don't need to wait for better agents.

#  Skill
1  Outcome clarity and execution
2  Context engineering
3  Critical evaluation
4  Systems thinking
5  Relentless curiosity

1. Outcome clarity and execution

An agent executes; you define the goal. As agents become more capable executors, the value of precise goal specification increases proportionally. "Build me a customer retention metric" and "Build a rolling 90-day retention rate, defined as the percentage of customers with at least one purchase in the current 90-day window who also had a purchase in the prior 90-day window, at customer-segment grain (grain = the level at which each row represents one entity — here, one row per customer per segment)" are both valid requests. Only the second one reliably produces what you actually want.
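The second specification is precise enough to implement directly. As a minimal sketch (in Python rather than SQL, assuming a hypothetical `purchases` shape of `(customer_id, segment, purchase_date)` tuples — the function name and arguments are illustrative, not from any library):

```python
from datetime import date, timedelta

def rolling_retention(purchases, as_of, window_days=90):
    """Rolling retention at customer-segment grain.

    purchases: iterable of (customer_id, segment, purchase_date) tuples.
    Returns {segment: rate}, where rate is the share of customers with at
    least one purchase in the current window [as_of - 90d, as_of) who also
    had a purchase in the prior window [as_of - 180d, as_of - 90d) --
    i.e. the definition spelled out in the request above.
    """
    cur_start = as_of - timedelta(days=window_days)
    prior_start = as_of - timedelta(days=2 * window_days)

    current, prior = {}, {}  # segment -> set of active customer ids
    for cust, seg, d in purchases:
        if cur_start <= d < as_of:
            current.setdefault(seg, set()).add(cust)
        elif prior_start <= d < cur_start:
            prior.setdefault(seg, set()).add(cust)

    # One row per segment: the stated grain.
    return {
        seg: len(custs & prior.get(seg, set())) / len(custs)
        for seg, custs in current.items()
    }
```

Notice how every ambiguity in the vague version — window boundaries, grain, what "retention" divides by — becomes an explicit line of code here. That is what precise specification buys you.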

This applies to your team's work, too — not just prompts. The data engineer who can translate a stakeholder request ("I want to understand our best customers") into a precisely specified output schema with defined metrics and clear methodology is performing a function that doesn't compress easily into an agent call. That translation — from fuzzy business need to precise technical specification — is the work that remains.

Execution is a commodity; specification is leverage. The more capable agents become at the former, the more the latter differentiates you.

2. Context engineering

Context engineering is the discipline of giving AI systems access to the right information, at the right level of specificity, at the right time. It's the difference between an agent that confabulates (generates plausible-sounding but fabricated output) and one that makes decisions you'd make yourself if you had the same information. A 2025 survey of context engineering marks its emergence as a recognized technical discipline.

Context engineering is the new SQL fluency. A decade ago, the practitioner who understood SQL deeply — not just how to write queries, but when to use indexes, how to structure CTEs for performance, where joins go wrong at scale — had a structural advantage over those who didn't. Context engineering is at the same inflection point now. It's learnable, it's high-leverage, and most practitioners aren't thinking about it yet.

In practice: writing the structured reference files your agents read (code standards, data contracts, schema documentation); designing retrieval strategies — how you choose which docs, tables, or snippets the model receives for each task — that surface the most relevant context rather than dumping everything; knowing when to summarize rather than append. These are the skills that determine whether your agents make good decisions or expensive mistakes.
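The retrieval-strategy idea can be sketched minimally. This is an illustrative keyword-overlap ranker, not any particular library's API — `select_context`, its arguments, and the document shapes are all hypothetical; a production system would use embeddings or BM25, but the shape is the same: score, rank, pack, stop at the budget.

```python
def select_context(task, docs, budget_chars=4000):
    """Rank reference docs by keyword overlap with the task description,
    then pack the best-scoring ones into a fixed character budget.

    docs: {doc_name: doc_text}. Returns the names of docs to send.
    """
    task_words = set(task.lower().split())

    def score(text):
        # Naive relevance: shared words between task and doc.
        return len(task_words & set(text.lower().split()))

    ranked = sorted(docs.items(), key=lambda kv: score(kv[1]), reverse=True)

    selected, used = [], 0
    for name, text in ranked:
        # Skip irrelevant docs entirely; don't blow the budget.
        if score(text) == 0 or used + len(text) > budget_chars:
            continue
        selected.append(name)
        used += len(text)
    return selected
```

Even this toy version encodes the two decisions that matter: relevance over completeness, and a hard budget instead of dumping everything into the prompt.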

3. Critical evaluation

You ran through verification in the lab: aggregate sanity checks, drill-downs on unexpected values, asking the agent to walk through its reasoning. That's the beginning of critical evaluation — the ability to distinguish between output that is correct and output that merely looks correct.

This skill matters more as agents do more. When humans wrote every line of SQL, the logic was visible and auditable. When agents write the SQL and humans verify the output, the verification discipline becomes the quality control layer. Human oversight remains critical in high-stakes and complex deployments, as a systematic review of human-in-the-loop AI documents. That oversight isn't blanket skepticism of agents — it's a precise understanding of where their failure modes concentrate.

The practitioners who will be most trusted in data organizations are the ones who can evaluate agent output reliably: spot the plausible-but-wrong metric, catch the join that's one condition too broad, identify when an agent has filled in a missing context assumption with a confident-sounding confabulation. This is not a natural skill — it's developed deliberately through practice.
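A verification pass like the one described can be made concrete. A minimal sketch, assuming a hypothetical result set of `{'segment', 'revenue'}` rows and a known control total — the checks themselves (nulls, control totals, grain) are the point, not the function shape:

```python
def sanity_checks(rows, total_expected=None, rel_tol=0.01):
    """Cheap aggregate checks to run before trusting an agent-produced
    metric table. Returns a list of human-readable failures; empty = pass.
    """
    failures = []

    # 1. Nulls and negatives where the metric definition forbids them.
    for i, r in enumerate(rows):
        if r.get("revenue") is None:
            failures.append(f"row {i}: null revenue")
        elif r["revenue"] < 0:
            failures.append(f"row {i}: negative revenue {r['revenue']}")

    # 2. Segment rows should sum back to a known control total.
    if total_expected is not None:
        total = sum(r["revenue"] or 0 for r in rows)
        if abs(total - total_expected) > rel_tol * total_expected:
            failures.append(
                f"segment total {total} disagrees with control {total_expected}")

    # 3. Grain check: one row per segment, by definition.
    segs = [r["segment"] for r in rows]
    if len(segs) != len(set(segs)):
        failures.append("duplicate segment rows: grain violated")

    return failures
```

None of these checks proves the metric is right — they prove it isn't wrong in the cheap-to-catch ways, which is where plausible-but-wrong output usually hides.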

Quick check

Before you trust a metric or join from an agent, ask: What would have to be wrong for this to look right? That one question trains the verification habit.

4. Systems thinking

Agents are fast and narrow. They execute the task they're given within the context they have. They don't see the organizational dependencies, the data contract implications for the partner team, the three-month business cycle that makes a Tuesday metric look like a data quality issue, or the political context that determines whether a proposed schema change will actually be accepted.

What systems thinking looks like in practice:

  • Tracing how a schema change propagates through dependent pipelines and dashboards before approving it
  • Recognizing when a late-arriving data source creates metric inconsistency across two reports — not just a data quality alert
  • Identifying when an automated remediation in one system quietly creates a side effect in another
  • Mapping organizational dependencies alongside technical ones when proposing changes

You hold the picture — the whole system, the stakeholders, the historical context, the failure cascades — and that domain is where human judgment is structurally necessary. This compounds with experience in ways that agent capability doesn't currently replicate.

As agents handle more execution, the engineers who maintain the system-level view become more valuable to their organizations, not less.

5. Relentless curiosity

The tools change every few months. Teams that develop these skills now will be well positioned as agentic workflows become increasingly standard across data engineering. The practitioners who thrive across multiple waves of change — not just this one — are the ones who keep experimenting, stay close to what's emerging, and develop informed opinions about what's worth adopting and what's hype.

Build a personal evaluation rubric

When experimenting with a new tool, answer three questions: What does it do well? Where does it fail? How does it fail? That last question separates the practitioners who understand tools from the ones who've just used them.

This isn't "always be learning" as a motivational poster. It's a specific professional discipline: build small projects with new tools to understand their actual capabilities and limitations, not just their marketing; follow the research that matters (academic benchmarks, production retrospectives) rather than the breathless takes; develop a personal rubric for evaluating new tools.
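The "how does it fail?" question can even be tracked quantitatively. A sketch, assuming a hypothetical trial log you keep while experimenting, with failures tagged loud (obvious error) or silent (confidently wrong):

```python
def failure_profile(trials):
    """Summarize how a tool fails, not just how often.

    trials: list of dicts with 'correct' (bool) and 'failure_mode'
    (None, 'loud' for an obvious error, or 'silent' for confidently
    wrong output). Silent failures are the dangerous ones.
    """
    total = len(trials)
    return {
        "accuracy": sum(t["correct"] for t in trials) / total,
        "loud_failures": sum(1 for t in trials if t["failure_mode"] == "loud"),
        # The answer to "how does it fail?" -- errors you won't notice.
        "silent_failures": sum(1 for t in trials if t["failure_mode"] == "silent"),
    }
```

Two tools with identical accuracy can have very different silent-failure counts — and only one of them belongs anywhere near a production pipeline.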

Curiosity isn't just attitude — it's a compounding asset.

What this means for your role

The data engineering role is shifting from execution-heavy to judgment-heavy. Less hand-writing of boilerplate code, more system design. Less log-diving, more evaluation of agent-proposed diagnoses. The maintenance work that currently fills team backlogs is precisely what's shifting toward agents — freeing capacity for architecture decisions, domain logic, and stakeholder conversations that agents can't replace.

What's shifting toward agents → What's shifting toward you

  • Writing boilerplate SQL and connectors → Specifying the outcomes that SQL and connectors should achieve
  • Initial pipeline failure investigation → Evaluating the agent's diagnosis and approving the fix
  • Repetitive schema change propagation → Designing the schema governance process the agent operates within
  • Routine data quality checks → Defining what "normal" looks like for a dataset
  • First-draft transformation code → Architecture decisions on how transformations compose

The shift doesn't happen overnight, and it doesn't happen uniformly across teams. But it's directional and accelerating. The engineers who invest now in the five durable skills — outcome clarity, context engineering, critical evaluation, systems thinking, and relentless curiosity — will be better positioned than those who wait to see how it plays out.


Exercise: Audit the Output

⏱ 10 minutes

Critical evaluation — distinguishing output that is correct from output that merely looks correct — is the skill this module argues will compound most as agents do more execution work. This prompt gives you a live target to practice on.

Open any LLM — Claude, ChatGPT, or Gemini work well — and paste this:

Here is a metric an agent produced from an orders_daily analysis:

"Repeat customer rate: 73%
Definition used: customers with more than one order in the dataset
Time period: all-time
Top segment by repeat rate: Loyal customers (5+ bookings) — 94% repeat rate
One-time customers: 0% repeat rate (by definition)"

Before you would trust this metric in a stakeholder report, what questions would you ask? Identify at least three things that could make this number technically correct but misleading or wrong for a specific business use case.

What to notice: Notice how many issues the LLM surfaces that aren't obviously wrong at first glance. "All-time" as a time window inflates repeat rates for any growing business. One-time customers at 0% is circular — by definition they can't repeat, so whether to include them affects the overall rate in a way that may or may not match the business question. The 73% figure will look very different depending on how the time window, the population, and the definition of "order" are scoped. This is critical evaluation: not accepting agent output simply because it contains numbers, but asking what would have to be wrong for it to look right before putting it in front of a stakeholder.
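You can see the time-window effect directly. A sketch, assuming a hypothetical `orders` list of `(customer_id, order_date)` tuples — the same data yields very different "repeat rates" depending on how the window is scoped:

```python
from datetime import date, timedelta

def repeat_rate(orders, window_days=None, as_of=None):
    """Share of customers with more than one order, under an explicit
    time window. orders: list of (customer_id, order_date) tuples.
    window_days=None reproduces the agent's 'all-time' definition.
    """
    if window_days is not None:
        start = as_of - timedelta(days=window_days)
        orders = [(c, d) for c, d in orders if start <= d < as_of]

    counts = {}
    for cust, _ in orders:
        counts[cust] = counts.get(cust, 0) + 1
    if not counts:
        return 0.0
    # Numerator: customers with 2+ orders; denominator: all customers
    # active in the window -- both choices are part of the definition.
    return sum(1 for n in counts.values() if n > 1) / len(counts)
```

Run it with `window_days=None` and then `window_days=90` on the same data and the "repeat rate" can drop by half — the number didn't change because the business changed, only because the scoping did. That is exactly the question to put to the agent before the metric reaches a stakeholder.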

Where to go from here

ADE Foundations gives you the conceptual foundation and the first hands-on experience. ADE 201 — Systems Design — goes deeper on the questions that become real when you start putting agents in production: how do you design multi-agent architectures that fail gracefully? How do you implement context engineering at scale? How do you govern autonomous action in a data organization? If the lab left you wanting to understand the engineering behind what you built, ADE 201 is the natural next step.

Keep the conversation going

The Ascend community is a good place to continue: practitioners building agentic systems, sharing what's working, and discussing what's not. Whatever tools you use, the patterns — outcome clarity, context engineering, verification — apply across any agentic stack. Join the Ascend Community Slack.

Key takeaways
  • The role is shifting, not disappearing. Demand for AI fluency grew 7× in two years in US job postings, per McKinsey/MGI research. The skills that compound most — outcome clarity, context engineering, critical evaluation, systems thinking, curiosity — are all learnable now, before they become table stakes.
  • Context engineering is the highest-leverage skill to develop today. It's at the same inflection point SQL fluency was a decade ago — teachable, high-value, and not yet widely understood by most practitioners.
  • Verification is the quality control layer in agentic systems. Human oversight remains critical in high-stakes and complex deployments, per a systematic review of human-in-the-loop AI — specifically the ability to distinguish output that is correct from output that looks correct.

ADE 201 — Systems Design — addresses the hard architectural questions that arise once you put agents in production: how do you design multi-agent systems that fail gracefully, implement context engineering at scale, and govern autonomous action across a data organization?

Next: ADE Systems Design →

Additional Reading