Replies: 1 comment
Hi, out of curiosity, do you have one instance with more than 100 thousand DAGs? This is the biggest number I have heard of. I thought 10k was a lot.
Hello there,
We recently migrated to Airflow V3 and we already love its new features, well done for all the work!
However, we are struggling to understand some behaviors around DAG versioning. Here is our scenario.
We have a DAG that triggers multiple DAG runs (~100k). On the trigger DAG we've set `wait_for_completion=False`, so we end up with ~100k runs in the `queued` state (we limit concurrency at the DAG level to 100 parallel runs). We then made a change to the DAG code and pushed it (the change is the pool name on one task). A new version of the DAG was created, and we could check in the db that the data in the `serialized_dag` table includes the pool change. From that point on, runs going from queued to running (and then success) show in the UI as having run on the latest version, but the task instances kept using the old pool.
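For reference, the fan-out pattern described above looks roughly like the sketch below. All DAG ids, the schedule, and the conf payload are placeholders, not our actual code, and the operator's import path differs between Airflow versions (in Airflow 3 it lives in the standard provider package):

```python
# Sketch of the setup described above: a controller DAG fans out many runs of
# a worker DAG without waiting for them. Ids and the range are placeholders.
import pendulum

from airflow import DAG
# In Airflow 3 this import is typically:
#   from airflow.providers.standard.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

with DAG(
    dag_id="controller",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    schedule=None,
    catchup=False,
) as dag:
    triggers = [
        TriggerDagRunOperator(
            task_id=f"trigger_{i}",
            trigger_dag_id="worker",       # the DAG capped at 100 parallel runs
            wait_for_completion=False,     # so triggered runs pile up as queued
            conf={"batch": i},
        )
        for i in range(3)                  # ~100k in the real scenario
    ]
```

The worker DAG itself would carry `max_active_runs=100` to enforce the 100-parallel-runs cap mentioned above. Treated here as a configuration sketch rather than runnable standalone code, since it needs a live Airflow installation.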
We've read different things and conducted different tests, but we're struggling to pin down precisely when the DAG version is determined. Is it when the run is queued, or when it moves to running?
Side question: our pattern of triggering ~100k runs from one trigger DAG seems a bit odd, and we have trouble monitoring the runs afterwards. I don't think it is optimal with Airflow; do you have any advice? These 100k runs are supposed to keep running for several weeks.
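On the monitoring point, one low-effort option when the UI struggles with that many runs is polling the REST API for per-state counts instead. A minimal sketch, assuming the stable REST API is enabled; the base URL, the bearer token, and the `/api/v1` prefix (Airflow 2; Airflow 3 exposes a different prefix) are placeholders:

```python
"""Count DAG runs per state via the Airflow REST API (sketch)."""
import json
import urllib.parse
import urllib.request


def dag_runs_url(base_url: str, dag_id: str, state: str, limit: int = 1) -> str:
    """Build the list-DAG-runs URL filtered to a single state.

    With limit=1 the response is tiny but still carries ``total_entries``,
    which is all we need for counting.
    """
    query = urllib.parse.urlencode({"state": state, "limit": limit})
    return f"{base_url}/api/v1/dags/{urllib.parse.quote(dag_id)}/dagRuns?{query}"


def count_runs(base_url: str, dag_id: str, state: str, token: str) -> int:
    """Return total_entries for one state (one small request per state)."""
    req = urllib.request.Request(
        dag_runs_url(base_url, dag_id, state),
        headers={"Authorization": f"Bearer {token}"},  # auth scheme is an assumption
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total_entries"]


if __name__ == "__main__":
    # "worker" and the URL/token are hypothetical placeholders.
    for state in ("queued", "running", "success", "failed"):
        print(state, count_runs("http://localhost:8080", "worker", state, "TOKEN"))
```

Polling a handful of per-state counts this way scales fine to ~100k runs, since each request returns at most one run record plus the total.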
Thank you for the help,