Pipelines & workflows

Run jobs when your stack is safe — including vendors

ServicePulse isn’t only a dashboard. Use your Personal API from Airflow, Dagster, or Prefect to block or branch when third-party services aren’t operational. The same integrations repo also ships CI gates for GitHub Actions, GitLab CI, and Azure Pipelines, Terraform and Argo CD examples, Node and Go clients, outbound-webhook starters, and short recipes.

Why check ServicePulse inside the orchestrator?

  • One dependency check

    Your DAG already knows when internal tasks fail. ServicePulse is the same idea for vendors: Stripe, Snowflake, cloud APIs — normalized status in one call.

  • Less custom glue

    No per-vendor HTTP probes, HTML scraping, or emergency parsers when a status page changes format. Track vendors in ServicePulse; pipelines call GET /api/v1/tracked-vendors.

  • Fail closed before expensive work

    Skip or fail runs before long batch jobs, ML training, or finance closes when critical dependencies are degraded or in maintenance.

  • Same signal as on-call

    Engineers see the same vendor state in the app, alerts, and status page — pipelines don’t drift to a one-off script nobody maintains.

What we ship

Open-source examples plus a tiny Python client (servicepulse-client) that wraps GET /api/v1/tracked-vendors.
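As a sketch of what a gate built on that endpoint can look like: the endpoint path comes from the docs above, but the response shape (a "vendors" list with "name"/"status" fields) and the Bearer-token header are assumptions here — the servicepulse-client README documents the real client API.

```python
# Minimal stack-health gate against GET /api/v1/tracked-vendors.
# Response shape and auth header are assumptions; see servicepulse-client.
import json
import urllib.request

API_URL = "https://servicepulse.dev/api/v1/tracked-vendors"

def fetch_tracked_vendors(api_key: str) -> list[dict]:
    """Return the normalized status of every tracked vendor."""
    req = urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["vendors"]

def stack_is_safe(vendors: list[dict], critical: set[str]) -> bool:
    """True when every critical vendor reports 'operational'."""
    return all(
        v["status"] == "operational" for v in vendors if v["name"] in critical
    )
```

A pipeline would call stack_is_safe() before kicking off expensive work and skip or fail the run when it returns False.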

Apache Airflow

Custom BaseOperator + example DAG (vendor gate → downstream tasks).

Dagster

Resource + job + an optional sensor that fires when a vendor leaves operational.

Prefect

Credentials block + sample flow task that asserts stack health.

Push your pipeline's own status back to ServicePulse

ServicePulse isn't just for reading vendor health — your pipelines can also write their status back. Even when all your upstream vendors are green, your own ETL job, ML pipeline, or data product may be degraded. Use a push endpoint to signal that directly — it factors into the service's displayed status on your status page.

POST https://servicepulse.dev/api/ingest/<token>
Content-Type: application/json

{
  "type": "service_status",
  "serviceId": "<your-service-id>",
  "status": "degraded_performance",
  "title": "ETL pipeline latency elevated",
  "message": "p95 latency exceeds SLA threshold"
}

Valid statuses: operational, degraded_performance, partial_outage, major_outage, maintenance. The pushed status is combined with vendor dependency status — the worst of the two is shown. Works natively with Dagster+ alert webhooks too — see the docs.
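From a pipeline, the push above is one small HTTP call. The endpoint, field names, and status values come from the docs above; the helper names are our own, and the token and service id stay placeholders you fill in.

```python
# Push a pipeline's own status to the ingest endpoint shown above.
import json
import urllib.request

VALID_STATUSES = {
    "operational", "degraded_performance", "partial_outage",
    "major_outage", "maintenance",
}

def build_status_payload(service_id: str, status: str,
                         title: str, message: str) -> dict:
    """Assemble the ingest body, rejecting unknown status values."""
    if status not in VALID_STATUSES:
        raise ValueError(f"Unknown status: {status}")
    return {
        "type": "service_status",
        "serviceId": service_id,
        "status": status,
        "title": title,
        "message": message,
    }

def push_status(token: str, payload: dict) -> None:
    req = urllib.request.Request(
        f"https://servicepulse.dev/api/ingest/{token}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)
```

A typical placement is a final "report" step that runs whether the pipeline succeeded or failed, so the status page reflects both outcomes.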

Git install & Airflow providers

Install servicepulse-client and the orchestrator packages from our public integrations repo with pip, using a Git URL with a subdirectory= fragment. These packages are not published on PyPI; you only install them, and nothing is published from your side. You also don't need an official Apache Airflow provider: we ship servicepulse-airflow, a small package with a custom BaseOperator. See the repo README for Dagster and Prefect.
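The install commands follow pip's standard VCS syntax; <integrations-repo-url> below is a placeholder for the public integrations repo URL, and the subdirectory names are illustrative — check the repo README for the actual package paths.

```shell
# subdirectory= points pip at one package inside the monorepo.
pip install "git+https://<integrations-repo-url>.git#subdirectory=servicepulse-client"
pip install "git+https://<integrations-repo-url>.git#subdirectory=servicepulse-airflow"
```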