Sync Stripe to Google BigQuery: A Step-by-Step n8n Workflow for Financial Data
The Challenge: Manual Stripe Reporting Doesn't Scale
For any business using Stripe, financial data is the lifeblood of decision-making. You need to understand revenue trends, customer lifetime value, and churn rates. The problem is that getting this data out of Stripe and into a system for analysis is often a manual, time-consuming process.
Exporting CSV files, cleaning them up, and uploading them to a spreadsheet or database is slow, prone to human error, and can't provide the real-time insights a growing business needs. As your transaction volume increases, this manual workflow quickly becomes unsustainable.
This guide solves that problem. We'll show you how to build a robust, automated data pipeline using n8n to sync your Stripe data directly into Google BigQuery. This creates a single source of truth for your financial data, enabling powerful, real-time analytics and business intelligence.
Why Automate Your Stripe to BigQuery Pipeline?
Automating this process is more than a convenience; it's a strategic advantage. An automated pipeline allows you to:
- Eliminate Manual Data Entry: Free up your team from the repetitive task of downloading and uploading CSVs, reducing errors and saving hours of work.
- Enable Real-Time Business Intelligence: Feed live financial data directly into BI tools like Looker Studio, Tableau, or Metabase that are connected to BigQuery.
- Create a Single Source of Truth: Consolidate all your Stripe data (charges, customers, subscriptions, invoices) into one central, queryable location.
- Scale Your Analytics: BigQuery is designed to handle massive datasets, allowing you to run complex queries on years of transaction data in seconds, something impossible with spreadsheets.
Prerequisites
Before you start building, make sure you have the following in place:
- An n8n instance: This workflow works on any n8n plan, including self-hosted or n8n Cloud.
- Stripe Account: You'll need API access to your Stripe account. Find your API keys in the Stripe Dashboard under Developers > API keys.
- Google Cloud Platform (GCP) Project: You need a GCP project with the BigQuery API enabled. You will also need to create a service account with permissions to write data to BigQuery. Follow the Google Cloud documentation to create a service account and download its JSON key file.
The Workflow Blueprint: From Stripe to BigQuery
Our n8n workflow will follow a classic Extract, Transform, Load (ETL) pattern:
- Trigger: The workflow will run on a set schedule (e.g., every night shortly after midnight) to collect the previous day's data.
- Extract: The first step will connect to the Stripe API and fetch all new charges within a specific time frame.
- Transform: The next step will reformat the raw data from Stripe, selecting only the fields we need and ensuring they match the schema of our BigQuery table.
- Load: The final step will take the transformed data and insert it as new rows into our designated BigQuery table.
Example n8n Workflow: Syncing Daily Stripe Charges
Here’s a step-by-step guide to building the workflow to sync daily charges. You can adapt this same pattern for customers, subscriptions, or any other Stripe object.
Step 1: Set Up the Schedule Trigger
Every n8n workflow starts with a trigger. Since we want to sync data periodically, the Schedule node is the perfect choice.
- Add a Schedule node to your canvas.
- Set the Trigger Interval to `Daily`.
- Set the Hour to `0` and Minute to `15`. This will run the workflow every day at 12:15 AM, giving Stripe enough time to process the previous day's transactions.
Step 2: Extract Charges from the Stripe Node
Next, we'll fetch the data from Stripe.
- Add a Stripe node and connect it to the Schedule trigger.
- Create new credentials by providing your Stripe API Secret Key.
- Configure the node's properties:
  - Resource: `Charge`
  - Operation: `List`
- Click Add Option and select Created. This allows us to filter charges by their creation date.
- In the Created field, switch to Expressions and use a JavaScript expression to define a time range for the last 24 hours. A simple way is to use `{{ $now.minus({days: 1}).toISO() }}` for the start date and `{{ $now.toISO() }}` for the end date. For more precise control, you can construct exact timestamps.
- Enable the Return All toggle. This is crucial, as it tells n8n to automatically handle pagination and fetch all charges that match your criteria, not just the first page of results.
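Inside n8n expressions, `$now` is a Luxon `DateTime`, which is why `.minus()` and `.toISO()` work above. If you prefer to construct the window yourself (for example in a Code node), a sketch with plain JavaScript dates looks like this. Note that Stripe's underlying API filters on `created[gte]` / `created[lt]` using Unix timestamps in seconds, so the sketch produces both forms:

```javascript
// Build a 24-hour window ending "now", as both ISO strings and the
// Unix-second timestamps Stripe's raw API uses for its `created` filter.
function lastDayWindow(now = new Date()) {
  const end = now;
  const start = new Date(end.getTime() - 24 * 60 * 60 * 1000); // 24 hours earlier
  return {
    startIso: start.toISOString(),
    endIso: end.toISOString(),
    // created[gte] / created[lt] expect whole seconds since the epoch.
    startUnix: Math.floor(start.getTime() / 1000),
    endUnix: Math.floor(end.getTime() / 1000),
  };
}

const w = lastDayWindow();
console.log(w.startIso, "→", w.endIso);
```

Passing a fixed `Date` instead of the default makes the window reproducible, which is handy when backfilling specific days.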
Step 3: Transform Data with the Set Node
Stripe's API returns a lot of data, but we only need a few key fields for our BigQuery table. The Set node is perfect for selecting and renaming fields to match our database schema.
Let's assume our BigQuery table has columns named charge_id, amount, currency, customer_id, and created_at.
- Add a Set node and connect it to the Stripe node.
- Make sure Keep Only Set is enabled. This will discard any fields we don't explicitly define.
- Add the following key-value pairs, using expressions to pull data from the Stripe node:
  - `charge_id` (String): `{{ $json.id }}`
  - `amount` (Number): `{{ $json.amount / 100 }}` (Stripe amounts are in the smallest currency unit, e.g. cents, so we divide by 100)
  - `currency` (String): `{{ $json.currency }}`
  - `customer_id` (String): `{{ $json.customer }}`
  - `created_at` (String): `{{ new Date($json.created * 1000).toISOString() }}` (Convert Stripe's Unix timestamp to ISO 8601 format)
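If you'd rather do this transformation in a single Code node instead of a Set node, the same mapping can be written as plain JavaScript. This is a sketch under the same assumed five-column schema; note that `customer` can be `null` on a charge (e.g. guest checkouts), which the sketch passes through as-is:

```javascript
// Map a raw Stripe charge object to the assumed BigQuery schema:
// charge_id, amount, currency, customer_id, created_at.
function toRow(charge) {
  return {
    charge_id: charge.id,
    // Stripe reports amounts in the smallest currency unit (e.g. cents).
    amount: charge.amount / 100,
    currency: charge.currency,
    customer_id: charge.customer, // may be null for guest charges
    // Stripe's `created` field is a Unix timestamp in seconds.
    created_at: new Date(charge.created * 1000).toISOString(),
  };
}

// Example with a trimmed-down charge payload.
const row = toRow({
  id: "ch_3MmlLrLkdIwHu7ix0snN0B15",
  amount: 1099,
  currency: "usd",
  customer: null,
  created: 1679090539,
});
console.log(row);
```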
Step 4: Load Data into Google BigQuery
Finally, we'll send our clean data to BigQuery.
- Add a Google BigQuery node and connect it to the Set node.
- Create new credentials using the Service Account option and upload the JSON key file you downloaded from your Google Cloud project.
- Configure the node's properties:
  - Resource: `Table`
  - Operation: `Insert All`
  - Project ID: Your Google Cloud Project ID.
  - Dataset ID: The name of your BigQuery dataset (e.g., `stripe_data`).
  - Table ID: The name of your table (e.g., `charges`).
  - Rows (JSON): Set the expression to `{{ $json }}`. This tells the node to use the entire output from the previous Set node as the data to insert.
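Because the insert fails when a row doesn't match the table schema, it can help to validate rows before they reach the BigQuery node (for example in a Code node placed between Set and BigQuery). A minimal sketch, assuming the five-column schema used above:

```javascript
// Assumed schema of the stripe_data.charges table.
const SCHEMA = {
  charge_id: "string",
  amount: "number",
  currency: "string",
  customer_id: "string",
  created_at: "string",
};

// Return a list of problems for one row; an empty list means it is safe to insert.
function validateRow(row) {
  const problems = [];
  for (const [field, type] of Object.entries(SCHEMA)) {
    if (!(field in row)) {
      problems.push(`missing field: ${field}`);
    } else if (row[field] !== null && typeof row[field] !== type) {
      problems.push(`${field}: expected ${type}, got ${typeof row[field]}`);
    }
  }
  return problems;
}

const good = { charge_id: "ch_1", amount: 25, currency: "usd", customer_id: "cus_A", created_at: "2024-01-02T00:00:00.000Z" };
const bad = { charge_id: "ch_2", amount: "25", currency: "usd" };
console.log(validateRow(good)); // []
console.log(validateRow(bad));
```

Rows that fail validation can be routed to an error branch instead of aborting the whole load.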
Now, activate your workflow. It will run automatically every day, populating your BigQuery charges table with the previous day's transactions without any manual intervention.
Advanced Tips and Considerations
- Historical Data Backfill: To load past data, you can temporarily disable the Schedule trigger and run the workflow manually. Adjust the date expressions in the Stripe node to fetch data for a specific month or year.
- Error Handling: In a production environment, you should add an error route. Connect a node (like Slack or Send Email) to the error output of the BigQuery node to get notified if the data load fails.
- Handling Schema Changes: Before running the workflow, ensure your BigQuery table exists and its schema (column names and data types) perfectly matches the output of your Set node. If Stripe adds a new field you want to track, you'll need to update both the Set node and your BigQuery table schema.
- Preventing Duplicates: This daily sync method is simple and effective. For a more robust, real-time solution, you could use Stripe webhooks as a trigger and add a step to check if a charge ID already exists in BigQuery before inserting to prevent duplicates.
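The duplicate check from the last tip can be done in a Code node: first fetch the charge IDs already loaded for the window from BigQuery (that query step isn't shown here), then filter the incoming items against that set before inserting. A sketch of the filtering step:

```javascript
// Filter out charges whose IDs are already present in BigQuery.
// `existingIds` would come from a prior BigQuery query step
// (e.g. selecting charge_id for the same date window).
function dropDuplicates(charges, existingIds) {
  const seen = new Set(existingIds);
  const fresh = [];
  for (const charge of charges) {
    if (!seen.has(charge.charge_id)) {
      seen.add(charge.charge_id); // also guards against dupes within the batch
      fresh.push(charge);
    }
  }
  return fresh;
}

const incoming = [
  { charge_id: "ch_1", amount: 25 },
  { charge_id: "ch_2", amount: 9.9 },
  { charge_id: "ch_1", amount: 25 }, // duplicate within the batch
];
console.log(dropDuplicates(incoming, ["ch_2"]));
```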
Conclusion
By connecting Stripe and Google BigQuery with n8n, you can build a fully automated, professional-grade financial data pipeline. This workflow eliminates manual work, reduces errors, and unlocks the ability to perform sophisticated, real-time analysis on your most critical business data. You can now build powerful dashboards, forecast revenue, and gain deeper insights into your customers—all powered by automation.