Update - We’re continuing to monitor the ongoing AWS incident. While we’re seeing gradual recovery, Funnel services are not yet performing as expected.

Downloads and exports have started to recover, but some users may still experience delays or errors in the app.

Our team is closely tracking the situation and working to restore full functionality as soon as possible.

Oct 20, 2025 - 16:03 CEST
Update - AWS is reporting continued signs of recovery:

“We continue to observe recovery across most of the affected AWS services. Global services and features that rely on US-EAST-1 have also recovered. We continue to work towards full resolution and will provide updates as we have more information to share.”

As systems stabilize, some Funnel processes may still experience degraded performance, including:

- Access to the Funnel app
- Scheduled exports
- Retrieval of new data

Our team is actively working to restore full performance as quickly as possible. We appreciate your patience as services return to normal.

Oct 20, 2025 - 12:28 CEST
Monitoring - Earlier today, Funnel experienced a period of complete downtime due to a widespread AWS service disruption in the US-EAST-1 region, which impacted multiple AWS services including DynamoDB: https://health.aws.amazon.com/health/status

As a result, the Funnel app was unavailable for approximately two hours.

AWS has since implemented mitigations and reports that most services are recovering. Funnel is now operational again, but you may continue to experience some delays in data source updates and data exports as systems work through remaining backlogs.

We are closely monitoring the situation and continuing to do everything we can to bring Funnel back to its normal performance. We’ll post further updates here as recovery progresses.

We sincerely apologize for the inconvenience and appreciate your patience while we ensure everything is fully restored.
No components were marked as affected by this incident.

Oct 20, 2025 - 11:57 CEST

About This Site

This is the status page for Funnel. It is used for announcing technical issues and posting updates as issues are diagnosed and resolved.

It was put in place in November 2019, which means you will not find information about prior issues.

Funnel: Partial Outage
Oct 20, 2025

Unresolved incident: AWS Outage Impacting Funnel Availability.

Oct 19, 2025

No incidents reported.

Oct 18, 2025

No incidents reported.

Oct 17, 2025

No incidents reported.

Oct 16, 2025

No incidents reported.

Oct 15, 2025

No incidents reported.

Oct 14, 2025

No incidents reported.

Oct 13, 2025

No incidents reported.

Oct 12, 2025

No incidents reported.

Oct 11, 2025

No incidents reported.

Oct 10, 2025
Resolved - The incident affecting Facebook Ads data imports that began on July 17th has now been resolved ✅

Over the past several weeks, a small number of customers experienced inconsistent errors when downloading data from the Facebook Ads API. The issue was unpredictable, affecting different data sources and customers on different days.

Throughout this time, our Connector Team has been actively investigating the issue, testing multiple workarounds, and collaborating closely with Meta’s engineering team to share detailed findings from our side. While several fixes were introduced along the way, the root cause wasn’t fully clear until last week, when our engineers identified the final workaround needed to restore stability.

A permanent fix was deployed to production on October 6th, and since then, all Facebook data sources have been updating reliably again. Any new errors that occur are unrelated to this incident.

We truly appreciate your patience and understanding while we worked through this. We know how important reliable Facebook Ads data is for your reporting and decision-making, and we’re glad to confirm that everything is now back to normal.

Oct 10, 16:49 CEST
Update - We know many of you rely on Facebook Ads data for reporting and decision-making, and we understand how frustrating it is when that data isn’t reliable. We want to be transparent about what’s happening and what we’re doing to help.

Some customers are currently experiencing errors when pulling data from the Facebook Ads API. Unfortunately, these errors are inconsistent: some sources fail on certain days while others continue to work, and the pattern is not predictable. This makes the issue disruptive, even though it affects a relatively small number of customers and sources each day.

Meta’s guidance is to make smaller API requests, but our testing shows that request size is not always the cause: sometimes smaller requests succeed, and other times they don’t. Because of this, we are implementing alternative ways of fetching data and introducing enhanced authentication methods to improve stability.

We are actively working on these changes now and expect to provide an update next week on their effectiveness. In the meantime, if you experience failures, retrying may sometimes resolve them, but we recognize this is not a long-term solution.
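
For illustration only, a minimal sketch of that kind of retry is shown below, assuming a plain Facebook Graph API insights request; the API version, endpoint, fields, and the <AD_ACCOUNT_ID> placeholder are example assumptions, not Funnel's actual connector code.

```python
# Illustrative sketch only: retry an intermittent Ads API pull with
# exponential backoff and jitter. Endpoint, fields, and account ID are
# placeholders for the example, not Funnel's production code.
import random
import time

import requests

# Hypothetical endpoint; <AD_ACCOUNT_ID> stands in for a real ad account ID.
GRAPH_URL = "https://graph.facebook.com/v19.0/act_<AD_ACCOUNT_ID>/insights"


def fetch_insights_with_retry(access_token: str, max_attempts: int = 5) -> dict:
    """Fetch yesterday's insights, retrying transient failures a few times."""
    params = {
        "access_token": access_token,
        "fields": "campaign_name,impressions,spend",  # example fields
        "date_preset": "yesterday",
    }
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.get(GRAPH_URL, params=params, timeout=30)
        except requests.RequestException as exc:  # network-level failure
            last_error = exc
        else:
            if resp.ok:
                return resp.json()
            if resp.status_code < 500 and resp.status_code != 429:
                # Client errors other than rate limiting are unlikely to be
                # transient, so surface them instead of retrying.
                resp.raise_for_status()
            last_error = RuntimeError(f"HTTP {resp.status_code} from Ads API")
        if attempt < max_attempts:
            # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus noise.
            time.sleep(2 ** (attempt - 1) + random.random())
    raise RuntimeError("Ads API request kept failing after retries") from last_error
```

A production pipeline would typically also cap total retry time and log each failed attempt rather than retrying blindly.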

Thank you for your patience while we work to make this more stable. We know how important this data is to your business, and addressing this remains a top priority for us.

Oct 3, 16:36 CEST
Update - We are continuing to monitor this issue and escalate it with our Meta contacts. Together we are actively debugging to find the root cause of the issue.

The issue is continuing to behave non-deterministically, affecting different sources each day. The rate of sources affected per day is fluctuating between a bit less than three in 1,000 and one in 10,000, without any straightforward pattern.

Sep 26, 10:55 CEST
Update - We are continuing to monitor this issue and escalate it with our Meta contacts. Together we are actively debugging to find the root cause of the issue.

The issue is continuing to behave non-deterministically, affecting different sources each day. The rate of sources affected per day is fluctuating between a bit less than one in 1,000 and one in 10,000.

Sep 12, 13:20 CEST
Update - We are continuing to monitor this issue and escalate it with our Meta contacts. The issue is continuing to behave non-deterministically, affecting different sources each day.
There is a general pattern of more sources being affected on Mondays than on Fridays, the opposite of what we saw when the total rate was higher. This indicates that at least one, but not all, of the contributing factors may have been mitigated by Meta.

Aug 27, 12:18 CEST
Update - Today the rate of affected sources halved again, putting it at less than one in every 10,000 sources. We are continuing to monitor this issue and escalate it with our Meta contacts. The issue is continuing to behave non-deterministically, affecting different sources each day.
Aug 15, 19:07 CEST
Update - This week the number of affected sources has been relatively stable at the rate observed at the end of last week: about half the rate seen in previous weeks.

In previous weeks we had also seen fewer sources affected on Mondays than on other weekdays, but that was not true this week. This indicates that a periodic effect has been resolved, while at least one other, non-periodic, effect is still responsible for the residual errors (and that when the incident began we were observing the combined effects).

Aug 14, 16:29 CEST
Update - Today half as many sources were affected as on the previous two days. We are continuing to monitor for further improvements.
Aug 7, 16:12 CEST
Update - We have received additional confirmation on our support ticket with Meta that they are actively investigating the sustained rise in errors we have been seeing since July 17th.

Following the pattern we observed last week, half as many sources were affected on Monday morning. On Tuesday morning we observed around the same rate as for most of last week. This behavior is consistent with a load-dependent error pathway.

Aug 5, 17:11 CEST
Update - We have escalated this issue with internal stakeholders at Meta, in addition to regular support, to expedite remediation. Today fewer than half as many sources are experiencing issues as last Friday, and last weekend many of those sources were able to receive at least partial updates, perhaps due to lower total load on Meta's systems over the weekend.

We understand that many of you are putting together end-of-month reporting, and this has been a disruption to your processes. We are continuing to experiment to see whether there are any additional mitigations we can apply from our side, while also continuing to make our normal requests to the Meta servers for updated data.

Aug 1, 17:03 CEST
Update - Fewer than one in ten thousand sources have been persistently affected by this issue. The other sources affected by this issue (around one in every thousand) have been affected intermittently, receiving some data updates, but not as many as usual.

We are continuing to request data for all affected sources, but the Facebook Ads API is not returning the data reliably. We are actively collaborating with Meta's support team, providing detailed debug logs and source examples to assist in their investigation.

Of the sources affected on Friday, more than 85% have returned to normal operations, and the total number of delayed sources has been reduced by over 30%.

Jul 30, 11:43 CEST
Update - Meta has responded to our ticket, requesting additional information to aid in debugging the issue from their side.
Jul 29, 10:16 CEST
Update - We implemented several mitigations on Friday, and today 76% of the affected sources have returned to normal operations. We are still investigating further mitigations to remediate the remaining affected sources. We filed a bug with Meta on Friday to better understand the API behavior change, but Meta has not yet responded.
Jul 28, 12:03 CEST
Investigating - Starting on July 17th, we noticed an issue affecting less than one in every thousand Facebook Ads sources. The issue results in failed data pulls.

This sort of data-pull issue usually resolves on its own within one to two days; however, this issue has persisted for over a week. The issue is intermittent, with some sources downloading data for one data pull and not the next.

More than 85% of the affected sources have been able to update data in the last two days, and more than 95% have been able to pull updated data in the last three days. However, these updates are not as reliable or consistent as usual, so affected sources are experiencing delays.

We are currently investigating the cause of the issue, but so far we have not found any unique traits shared only by the affected data sources, nor any differences in our environment between the runs that succeed and those that fail, that would point to the source of the non-deterministic behavior.

Jul 25, 14:54 CEST
Oct 9, 2025

No incidents reported.

Oct 8, 2025

No incidents reported.

Oct 7, 2025

No incidents reported.

Oct 6, 2025

No incidents reported.