Function App Slot swap is failing with 'http ping' failure #8969
Below is the error when a swap is attempted from the Azure Portal:
We have been having a similar issue since 6th Dec 2022. A manual swap did not work either.
@jviau - Any updates on this? The pipelines are broken and it's a critical error. Any suggestions to resolve this would be much appreciated. Thanks.
@spmanjunath I only resolved this error by dropping and re-creating the slot. In your case, it would be the "Staging" slot. I specified …
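For anyone who wants to script the same delete-and-recreate workaround, a minimal sketch using the Azure CLI follows. The app, resource group, and slot names are placeholders (assumptions, not taken from this thread), and this is only a sketch of the workaround described above, not an official fix:

```shell
# Hedged sketch of the drop-and-recreate-slot workaround.
# All names below are placeholders -- substitute your own.
APP="myfunctionapp"
RG="my-resource-group"
SLOT="staging"

if command -v az >/dev/null 2>&1; then
  # Delete the broken slot, then recreate it, cloning configuration from production.
  az functionapp deployment slot delete --name "$APP" --resource-group "$RG" --slot "$SLOT"
  az functionapp deployment slot create --name "$APP" --resource-group "$RG" --slot "$SLOT" \
    --configuration-source "$APP"
else
  echo "az CLI not found; commands shown for reference only"
fi
```

Note that recreating a slot this way loses any slot-specific settings, so capture those first if you depend on them.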
We are also experiencing this same issue.
@JosiahSiegel - Deleting the Staging slot did not resolve the issue.
@spmanjunath Please see the Stack Overflow link below for guidelines; if you still face the issue, please write back to us.
@ramya894 - I had looked into the referenced thread before creating this one. At least 3 pipelines suddenly started failing from 7th Dec; the day before, the same pipelines were working, and some of them have been in place and working for almost a year now.
@ramya894
My team is also impacted by this issue. Our pipelines have been running for over a year and now, all of a sudden, they are failing due to this issue. The Azure Portal shows the deployment succeeded and our code was swapped correctly; it appears the problem may be a communication issue between Azure DevOps and the Azure Portal.
Several of our teams are reporting this issue, and we can also confirm the linked solution regarding …
We have tried all proposed solutions and are also having this issue, though sporadically.
This started happening on 8th Dec for us; we thought it was linked to the App Service planned maintenance that was ongoing, but the issue is still persisting and has become a major problem. It seems to happen for resources in certain subscriptions while working fine in others, all deployed to the same region (North Europe).
Are all functions in a given subscription having this issue for you? Not so for us: some work fine, some do not. I've compared whatever differences there might be and am unable to find any consistency.
For us, not all function apps within a subscription are producing this issue, but if one is, it produces the error consistently. We are mostly seeing the error in East US.
We didn't try all of them (we have 15+), but when we tried to deploy about 7 of them, they all failed.
We have 3 Function apps deployed in North Europe and all of them are failing. They are all on Windows; a couple are deployed on the V3 host and one is on V4.
Same for us: all our Functions in East US are reporting this issue.
Currently seeing this problem in the North Europe region.
Folks, this issue has been identified and the team is actively deploying a patch to address the problem. We'll provide updates here, but this should already be addressed in a number of regions, so a retry is recommended.
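Since the guidance above is to retry, a small retry-with-backoff wrapper can automate that in a pipeline while the rollout completes. This is a sketch, not official guidance; the `az functionapp deployment slot swap` call in the comment is the standard CLI equivalent of the portal swap, and the app/group names there are placeholders:

```shell
# retry: run a command up to $1 times, backing off linearly between attempts.
retry() {
  local max=$1; shift
  local i
  for i in $(seq 1 "$max"); do
    "$@" && return 0
    echo "attempt $i failed; retrying in ${i}s..." >&2
    sleep "$i"
  done
  return 1
}

# In a release step you would wrap the swap itself, e.g. (placeholder names):
#   retry 5 az functionapp deployment slot swap \
#     --name myfunctionapp --resource-group my-rg \
#     --slot staging --target-slot production
retry 3 true && echo "swap succeeded"
```

Linear backoff keeps the example simple; for a regional incident like this one, longer waits between attempts are probably more realistic.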
In which regions is this fixed? We retried this morning and it is still happening in North Europe for us.
@fabiocav Is the patch applied to the North Europe region? In my tests the slot swap is failing.
@fabiocav Is the patch applied to East US 2?
When logged into the Azure Portal you can see which regions still have issues. The problem is that we consumers don't know when the impacted regions will be patched, because the rollout is happening in a "staged fashion". Hopefully soon!?
Thank you @ranandLandmark.
Still not working for …
Please provide a schedule update. Still not working in East US as of Tuesday, Dec 12th, 2pm UTC.
I'm still having problems with isolated instances in East US.
Yes, still not resolved for us in East US.
This issue has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment.
I'm still seeing this issue (Linux, Consumption Plan, EP1). I think this issue should be reopened.
Same, seeing it on isolated instances in East US.
I came across this thread when I was experiencing a similar issue in a DevOps release pipeline.
It turned out the build pipeline used to create the release had the following in it:
Despite specifying the version (and the build completing with no errors), the logs showed:
Notice that the version actually installed was not the one specified. After updating the Docker pipeline step, both the Build and Release pipelines worked correctly and the error went away. I hope this helps someone.
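A generic guard can catch this class of problem early: compare the tool version the agent actually has against the one the pipeline pins, and fail fast on a mismatch. This is a sketch; the version strings and the `docker version` call in the comment are illustrative assumptions, not taken from the (elided) snippets above:

```shell
# check_version: print "ok" when the actual tool version matches the expected
# one, and a diagnostic message otherwise, so the pipeline can fail fast.
check_version() {
  local expected=$1 actual=$2
  if [ "$actual" = "$expected" ]; then
    echo "ok"
  else
    echo "mismatch: expected $expected, got $actual"
  fi
}

# In a real pipeline you would capture the live version first, e.g.:
#   ACTUAL=$(docker version --format '{{.Client.Version}}')
check_version "20.10.22" "20.10.22"
check_version "20.10.22" "20.10.21"
```

Running an assertion like this as the first step of the build makes a silently substituted tool version an obvious, early failure instead of a confusing one at release time.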
With the Azure DevOps build pipeline, we have had the same issue since [email protected]. As a workaround, we use a Bash task with inline code on our hosted agent for Docker 20.10.22 (x64): `chmod 777 /azp/_work/_tool/docker-stable/20.10.22/x64/docker`. Please create a hotfix for DockerInstaller with a higher version than 0.214.0, or add version 0.209.0 to the "Task Version" field of the Azure DevOps release task. Alternatively, change the "Task Version" field from a select box to an input field so that the default value "0.*" can be overridden by the user.
I have several Azure pipelines that were using the task below to perform a slot swap successfully, but these tasks suddenly started failing; I noticed this yesterday (7th Dec '22). The swap was working until a couple of days before that, so I suspect there is a breaking change or a bug causing the issue. Please suggest a solution.
Error:
Investigative information
Please provide the following:
Thanks.
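To separate a task/tooling regression from a platform problem, the swap that the failing DevOps task performs can be attempted directly with the Azure CLI. A sketch with placeholder names (the app and resource group below are assumptions, not from this issue):

```shell
# Placeholder names -- replace with your own app and resource group.
APP="myfunctionapp"
RG="my-resource-group"

if command -v az >/dev/null 2>&1; then
  # Same operation the DevOps task performs: swap staging into production.
  az functionapp deployment slot swap \
    --name "$APP" --resource-group "$RG" \
    --slot staging --target-slot production
else
  echo "az CLI not found; command shown for reference only"
fi
```

If the CLI swap fails with the same 'http ping' error, the problem is on the platform side rather than in the pipeline task, which matches the regional pattern reported in this thread.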