Stickiness is the key and the trickiest part - especially if you use multiple host names pointing at the same codebase to get around HTTP/1.1's per-host connection limits.
For example, you roll out a new canary on your main domain and sticky a percentage of traffic to it. Fine, requests come in, but you forgot: you have a cdn.example.com subdomain that serves your styles/JS etc., and it is not stickied.
The result is that you either serve old content to the canaries, which isn't great (it would fall under client errors in the blind spot, I suppose), or your global CDN caches old content under your new CDN key/cache buster (because that key came from the initial canary request)... so now you turn on the canary and everyone is getting old styles/JS from the CDN. Boooooo!
GCP offers stickiness if you use their HTTPS load balancer, via a feature called session affinity[0]. And the CDN solution GCP provides also sits behind the HTTPS load balancer, so that sort of problem shouldn't happen (at least for your example, though it would be easy to architect things in a way that session affinity wouldn't fix the problem you described).
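As a sketch of what turning that on looks like (the backend service name here is a placeholder, and GENERATED_COOKIE is one of several affinity modes GCP supports):

```
gcloud compute backend-services update my-backend-service \
    --global \
    --session-affinity=GENERATED_COOKIE \
    --affinity-cookie-ttl=3600
```

With cookie-based affinity, the load balancer issues a cookie on the first response and routes subsequent requests carrying it to the same backend, which is what you'd want so the canary user keeps hitting the canary pool.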
It doesn't look like k8s has features as nice around session affinity; it seems to support only client IP affinity. (I'm not as familiar with k8s, so feel free to correct me.)
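For reference, the built-in option looks like this on a Service (names and ports here are illustrative; `ClientIP` is the only affinity mode besides `None`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  sessionAffinity: ClientIP        # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # how long the pin lasts (default 3h)
  ports:
    - port: 80
      targetPort: 8080
```

Client-IP affinity breaks down behind shared NATs or corporate proxies, which is why cookie- or header-based affinity (usually via an ingress controller or service mesh) tends to be preferred for canarying.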
There are ways out there to support this; it may just depend on your infrastructure.
I prefer the file name because it fails obviously: use the wrong value and you get a 404, whereas many query-string implementations meant you'd silently get a different version than expected.
I’ve seen that cause a lot of confusion because people would look and think it was working correctly until they tested what the backend was actually serving. I like the approach of adding a hash to the file name (e.g. foo.<SHA>.css) so that cannot happen, and related files are grouped together.
Any decent environment should make that simple: your code references foo.css and it’s automatically replaced with the expanded value.
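A minimal sketch of that kind of build step, assuming a simple manifest of asset name to contents (the function names and the 8-character hash length are my own choices, not from any particular build tool):

```python
# Hypothetical build step: rewrite plain references like "foo.css" into
# content-hashed names like "foo.3f5a9c1b.css", so the reference in the
# source stays readable while the served name changes with the contents.
import hashlib
import re


def hashed_name(name: str, contents: bytes) -> str:
    # Derive a short content hash and splice it before the extension.
    digest = hashlib.sha256(contents).hexdigest()[:8]
    stem, dot, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{name}.{digest}"


def rewrite_references(html: str, assets: dict[str, bytes]) -> str:
    # Build a manifest of plain name -> hashed name, then substitute.
    manifest = {name: hashed_name(name, data) for name, data in assets.items()}
    for plain, hashed in manifest.items():
        html = html.replace(plain, hashed)
    return html


page = '<link rel="stylesheet" href="foo.css">'
out = rewrite_references(page, {"foo.css": b"body { color: red }"})
```

Real bundlers do the replacement at the module-graph level rather than with string substitution, but the shape is the same: you write `foo.css`, the build emits and references the hashed name.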
I get it; I meant: are there any advantages to this approach over using a query string? I guess one advantage of changing the filename is you can easily find all the places in your code that refer to the filename without the hash. If you forget the query string with the query-string approach, the code will still look like it's working, which is worse.
It's just allowed us to completely sidestep the whole "cache" and "version number" issues.
Our build system will generate the filenames, so that's not as much of a problem for me. What is a problem is cache dates that are set incorrectly, or set correctly but that I sometimes need to override. And versioning, which can easily "lie", whether intentionally or unintentionally.
With a "content addressable web" style, you don't need to worry. If a file is named `bde1ca6a5d7cefc8108c75fdaad29ed6.js`, you know any file with those contents will always be named `bde1ca6a5d7cefc8108c75fdaad29ed6.js`.
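That guarantee is just a property of hashing the bytes. A minimal sketch (using MD5 only because it produces a 32-hex-character name like the example above; the function name is illustrative):

```python
# "Content addressable" naming: the file name IS the hash of the bytes,
# so identical contents always map to the identical name, and any change
# to the contents produces a different name.
import hashlib


def content_name(contents: bytes, ext: str = "js") -> str:
    return f"{hashlib.md5(contents).hexdigest()}.{ext}"


a = content_name(b"console.log('v1');")
b = content_name(b"console.log('v1');")  # same bytes -> same name as a
c = content_name(b"console.log('v2');")  # different bytes -> different name
```

Because the name is derived from the contents rather than assigned by a human or an RNG, there is no version number to forget and no cache key to get out of sync.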
If you break a build and need to roll back, you will get the same filename, and you can ensure builds are fully reproducible. You won't run into problems if you forget to bump a version one time, or if you roll back to an older version but hotfix a bug and don't set the version number correctly - or even the "my cache-buster RNG gave me the same random number twice in a row and it caused a few users to error out" case that I actually hit once.
And of course it has the advantage of forcing you to use it: like you said, you can't include the file without it, or try to manually patch it in "just this once" (which always leads to more and just causes problems).
Yes, and that's the problem... Think about the request flow in a canary deploy: the user hits the canary home-page HTML, which references a CDN subdomain with a new version string, but that CDN subdomain routes to the old, non-canary version...
User 1 goes to homepage.com and gets served the canary html.
User 1 sees they need the JS file included on the homepage, identified by its hash $hash, and so requests the file from cdn.homepage.com/$hash.
User 1 gets $hash and everything loads fine.
User 2 goes to homepage.com and does not get the canary, so their browser requests $old_hash and gets the old version of the file from cdn.homepage.com/$old_hash, and everything is fine?
Unless you are talking about rolling out a new version of the CDN server along with the main website, I don't see what the issue is here?
Static content, including JS files: as everybody says in this thread, you should use cache busting (add a hash of the file to the filename) regardless of canary/blue-green patterns.
Back-end services: use a cookie or a header. Once a user or device is selected for canarying, they get a special value in a header, and your service router sends their requests to the right set of servers.
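A minimal sketch of that routing idea, assuming a cookie named `deploy-pool` and deterministic hash bucketing (all names and the 10% split are illustrative, not from any particular proxy):

```python
# Cookie-based canary pinning: assign a client to "canary" or "stable"
# once, record it in a cookie, and honor that cookie on every later
# request - so pages and back-end calls from one client stay on one pool.
import hashlib

CANARY_PERCENT = 10  # assumed rollout percentage


def assign_pool(client_id: str) -> str:
    # Deterministic bucketing: hash the client id into 0..99.
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"


def route(cookies: dict[str, str], client_id: str) -> tuple[str, dict[str, str]]:
    # Honor an existing assignment; otherwise assign once and set the cookie.
    pool = cookies.get("deploy-pool") or assign_pool(client_id)
    return pool, {**cookies, "deploy-pool": pool}
```

The key property is that the assignment is sticky: the first response sets the cookie, and every subsequent request from that client routes to the same pool, which avoids the mixed old/new asset problem described upthread.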