Inter-region network transfer can be a real PITA.
My latest Google Cloud invoice was ~40% inter-region transfer costs.
- Why?
- I'm glad you asked!
A thread. https://t.co/p6TZCjiDAj
Yesterday I updated the continuous delivery pipeline to push images to 3 locations (EU + Asia + USA) instead of one (EU), and I already see improvements. https://t.co/48LrGuSQNo
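A minimal sketch of that multi-registry push step. The regional hostnames (eu.gcr.io, us.gcr.io, asia.gcr.io) are Google Container Registry's documented endpoints; the project/image name and tagging-by-SHA scheme are my assumptions, not the actual pipeline:

```python
import subprocess

# Hypothetical image name; swap in your own project/repository.
IMAGE = "my-project/image-charts"
# GCR's regional registry hostnames (EU, US, Asia).
REGISTRIES = ["eu.gcr.io", "us.gcr.io", "asia.gcr.io"]

def regional_targets(image: str, tag: str) -> list[str]:
    """Build the fully qualified image name for each regional registry."""
    return [f"{registry}/{image}:{tag}" for registry in REGISTRIES]

def push_everywhere(image: str, tag: str) -> None:
    """Tag the locally built image once per region and push each tag."""
    local = f"{image}:{tag}"
    for target in regional_targets(image, tag):
        subprocess.run(["docker", "tag", local, target], check=True)
        subprocess.run(["docker", "push", target], check=True)
```

Each cluster then pulls from its nearest registry, so image pulls stay intra-region, which is exactly the transfer cost the invoice was charging for.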
3 improvements to QRCodes: colors, SVG output, no watermark! https://headwayapp.co/image-charts-changelog/3-improvements-to-qrcodes-colors-svg-output-no-watermark!-152316
Large numbers of 5xx turn out to be only 501s.
Nothing wrong with that: it's the recommended way to respond to Microsoft Office Protocol Discovery requests :) https://t.co/WqVYCtrQGX
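The idea, sketched with Python's stdlib http.server (not the actual service's implementation): Office clients probe document hosts with OPTIONS requests, and answering 501 tells them there is nothing to discover.

```python
from http.server import BaseHTTPRequestHandler

class DiscoveryHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Office Protocol Discovery probes arrive as "OPTIONS /".
        # 501 Not Implemented signals "no WebDAV/Office extensions here",
        # so the client gives up instead of retrying other verbs.
        self.send_response(501)
        self.end_headers()

    def do_GET(self):
        # Normal traffic is served as usual.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        # Keep the probe noise out of the request log.
        pass
```

Those 501s are expected noise in the 5xx graph, not errors to fix.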
Oops, I should upload the Docker image not only to the EU registry but also to the America and Asia registries.
Last month's invoice could have been smaller. https://t.co/mzhVom4X0m
. nightly report
- 299 days (9 mo) without downtime
- ~2.1 deployments per day over the last 90 days
- <300ms latency worldwide
- 3 clusters, ~46 nodes at peak time
- Large test suite (unit, integration, visual, acceptance, sec)
- Continuous deployment https://t.co/sIxGZLs6kh
I'm looking for a freelance content writer (with SEO & a technical background) who could write a blog-post series on "How to integrate SaaS with X".
The technical side (add-in & code) is already…
Do you know such a person?
(thanks if you RT)
"Auto scaling" in real life. A thread.
At , I must deal with bursts of 80 000 requests/second that can come from anywhere around the world and answer them in less than 500ms (95th percentile).
. must sometimes handle bursts of 80K req/sec.
It takes the 3 Kubernetes clusters 6 minutes to *completely* 4x their node pool size to handle such a load. Too. Long.
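A back-of-envelope sketch of why 6 minutes is too long, using only the figures from the thread (the even-load-spread and exact-4x assumptions are mine):

```python
BURST_RPS = 80_000      # burst size, from the thread
PEAK_NODES = 46         # total nodes at peak across the 3 clusters
SCALE_UP_MINUTES = 6    # time for the node pools to fully 4x

# Assumption: load spreads evenly, so per-node load at full capacity is
rps_per_node = BURST_RPS / PEAK_NODES              # ~1739 req/s per node

# The pools grow 4x during a burst, so the steady-state fleet is roughly
baseline_nodes = PEAK_NODES // 4                   # ~11 nodes

# During the 6-minute scale-up, the baseline fleet alone would absorb
rps_per_baseline_node = BURST_RPS / baseline_nodes  # ~7273 req/s per node
# i.e. about 4x each node's peak-time load, for up to 6 minutes,
# while still trying to hold the <500ms p95. Hence: Too. Long.
```

This is why reactive node autoscaling alone can't meet a sub-second latency target under sudden bursts: the burst is over, or the SLO is blown, before the new nodes are ready.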