Exactly, that's how business works. You calculate the cost of deploying in the new region and tell them: in order for our business to move there instead of using AWS or GCP or any competitor, we need a 3-year commitment at this rate. And they will do it.
One of the things I am working on is moving all our monitoring data into Azure Log Analytics.
This tool looks rather smart; if it can hook into all those services and let me funnel everything into Log Analytics, that would be cool.
Not sure of the value for others, but since we use multiple platforms, our logs are everywhere. It would be nice to connect them all to a Microsoft Log Analytics workspace (LAW), then slowly replace each integration when possible.
Yeah, essentially this. Then have something crawl your database to find the IPs that are hitting your dummy pages, and block those. Most of it is EC2/GCP instances and Azure VMs that people spin up with stolen cards, so you have to block a lot of third-party vendors; OVH and some others came up often. Lots of crawling companies were using end-user VPNs, though, so those are harder to block.
The best thing I found was dummy pages for identifying and blocking the IPs of bad actors. Also, serving different URLs with JS enabled versus disabled, while making your page appear to work without JS.
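A minimal sketch of that dummy-page scan, assuming a combined-style access log; the trap path, log format, and IPs here are all hypothetical:

```python
import re
from collections import Counter

# Hypothetical honeypot path that only appears in dummy pages;
# legitimate users should never request it.
TRAP_PATH = "/internal/trap-page"

# Matches the client IP and request path in a combined-style access log line.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def trap_hits(log_lines):
    """Return IPs that requested the honeypot path, with hit counts."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2).startswith(TRAP_PATH):
            hits[m.group(1)] += 1
    return hits

sample = [
    '203.0.113.7 - - [01/Jan/2024:00:00:01 +0000] "GET /internal/trap-page HTTP/1.1" 200 512',
    '198.51.100.2 - - [01/Jan/2024:00:00:02 +0000] "GET /index.html HTTP/1.1" 200 1024',
    '203.0.113.7 - - [01/Jan/2024:00:00:03 +0000] "GET /internal/trap-page?x=1 HTTP/1.1" 200 512',
]

print(trap_hits(sample))  # Counter({'203.0.113.7': 2}) — only that IP tripped the trap
```

The IPs this surfaces are what you'd feed into your firewall or WAF block list.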
Unfortunately, as good as Cloudflare is, their layer-7 protection isn't going to help you if someone is targeting you specifically.
Cloudflare's layer-7 protection is crap, but it's still orders of magnitude more effective than anything Linode or Hetzner can pull off.
Any major cloud or datacenter can block an old-fashioned UDP flood these days, but botnets have evolved too. Now they speak TLS and HTTP/2, and can send (relatively) small amounts of traffic to select endpoints to generate a large load.
In addition to blocking layer-3 and layer-4 floods, the DDoS mitigation service needs to MITM all your layer-7 traffic in order to determine which requests are legit. Cloudflare can do this (to some extent). AWS WAF can do this. Regular hosting companies can't, unless you use their load balancer and let them manage your TLS keys for you.
If I saturate your uplink with UDP, none of your TCP traffic is going to get through, before you even have a chance to drop the flood at your firewall. You have to get your ISP to do the filtering for you, and hope there isn't too much traffic for their uplink either.
Btw, if you are thinking of doing this: the way I have done it with other vendors is to use Terraform to export their config, then convert the TF data to my intended system, rather than writing code that imports via their API directly. APIs change, and with SaaS vendors the Terraform providers are updated pretty fast to export the required data.
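A rough sketch of the conversion step, assuming you've already pulled the vendor's config into state and dumped it with `terraform show -json`. The resource type, fields, and target format below are all invented for illustration:

```python
import json

# Hypothetical snippet of `terraform show -json` output for a DNS vendor.
# Real output nests resources under values -> root_module -> resources.
tf_state_json = """
{
  "values": {
    "root_module": {
      "resources": [
        {"type": "vendor_dns_record", "name": "www",
         "values": {"name": "www.mysite.com", "type": "CNAME",
                    "content": "lb.example.net", "ttl": 300}}
      ]
    }
  }
}
"""

def to_target_records(state_json):
    """Convert Terraform state JSON into a generic record list for the new system."""
    state = json.loads(state_json)
    records = []
    for res in state["values"]["root_module"]["resources"]:
        if res["type"] == "vendor_dns_record":  # hypothetical resource type
            v = res["values"]
            records.append({"fqdn": v["name"], "kind": v["type"],
                            "target": v["content"], "ttl": v["ttl"]})
    return records

print(to_target_records(tf_state_json))
```

The point is you only parse one stable JSON shape, and the provider maintainers absorb the vendor's API churn for you.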
I completely agree that it has problems. I use Terraform a lot, and compared to nothing, I love it. However, it's so overwrought that it has to be hidden from anyone who isn't in infra for a living.
1. The IaC description should be format-agnostic and transformable (e.g. definable in YAML, JSON, whatever).
2. Something about provider interfaces here, but that area is already super messy, and I'm not sure whether a change would be an improvement or just a shift.
3. State files were the Wild West last time I checked, and there should be a default database backend at minimum. Maybe there is now?
4. Forcing the apply -> state-file cycle as the default requires compute, an interface, and a human all at once. This should have been an abstraction on top of a raw interface suitable for automated use.
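For point 1, the idea is that the resource description is plain data rather than a bespoke DSL, so any format that deserializes to the same structure works. A tiny sketch, with every name invented:

```python
import json

# A hypothetical resource described as plain data, not in a bespoke DSL.
resource = {
    "kind": "instance",
    "name": "web-1",
    "spec": {"image": "debian-12", "size": "small", "region": "us-east"},
}

# Because it is plain data, it round-trips through any serialization format
# (JSON here; YAML would work identically)...
as_json = json.dumps(resource, indent=2)

# ...and can be transformed programmatically before handing it to a provider,
# e.g. a hypothetical policy layer bumping the instance size.
parsed = json.loads(as_json)
parsed["spec"]["size"] = "large"

print(parsed["spec"]["size"])  # large
```

HCL can be converted to and from JSON too, but the transformation story is bolted on rather than the starting point.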
While I agree with you that TF has a lot of issues, the comment isn't helpful. What would you suggest instead? It's kind of a moot point now that the license is fubar'd, but what could be improved to make it better? If you could have a do-over, what would that look like?
Right now there is Pulumi as an alternative that supports different clouds. Otherwise, AWS CDK or Azure Bicep come to mind.
If I could do a do-over, I'd want the solution to look and feel like AWS CDK, but without CloudFormation in the background, and with support for GCP and Azure.
I've worked with CDK for two years now, and being able to define your infrastructure in TypeScript is quite handy; it drastically reduces the effort it takes for new people to learn how our deployments work. It's also quite nice to be able to bundle and deploy the application together with the infrastructure with very little effort.
How? I've always viewed TF as good at anything except bare metal; the best I'd know to do is remote-exec, but at that point you might as well drop to raw shell.
I mean that the only way I can think of to use Terraform to provision bare metal is to remote-exec a shell script (e.g. to `apt install foo`), at which point you might as well skip Terraform and run `ssh targethost apt install foo` or `scp ./my-install-commands.sh root@targethost: && ssh root@targethost sh my-install-commands.sh`.
Sure. That's effectively what Ansible does as well. You could even just have TF call that and be done.
The point that I'm trying to make is that I see a disconnect between deployment and provisioning.
I want both in a single tool (à la Pulumi), even for bare metal. Ideally in a programming language like TypeScript or Go that is easy to get up to speed with, and that wraps up the complexity of getting servers up and running (as well as maintaining them over time).
Currently, how are your clients set up? What are their www and root records pointed to?
For load balancing, all you need to do is CNAME your customer to your firewall/load balancer, so you aren't using A records for this. For example, in Azure, if you spin up a Traffic Manager profile, you get a hostname like "mytrafficmanager.trafficmanager.net", and your CNAME for www.mysite.com would point to mytrafficmanager.trafficmanager.net.
However, in this case, you would also want your customers to point to something like customer.mysite.com, so that if you move from GCP/Azure to something else, you can update that record and migrate them during a failover, an incident, or for any other reason.
Edit: And have customer.mysite.com point to "mytrafficmanager.trafficmanager.net".
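The chain described above can be sketched as a toy CNAME lookup. The zone data mirrors the names in this thread and is entirely invented (real Traffic Manager hostnames do live under trafficmanager.net):

```python
# Toy zone data: each customer's www points at a name you control, which in
# turn points at the load balancer. Swapping the second target migrates
# every customer at once.
CNAMES = {
    "www.customersite.com": "customer.mysite.com",
    "customer.mysite.com": "mytrafficmanager.trafficmanager.net",
}

def resolve(name, cnames):
    """Follow CNAMEs until we reach a name with no further alias."""
    seen = set()
    while name in cnames:
        if name in seen:
            raise ValueError("CNAME loop")
        seen.add(name)
        name = cnames[name]
    return name

print(resolve("www.customersite.com", CNAMES))  # mytrafficmanager.trafficmanager.net
```

The extra hop costs one more lookup but means the customer-facing record never has to change when you switch providers.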
True, spinning up nginx and configuring it yourself is the cheapest option I have come across, with the best enterprise support. It's also available in Azure to deploy instead of using their native tools. However, some people like being cloud-native.