So how does E2 offer "31% savings compared to N1"?
Whatever it is, there seems to be some kind of disconnect as it's not obvious from their pricing pages and they should provide better transparency on exactly where these savings are.
That should have been made more clear, sorry about that. On a per-second basis without a sustained use discount (so for burstier workloads, autoscaling, etc.) the E2 is ~31% cheaper (therefore the marketing "up to").
Clarifying your example: n1-std-2 is currently 9.5c/hour (which would be $69.35/month without Sustained-Use Discounting), but e2-std-2 is 6.7c/hr regardless of how many hours per month you use it. Which is about 30%.
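To make that arithmetic concrete, here's a rough sketch of the comparison (hourly prices as quoted above; the 730-hour month and the 30% full-month sustained-use discount are the usual GCP conventions, but verify against the current pricing page before relying on these numbers):

```python
# Sketch of the N1-vs-E2 price comparison above, using the thread's figures.
HOURS_PER_MONTH = 730  # GCP bills a month as 730 hours

n1_std_2_hourly = 0.095   # $/hour, on-demand
e2_std_2_hourly = 0.067   # $/hour, flat (no sustained-use discount)

n1_on_demand = n1_std_2_hourly * HOURS_PER_MONTH   # ~$69.35/month
n1_sustained = n1_on_demand * (1 - 0.30)           # ~$48.55 with the full 30% SUD
e2_flat      = e2_std_2_hourly * HOURS_PER_MONTH   # ~$48.91/month, always

print(f"N1 on-demand:  ${n1_on_demand:.2f}/month")
print(f"N1 sustained:  ${n1_sustained:.2f}/month")
print(f"E2 flat:       ${e2_flat:.2f}/month")
print(f"E2 vs N1 on-demand savings: {1 - e2_std_2_hourly / n1_std_2_hourly:.1%}")
```

So for a full month the E2 price lands almost exactly on the N1 sustained-use price, and the ~30% savings only materializes against the undiscounted per-second rate.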
From what I understand about the E2 instances, the point is that there are often a lot of idle CPU cores compared to the vCPUs users have allocated. Combine that observation with live migration and larger host machines, and it becomes possible to over-sell CPU cores in a way that users won't notice except in tiny ways, maybe 1 time in 100 or 1,000. That makes great sense. Larger instances mean you're more likely to absorb spikes, and live migration means you can rebalance the people who are actually using the CPU a lot.
What's the payoff for me? Paying less! Except that without sustained-use discounts, I'm not paying less. What strikes me as odd is that people getting the sustained-use discount are probably the people you'd want on E2 instances. If someone is running a web server and ends up leaving you with a lot of idle CPU, that's really good for Google. That means they're leaving a lot of unused space where you can schedule other VMs. At least to me, it seems like the people most likely to have empty space are the sustained-use people.
Who is likely to have the least unused space? The people paying by the second/hour. If I spin up a VM to do a video encoding task and then terminate it, I'm not leaving a lot of empty CPU that can be filled by other VMs.
That's why I find it so curious that there's no sustained-use discount for the E2 instances. People leaving relatively idle VMs running in a sustained way seems ideal for this kind of scheduling. Lots of companies are going to have workloads that, well, are less than efficient. For example, a task worker that gets around 6 tasks per hour and takes a minute per task. It's leaving around 90% of the requested CPU idle.
I guess the question is: why would anyone running with a sustained-use discount switch from an N1 instance to an E2 instance? The blog article sounds amazing: "we've found lots of CPU you aren't using that we re-use and pass the savings on to you!" Then it seems less fun: yeah, you know how you're running a web server that's idle a lot? We'll re-use all that idle CPU, but you won't get a discount.
The weird thing is that Google is offering a ~30% discount for everything except sustained-use. You've noted the on-demand discount. A 1-year committed E2 price is 30% off the 1-year committed N1 price. The 3-year commitment is the same. So, it doesn't seem to be the case that Google found that there was a lot of idle CPU in on-demand VMs, but not in sustained-use VMs. It could certainly be that my expectation that long-running VMs would use less average CPU than short-running VMs is wrong, but Google isn't pricing it that way for the 1 and 3-year committed pricing.
Given that N2 instances lowered the sustained-use discount from 30% to 20% and the E2 instances have no sustained-use discount, it seems like Google is re-thinking whether it wants to offer sustained-use discounts. That's a pity to me. Sustained-use discounts drew me to Google Cloud over AWS. Google Cloud's offering said to me: "we get that a lot of people are using our VMs in a sustained way for long times and we'll automatically apply a discount for you without requiring you to sit in meetings determining how you want to allocate things." Committed-use discounts were great on top of that, but making sure that people didn't end up paying the on-demand price just because they didn't spend their time pre-allocating capacity was just such a consumer-friendly move and a key pricing differentiator with AWS.
It's also a bit odd that it means the committed-use discounts are so much higher over sustained-use. Like, I save ~10% by going with a 1-year commitment on an N1 and ~36% with a 3-year commitment (compared to just leaving them on). So, there isn't a huge benefit to going with a 1-year commitment on N1 instance (not that 10% can't be very beneficial). On the E2s, it's quite big - a ~37% discount for a 1-year commitment and a ~55% discount for 3-years. Frankly, those seem like AWS reserved-instance numbers and mean I'd really want to pre-allocate if I were going with E2 instances.
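A back-of-envelope sketch of those percentages (the ~37%/~55% commitment discounts are the rounded figures from this thread, not official numbers):

```python
# Savings of a committed price relative to a baseline-discounted price.
def extra_savings(commit_discount, baseline_discount):
    return 1 - (1 - commit_discount) / (1 - baseline_discount)

# N1: 30% sustained-use discount, ~37% 1-yr CUD, ~55% 3-yr CUD off on-demand
print(f"N1 1-yr vs sustained use: {extra_savings(0.37, 0.30):.0%}")  # ~10%
print(f"N1 3-yr vs sustained use: {extra_savings(0.55, 0.30):.0%}")  # ~36%
# E2 has no sustained-use discount, so commitments compare to the flat price
print(f"E2 1-yr vs flat: {extra_savings(0.37, 0.0):.0%}")            # ~37%
print(f"E2 3-yr vs flat: {extra_savings(0.55, 0.0):.0%}")            # ~55%
```

That's why committing looks so much more compelling on E2: with no sustained-use baseline to compare against, the full commitment discount is the marginal savings.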
If the E2 instances got a 30% sustained-use discount like the N1 instances, Google could be undercutting AWS by around 50% for that use case. No pre-planning, no commitment, no meetings where people worry about buying something they won't use. Just half-price. It's basically the same savings you'd get if you did a 3-year commitment at AWS (with zero upfront), but without the pre-planning.
Instead, it makes me wonder if Google is really committed to the sustained-use discounts or if I'll have to start doing extra planning.
I'm not a fan of the manual inefficiencies of AWS's Reserved Pricing, but at least their pricing is clear.
I see all these different prices being floated around but I'm still not clear on how much GCP's compute cost for the most popular scenario of running a website 24/7 would be.
It looks like AWS's t3.large may be the most comparable with 2x vCPU/8GB RAM, which costs $426 for 12 months ($35.50/month).
Or for m5.large (2x vCPU/8GB RAM) the cost is $501 for 12 months ($41.75/month).
What would the n1-std-2 and e2-std-2 compute cost for 12 months be?
Sorry for the lack of clarity (I've tilted at this windmill, and failed).
E2-std-2 != t3.large. A t3.large only has a "baseline performance" of 30% [1]. That's more like our e2-small, though the t3.large has more memory.
Instead, I'd compare e2-std-2 to m5.large like you started to do so. An m5.large on-demand is $.096/hr => $70/month, while the e2-std-2 is $48/month. I think your $41.75/month is from the 1-yr Standard RI (a 40% discount). For that, the most direct comparison would be to use a Committed Use Discount on our side which comes with a similar percentage discount (I can't find this on my phone right now) so that's like $29/month.
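A quick sketch of that comparison; note that the ~40% committed-use discount on the GCP side is my assumption ("similar percentage" per the comment above), so check the pricing page for the actual rate:

```python
# m5.large vs e2-standard-2, monthly, using the hourly rates quoted above.
HOURS = 730
m5_large_od = 0.096 * HOURS              # ~$70/month, on-demand
e2_std_2    = 0.067 * HOURS              # ~$49/month, flat
m5_1yr_ri   = m5_large_od * (1 - 0.40)   # ~$42/month (1-yr Standard RI, 40% off)
e2_1yr_cud  = e2_std_2 * (1 - 0.40)      # ~$29/month, *assumed* similar discount

print(f"m5.large on-demand: ${m5_large_od:.2f}, 1-yr RI: ${m5_1yr_ri:.2f}")
print(f"e2-std-2 flat:      ${e2_std_2:.2f}, 1-yr CUD (assumed): ${e2_1yr_cud:.2f}")
```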
As stated I'm only trying to get a comparison of the cost for the very popular use-case of running a "Website 24/7 for 12 months". I use this simple basic core metric as a baseline for comparing hosting costs amongst different hosting providers.
This is trivial to work out in AWS: I just go to their Reserved Pricing Page [2] and look for the total cost for an m5.large (2x vCPU/8GB RAM) instance for 12 months, which is $501 (12 x $41.75).
I'd like to be able to do the same for GCP. I see the committed usage page [2], but I don't see any easy way to work out the cost for 12 months running 24/7. It mentions things like "discount is up to 57% for most resources like machine types or GPUs", but what does "up to" mean? Is that the discount for running 24/7? So is the e2-std-2 monthly price $48.92 * (1 - .57) = $21.04?
All I see back on GCP's compute pricing page related to "committed usage" is a "1 year commitment price" of "$10.03 / vCPU month", but this says it's for "E2 custom vCPUs and memory". Does this apply to e2-std-2 instances? So is the cost 2 vCPUs x $10.03 = $20.06, plus 8 GB x $1.34 = $10.72, for a total monthly cost of $30.78?
If it's not, how am I supposed to work out what the e2-std-2 cost for 12 months running 24/7 is? It's frustrating that there's no clear, easy way to determine the pricing of a simple and popular hosting scenario like this.
> This is trivial to work out in AWS, I just go to their Reserved Pricing Page [2], look for the total cost for an m5.large (2x VCPU/8GB RAM) instance for 12 months which is $501 (12 x 41.75).
Well, nowadays, you don't buy reserved instances; you buy compute savings plans. And be sure to compare the strengths and limitations of an EC2 Savings Plan versus a general Compute Savings Plan; they have differing characteristics concerning instance type convertibility, regional transfers, and applicable products.
And with Compute Savings Plans, it's not like an RI where you say "I'm paying for one instance upfront, give me 30% off". You instead commit to a level of spend, in dollars per hour. Then they convert that spend into fungible credits that have differing exchange rates depending on the instance type, region, and even compute product. Then, through the magic of the AWS billing system, you save money.
Very rarely do you come out the other end of AWS compute consumption with a good understanding of the exact trace of which dollar went to which compute product. With products like Fargate, it's even worse. Don't get me started on Fargate and its billing characteristics.
I'm nitpicking here. But only because: Nothing is ever as simple as it seems. GCP is just different; I wouldn't classify it as more or less complex.
Yes, and yes. Though, as far as I know, there is no financial reason to buy RIs at this point. Check out this comparison table [1]; you get the same savings, but far more flexibility.
AWS very rarely removes features. People may still buy RIs because they have corporate or technical processes in place where they make sense. But, from a pure financial standpoint, RIs are inferior to Savings Plans.
Is it, really? Last time I checked, AWS charges separately for network traffic, so unless you know exactly how much traffic you will serve over the next 12 months, you can't know how much it will cost.
That's one of the reasons I like simple VPS providers, as the first 1-2 TB/month (typically) is free/included in the base instance price.
I agree that our tables are inconsistent for E2 (I sent an email about this internally, we'll try to get it fixed).
tl;dr: You did calculate it right. $10.03/vCPU/month x 2 vCPUs + $1.34/GB/month x 8 GB for a 1-year commitment, aka $30.78/month (and $22/month for a 3-year commitment). All of the predefined shapes are just "how many vCPUs times $/vCPU, plus how many GBs times $/GB".
Ignoring the lack of columns for commitment pricing in the predefined shapes tables (which some instance types have and some don't... grr), I want to explain the reasoning. One thing that's different for GCP vs AWS is that we actually don't have a SKU for e2-standard-2 or e2-standard-16. We instead do "Resource Based Pricing" [1], and there's only "E2 Instance Core" and "E2 Instance Ram" or whatever.
So if you have 100 e2-standard-2s and 100 e2-standard-16s running for a month, your bill will have two line items:
- 1800 vCPUs times 2.6M seconds of Predefined E2 vCPU
- 7200 GB times 2.6M seconds of Predefined E2 GB Memory
The price for E2 is apparently the same ($10.03/vCPU/month) for both Custom vCPUs as well as Predefined vCPUs [2] (meaning the price for your vCPUs is the same whether you make a custom E2 with 8 vCPUs and 29 GB of memory, or use an e2-standard-8 which has the same 8 vCPUs and 32 GB). Memory is the same (your $1.34/GB/month for a 1-year commitment).
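As a sketch (hypothetical helper function; the rates are the 1-year-commitment figures quoted in this thread, so check the current pricing page for live numbers):

```python
# GCP "Resource Based Pricing" for E2 as described above: no per-shape SKU,
# just a $/vCPU rate and a $/GB rate, summed for whatever shape you run.
E2_VCPU_MONTHLY = 10.03  # $/vCPU/month, 1-yr commitment
E2_GB_MONTHLY   = 1.34   # $/GB/month, 1-yr commitment

def e2_committed_monthly(vcpus, gb):
    return vcpus * E2_VCPU_MONTHLY + gb * E2_GB_MONTHLY

print(f"e2-standard-2: ${e2_committed_monthly(2, 8):.2f}/month")   # ~$30.78
print(f"e2-standard-8: ${e2_committed_monthly(8, 32):.2f}/month")  # ~$123.12
print(f"custom 8/29:   ${e2_committed_monthly(8, 29):.2f}/month")
```

The same function covers custom shapes, which is the point: the 8-vCPU custom machine and the e2-standard-8 pay the identical per-vCPU rate.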
Hope that makes sense. We'll try to improve the docs. Sorry.
This is also trivial to work out on GCP: their cost quote when creating a VM shows the discount, as well as the breakdown of what you're paying for. I'm not sure how much more trivial this could be. That's actually a great thing about GCP - their pricing and resource usage is front and center in the UX, you don't have to chase it down.
I'm trying to compare hosting costs across hosting providers, not at the VM-creation stage. I expected to be able to work out costs from their pricing page, which wasn't clear. But I've since been referred to GCP's pricing calculator, which does make it easy: a single always-on e2-standard-2 instance with a 12-month commitment will cost $30.82/month ($369.84 total) [1].
> A t3.large only has a "baseline performance" of 30%
> Instead, I'd compare e2-std-2 to m5.large like you started to do so.
Yes, a "baseline performance" of 100% would more or less be an M-family instance, but with additional overhead to manage noisy neighbors etc. (and thus T-family is slightly more expensive for the same constant performance than M-family). T-family is specifically for non-constant workloads where CPU over-commit provides value, but without memory over-commit, which can result in highly-variable performance.
Neither T-family nor M-family does "memory stealing", aka memory overcommit, like GCP's E2 seemingly does (but the pricing page doesn't explicitly state this, and your detailed comparisons omit this crucial difference).
So, apples and oranges; you shouldn't really "compare" without benchmarking ...
What is the "baseline" memory GCP E2 instances get?
So is the main point of these new E2 instances to provide VMs for users who do NOT benefit from sustained-use discounts? Does "general purpose VMs" basically mean "VMs that do not benefit from sustained-use discounts"? I'm confused because the article itself says "For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost." So the article is claiming that the whole point is to save money if you do not need big memory or CPU (what I want!). However, what you write above and the prices seem to say that the whole point is to save money if you do not benefit from sustained-use discounts, which is a totally different thing. Help!
I spend nearly $10K/month on Google Compute Engine and get substantial committed use discounts, so these are very important questions for me.
"most demanding" here refers to things like "I want to be sure I'm on a Cascade Lake" or "I want a 3.x GHz processor" or "I want a 96-vCPU instance with hundreds of GBs of memory".
We do expect it to be a good fit for folks doing web serving, and so on. I don't think it's a great fit for you, Bill :).
As I mentioned below, they also have significant savings on the commitment pricing. So if you're spending $10k/month and keeping around the same instances every month (which it seems like you are, based on getting committed-use discounts), then you can save additional money by buying commitments for E2 instances.
I'm never a fan of "up to" marketing discounts, since they almost never apply to me. Clothing goes on sale at a store, "up to" 40%? It's never the garment I was looking at. That one barely moved in price. In this case, I typically run long-lived processes (servers accepting network requests), so not much of a savings here.
And I'm past the Edit window, but for 24x7 workloads you can also use a 1-year or 3-year Committed Use Discount to save ~31% as well for E2 vs N1. The pricing matches for 24x7 w/o a commitment but is otherwise strictly better.
Since you're here:
We currently have a lot of workloads running on N1 machines, which seem comparable cost-wise to E2. Are we likely to see performance improvements shifting to E2, or is the only concern with N1 the fact that it's a previous generation and will presumably suffer limited availability at some point?
The opposite. If you're okay with your current N1 perf, and can use small instances (up to 16 vCPUs), you can probably save up to 30% by moving to E2. If you want a perf boost, you can move from N1 to either N2 or the speedy C2.
Well, from the blog it looked way more dynamic, i.e. I thought that E2 was some form of overcommitment where I pay less when I don't use my full power.
E.g. we use 3x n1-std-2 with the sustained-use discount but basically don't use any power overnight: at night we use ~25% of memory and CPU. It's sad that it's not a cost saving for us. (I mean, it's also impossible to sell/trade our sustained-use discount if we switch from n1-2 to E2.)
I'm not familiar with GCP, what's the difference between Price vs Preemptible price vs Commitment Price? and where did you get the Commitment Pricing from?
Rough comparison. GCP preemptible instances only live 24 hours max and have a set rate, no bidding. GCP commitments are based on CPU/RAM combinations rather than instances. GCP also has automatic sustained use discounts on the normal price if you run an instance > 80% of the month.
Maybe a little off topic: AWS also has savings plans now, which gives a discount based on a commitment of hourly spend on compute. A bit easier to manage than RIs.
You can see the Commitment pricing [1] and a Commitment is a block of vCPUs or RAM you can buy ahead of time and reserve for a year in return for a discount [2].
I just don't see any savings here. Let me give you a little list I put together just to compare. This was put together in 5 minutes, just to get a rough price:
$74.99 MSI B450M PRO-VDH Max
$189.99 AMD Ryzen 5 3600 6-Core, 12-Thread Unlocked Desktop Processor with Wraith Stealth Cooler
$38.99 Ballistix 2x8GB Sport LT DDR4-3200 CL16
$63.99 WD Blue 3D NAND 500GB Internal PC SSD - SATA III 6 Gb/s, M.2 2280, Up to 560 MB/s - WDS500G2B0B
$48.99 Thermaltake Smart BX1 450W Bronze
$59.99 SilverStone Technology Micro-ATX Glass Computer Case PS15B-G
= $476.94
In 10 months you have paid for a new server. That leaves a huge gap before the components' 2-year warranty runs out in which to cover networking costs and electricity. It just doesn't make sense to pay Google for this.
Add $100 for a UPS. And as I said, it's a 5-minute list. I don't really understand what you guys are doing, but the price of self-hosting is almost 3 times cheaper. It is crazy that you pay for that.
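For what it's worth, the break-even claim can be sketched out; the power draw and electricity price below are my own illustrative assumptions, not numbers from this thread:

```python
# Back-of-envelope: months until the self-built box beats renting an
# e2-standard-2 at its flat price, net of (assumed) electricity costs.
parts_cost    = 476.94   # the 5-minute parts list above
ups_cost      = 100.00   # the suggested UPS
cloud_monthly = 48.91    # e2-standard-2 flat price, ~$0.067/hr * 730 hrs

watts, usd_per_kwh = 60, 0.12  # ASSUMED average draw and electricity rate
power_monthly = watts / 1000 * 730 * usd_per_kwh   # ~$5.26/month

months_to_break_even = (parts_cost + ups_cost) / (cloud_monthly - power_monthly)
print(f"break even after ~{months_to_break_even:.1f} months")
```

Under those assumptions the hardware pays for itself in roughly a year, though this ignores the datacenter/labor/redundancy costs raised in the replies below.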
Are you also assuming the datacenter is free too, or are you planning to keep this in a closet? I hope you have HVAC too. And who is going to monitor this hardware? Who is going to assemble it? And sure the components are under warranty, but what happens when it breaks? Do you need to keep hot spares, or are you happy to wait for a replacement to be shipped out? I'm assuming you know instantly what the problem is too? Compare with the cloud version where you can have a new instance up and running in seconds.
I don't know about you, but I can remember the situation 15 years ago as a new startup where a problem with a server meant driving to the datacenter. I can tell you, paying more for a EC2 instance was the definition of a no-brainer.
Of course, if you need only one machine the cloud is probably not for you. If however you can make use of all the hosted options (CloudSQL, Pub/Sub, ML, etc.) and need scalability (in terms of number of machines used) these services are actually useful.
For example, say you run an online shop that has more demand in certain timeframes (like Black Friday or December): you can easily scale your website by spinning up machines, which you can just as easily stop using after the demand flattens out. With your own hardware/colo/traditional web hosting you can't do that, and holding 10x the hardware for short-term traffic spikes makes no sense.
The only problem is that one physical machine can still handle 10 times the spike of a "vCPU" that is shared between multiple customers, for a much lower price. Very strong computers are available for a real bargain, but no, let's use the cloud. It's a buzzword, so it must be good.
It is funny that no one used the most realistic "excuse": that you just have no clue how to do it (install, set up software), and since you can't do it yourself, you need to hire someone. This is the only case where Google does make sense. It's a path into idiocracy, as you'll be able to do less and less yourself by outsourcing know-how (we didn't learn anything from past China experiences, right?), but we won't worry too much about it; until then you'll all be zillionaires and it will no longer matter.
The online shop was an easy enough example. I could have talked about research projects using thousands of machines for a few hours per experiment. Maybe you understand that holding thousands of machines in order to use them once in a while doesn't make much sense.
But hey, being judgmental towards a whole industry running large portions of the web is way more fun.
Rather than a guaranteed core and RAM as with N1/N2, resources for the underlying host can be dynamically balanced through live migrations, which GCP has already been using for years. Cool solution, and should work to save money for most workloads.
It’s a lot easier and safer to scale hosts horizontally than vertically. You can predict the limits and behavior of each host, the VMs/processes on each host don’t need to deal with fundamental resources changing, etc. For services I own that are high availability, require GC tuning, etc., these hosts with dynamic resource adjustments (also T2/T3 in AWS) are a nightmare because the behavior can change at runtime under load, exactly when I want it to behave predictably.
Oh definitely there are valid use cases for these, was just sharing my experience with them for my use cases.
We moved off of T2’s and back to C’s because of the unpredictable behavior under load. IIUC, T3s by default just bill you more instead of CPU throttling, which is a bit better for our use cases, but we haven’t tried them yet.
T3 looks cheaper and better than E2 then; my only problem is region placement, where Iowa and Taiwan are more central than anything AWS offers (still no central US region!?).
I'm in the MMO business, so very specific requirements.
T3 is pretty different (even in unlimited mode) than E2. As an example, t3.xlarge (4 vCPU, 16 GB, $.167/hr, so $.042/hr/vCPU roughly) only has a baseline performance of 40% (so 1.6 vCPU). If you cross that threshold in unlimited mode you pay an additional $.05/vCPU/hr (so more than doubling your cost). By comparison an e2-standard-4 is $.134/hour even if you run it flat out.
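A quick sketch of how that plays out by average utilization (rates as quoted above; simplified, since real T3 billing accrues and spends CPU credits over time rather than charging instantaneously):

```python
# t3.xlarge in unlimited mode vs a flat-priced e2-standard-4, per hour,
# as a function of average CPU utilization across the 4 vCPUs.
def t3_xlarge_hourly(util):
    base, vcpus, baseline, surcharge = 0.167, 4, 0.40, 0.05
    burst_vcpus = max(0.0, (util - baseline) * vcpus)  # vCPU-hours over baseline
    return base + burst_vcpus * surcharge

E2_STANDARD_4 = 0.134  # $/hour, flat regardless of utilization

for util in (0.2, 0.4, 0.7, 1.0):
    print(f"{util:.0%} busy: t3.xlarge ${t3_xlarge_hourly(util):.3f}/hr "
          f"vs e2-standard-4 ${E2_STANDARD_4:.3f}/hr")
```

Below the 40% baseline the T3 wins on this comparison only if you accept the throttling risk; run it flat out and the surcharge pushes it well past the E2 rate.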
We take on the statistical multiplexing over the datacenter and move VMs around, instead of pushing it to you as an economic or performance-throttling risk when you need it most. If you want a burstable type, we do have an e2-{micro, small, medium} that only guarantees you 12.5%, 25% and 50% of your 2 guest-visible vCPUs. But that's more fit for dev workstations and so on.
Sorry if I was unclear. In unlimited mode, if you sustain greater than your baseline percentage, you pay for it (the key point of the sentence you’re quoting is that we take on the risk). One reason for this happens to be because AWS doesn’t do migration (yet?), but instead does an awesome job of doing in-place upgrades (see their talks on Nitro, for example).
We have many tools in our toolbox at our disposal: non-disruptive in-service updates moves live migration from a "must have to operate compute cloud service at all" to "helpful in some scenarios when the workload and/or situation warrants the impact to performance during precopy / potential post-copy phases."
But I would not assume that EC2 does not have that particular tool in the "fully production, and used" toolbox.
I have my doubts, in the past I've received decom-notifications that EC2 was going to be shutting down my instances in the near future due to underlying hardware failure (very helpful, since I was in the middle of triaging why the instance was behaving strangely). Seems like a poor customer experience to reap running instances if live migration is on the table.
T3 instances provide hyperthreaded vCPUs to EC2 instances, and the Nitro Hypervisor uses a core-based scheduler (coscheduler) to ensure that cores are never shared between two EC2 instances.
Upstream Linux kernel changes that are based on some of the changes in the Nitro Hypervisor were posted to lkml in 2018: https://lwn.net/Articles/764482/
I hope to see the GCE team contributing more to the ongoing discussion on core based scheduling!
That doesn't really answer my question: if I have a t3.micro (whose vCPUs do not fill an entire physical core, so they are shared with others), am I guaranteed that both of my instance's cores run on separate physical cores, so that my two cores don't share one physical core?
This is in order to allow my server to continue operating if the steal rate of one core goes through the roof because other instances running on my shared physical core are unexpectedly taking too many resources.
And how does Amazon explain still not having a central region in the US? I mean the multiplayer share of your revenues must be at least 10% by now?
I just managed to get an IONOS instance running in Kansas City (the same distance from the east and west coasts) for, lo and behold, €1/month with unlimited data (18GB SSD and 512MB RAM). How is AWS going to compete with that?
A t3.micro has two vCPUs, where each vCPU is backed by a hyperthread of a physical core. Because the scheduler used by the Nitro Hypervisor does core-based scheduling (see [1]), the two vCPUs will always map to the two threads of a single physical core. You will not run on two separate physical cores at the same time if you have only 2 vCPUs allocated to your T3 instance.
The scheduler can move where your vCPUs run based on available resources.
I can try to explain virtual machine CPU scheduling, but I can't explain when or where AWS will build new regions that have not been announced. :-)
Every search result I can see says that EC2 doesn't do live migration. You can try to balance things but you can only do so much if you can only move a VM when it happens to reboot by itself. (And there's no evidence I can find that they even do that.)
CPU hotplug has been supported for a long time. I once managed some Sun boxes that allowed replacing/upgrading CPUs without shutting down... They don't build em like that anymore.
Yes, but most workloads are fairly unprepared for this sadly. And they're really not ready for memory unplug. (I also miss the days of my multi socket boxes and plugging in CPUs and memory).
What do VM-guest memory-balloon drivers do right now when the host suddenly attempts to reserve more memory than the guest has free? I'd presume the kernel would just consider itself to be in an OOM condition, and start killing processes to free up the memory until it can return OK to the balloon driver, no?
Because, from what I understand, that's closer to the scenario we're talking about here: you're not abruptly yanking DIMMs (like physical memory hotplug); rather, you (the hypervisor) are gracefully letting the guest know that some memory is about to go away, and since you (the hypervisor) have your own virtual TLB, you can let the guest OS decide which "physical" memory (from its perspective) is going away, before it happens.
Linux and Windows have both supported it, but use tends to be at the fringes on mainframe/datacenter machines that are validated for it and so those paths aren't tested on a very wide variety of hardware and running applications. And adding CPUs and memory is one thing but removing is another.
CPU cores being hotplugged on & off was actually super common for a few years, and still is in a lot more devices than you'd expect.
It used to be a corner stone of power management on mobile devices. The Nexus 5, for example, would regularly run with just a single core online, hotplugging the other 3 off until hit with a load and then brought cores back online 1 by 1 as needed.
That behavior still is in some corners of the mobile world, but increasingly less so.
So the CPU hotplug path is as a result actually a lot more battle hardened than you'd expect, and a lot more consumer software than you'd think ran just fine in that setup without noticing.
I presume that this means that E2 instances won't have access to local scratch NVMe, since making use of local scratch NVMe disks currently prevents any feature that requires a live migration, like auto-migration on host maintenance, or modifying the VM's specs while stopped (as you can't stop VMs with local storage, only terminate them permanently.)
"Compute Engine can also live migrate instances with local SSDs attached, moving the VMs along with their local SSD to a new machine in advance of any planned maintenance." [1]
It looks like the play here is to get a bunch of small, committed workloads that GCE can move around where they've got spare capacity. On-demand pricing is very similar to the existing n1 type, but 1yr committed discounts are 30%+ cheaper.
I wish it were simply a flat 30% cheaper. It is very misleading that running 99% of a month is ~30% cheaper while a full month is not, considering that Google Cloud advertises sustained-use discounts everywhere.
Since commenters are saying that the technical post is more interesting, we switched to that from https://cloud.google.com/blog/products/compute/google-comput..., which is the announcement post. Maybe we'll keep the original title so it's clear it's a new thing.
The accompanying technical blog is more interesting than the announcement. It implies they may have ported or adapted Borg’s antagonistic workload scheduling features to cloud. Huge if true, as they say.
They say that the performance is similar to N1 and the price is lower, but they are not talking about the preemptible price; that's more or less the same for both types.
I see these machine types have a virtio balloon memory driver so the host can reclaim memory from the VM.
What's in it for me financially to allow that? Why should I give up memory I've paid for unless I get a discount/refund? That memory is useful even just as a cache for disk pages, so giving it up makes my application slower for no financial benefit.
Your mental model is correct: vCPU means hyperthread (except for shared core things like the f1-micro, g1-small, etc.).
We had a different measure of "relative performance" called GCEU (GCE Units) but stopped publishing that as it's pretty meaningless for most people. We do our platform qualifications at Google to ensure that users who "don't care" which CPU platform they're on get improving performance/$ and so on. But for GCE, we clearly document instead the platforms and base/all-core/single-core frequencies we use [1].
tl;dr: if you want to choose your processor, stick with N2/C2 and our upcoming AMD machine types. If you're okay with us deciding for you and want a big discount, give E2 a spin!
I'm imagining the average workload within GCP VMs to be 95 percent idle time. From oversized VMs, to machines sized for peak loads, to machines where the developer has just used a standard machine size for a 3-seconds-per-week cron job, to machines that are forgotten about and idle, to machines spun up as a hot spare, to machines that are part of build infra and idle between builds and every weekend. There's a lot of idleness.
If machines really are idle 95 percent of the time, why is the price only discounted 30%?
Can somebody help me figure this out?
I want a VPS with 2 cores and 2-4 GB of RAM in Europe. How much would it cost per month?
Also how much does storage cost?
And if I were to, say, put a Minecraft server there, how would it be able to dynamically ramp the machine up and down if needed? Only via the interface? Or after trying to connect to a specific port? I'm not the typical target audience for these kinds of server deals, but I want my small cheap server for myself.
Strangely, the E2 type seems to be available when checked with "gcloud beta compute machine-types list", but not with "gcloud beta compute machine-types list --zones". Launching also doesn't work.
If anyone from GCP is watching, are there updates about the following?
- Global load balancer for UDP.
- GCS signed urls for a prefix instead of only per object.
- Better latency between Europe and India.
I don't wanna distract from the E2 launch, but we definitely have gotten the message on all of those and they're at various stages of in-flight / complete. As an example, Policy Documents should let you do prefix-based matching for GCS: https://cloud.google.com/storage/docs/xml-api/post-object#po...
As someone who just set up services on Google Cloud, I could not be more disappointed and outraged at their billing and performance. It's outrageously high for even small services (and I'm comparing it to Heroku of all places), and the documentation is even worse. Yes, there are examples, but the docs are outdated and make it almost impossible to relate what you're paying to what you're doing until you get the bill.
The $300 credit promo they offer is a joke, it's not $300 in the sense you'll get to try it out, it's that you're likely to rack up at least $300 is bullshit charges before you're even aware...
Google Cloud pricing and prediction is a complete mess. I'm moving all our instances away from gcloud for this reason alone. Their billing prediction can jump up or down 1000% in a few days with no change in usage, and they don't even say sorry for it. But they'll invite Gwen Stefani and drive you to Alcatraz, spending tons of money, instead of hiring engineers who can get the basics of billing right. Their CEO will tell a fairy tale about the best AI for billing... and after a year it doesn't work at all. I'm just not okay with this approach to customers and these priorities.
They should be available to GKE as soon as they are available for the usual VM instances. If you can't launch a node pool, I bet you also can't launch a VM.
new - e2-micro $0.0083 - 1GB 2 cores @ .125 fraction per core?
Pretty cool if your code can use multiple cores efficiently. Especially if each virtual core is guaranteed a separate physical core, this is really good: if one core hits a congestion peak, maybe the other won't.
For less than a buck more per month you get better parallelism and an extra 0.4 GB of RAM!
Sadly, still no pre-purchased committed-use discount for the shared-core instances!
----
For those with big budgets:
old - n1-standard-1 $0.0475 - 3.75GB 1 core
new - e2-standard-2 $0.06701 - 8GB 2 cores (0.00001 really?)
I also wish we had some computation-power comparison metric, so that we could stop comparing apples to bananas without committing.
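For what it's worth, putting those quoted list prices on a per-vCPU basis (my arithmetic only, and it ignores that e2-standard gives 4 GB of RAM per vCPU versus N1's 3.75 GB):

```python
n1_standard_1 = 0.0475     # $/hr, 1 vCPU / 3.75 GB (quoted above)
e2_standard_2 = 0.06701    # $/hr, 2 vCPU / 8 GB (quoted above)

e2_per_vcpu = e2_standard_2 / 2
savings = 1 - e2_per_vcpu / n1_standard_1
print(f"E2 is {savings:.1%} cheaper per on-demand vCPU")  # ~29.5%

# N1 with the full sustained-use discount (30% off for a full month)
# edges slightly below E2, which never gets that discount:
n1_sud_per_vcpu = n1_standard_1 * 0.7
print(n1_sud_per_vcpu < e2_per_vcpu)  # True
```

Which lines up with the "up to 31%" marketing: the full saving only applies where N1 wouldn't have earned a sustained-use discount anyway.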
Replying to myself again: so you will NOT get two physical cores; both hyperthreads will be on the same physical core, so there's no real benefit to these smaller shared instances giving you "2 cores"?
See my comments for feedback from an AWS engineer. I'm guessing that since AWS and GCE now run the same hypervisor, GCE will have the same "feature"?
4 socket boards (or 8) make a lot more sense than those abominations, though. Two entirely separate CPUs in one package for a vastly increased price...
> Flexibility: You can tailor your E2 instance with up to 16 vCPUs and 128 GB of memory. At the same time, you only pay for the resources that you need with 15 new predefined configurations or the ability to use custom machine types.
I didn't think so, but this sentence almost seems to imply that you pay for the performance ceiling when you're using it, but not when your application is idle. It would be nice to have that clarified if it's not what this means.
Or in other words, when your ARM without SMT stalls, the execution resources are wasted, as Xenu intended. Letting another program use the functional units unused by your program is an abomination before Xenu.
No, it's the opposite. An ARM core is much cheaper than an x86 core so you can get more cores for the money. And an ARM core is cheaper than an x86 thread while providing more consistent performance.
For one thing, everyone is painfully aware of Intel's monopoly pricing from a few years ago, before AMD came back to life. To prevent this, three suppliers of CPUs seems like the minimum, so there can be a dynamic of checks and balances. (Two is insufficient, as seen with Intel vs. AMD.)
It doesn't; that's what's great about it. But ARM equipment is generally better value than x86, and I would expect Google to offer it as a more affordable target config (like AWS does).
It seems like this cements Google Cloud's lead on the hardware/infrastructure side. But the real problem with Google Cloud, the lack of software feature parity with AWS, is not addressed. If only there were a provider with AWS's services and reliability and Google Cloud's infrastructure.