Cloud IT

37 Signals Says Cloud Repatriation Plan Has Already Saved It $1 Million (theregister.com) 82

David Heinemeier Hansson, CTO of SaaS project management outfit 37Signals, has posted an update on the cloud repatriation project he's led, writing that it's already saved the company $1 million. The Register: Hansson has previously revealed that his company spent $3.2 million a year on cloud computing, most of it at Amazon Web Services. His repatriation plan called for the company to spend $600,000 on eight meaty servers that each pack 256 virtual CPUs, and have them hosted at an outfit called Deft. That plan was projected to save $7 million over five years. In his Saturday post, Hansson wrote he now thinks he can find $10 million of savings in the same period.

"Our cloud spend is down by 60 percent already... from around $180,000/month to less than $80,000," he wrote, qualifying that the number excludes the cost of Amazon Web Services's Simple Storage Service. "That's a cool million dollars in savings at the yearly run rate, and we have another big drop coming in September, before the remaining spend will petter out through the rest of the year," he added. The CTO revealed that the 37 Signals ops team remains the same size even though it now tends its own hardware, which cost "about half a million dollars."

  • by rh2600 ( 530311 ) on Monday September 18, 2023 @07:18PM (#63859052) Homepage
    This is a solid outcome, and I'm sure 37signals has factored in scaling with growth. AWS and its ilk are great for early, unpredictable growth, and I guess at some point of stability the odds can work out in favor of switching to something in-house.
    • by ctilsie242 ( 4841247 ) on Monday September 18, 2023 @08:23PM (#63859128)

      With cloud storage and backups, there is a tipping point around 1-2 PB where it becomes more efficient to just use tape than a cloud storage provider, mainly because the cloud provider's monthly storage fees only grow as data accumulates, and the amount of data a company stores over the lifetime of its backups (which could be 5-7 years, even with just monthly fulls) can be a lot more than expected, especially if the data changes a lot. With tape, once the data is written onto the cartridge and the cartridge placed on a shelf, the recurring cost of storing it is mainly HVAC and electricity, even factoring in the cost of a large data-media safe for fire protection. If one uses another local vendor, the cost of transporting tapes offsite can be reduced, as there is competition that offers just as good security for less, in some cases.

      Now, this is just backups. Having primary object storage that is accessed often is a different matter. However, if the services accessing the data are not in the same data center as where the data is stored, the ingress/egress fees can be expensive.

      • by sodul ( 833177 )

        Have you looked at the Deep Archive pricing? It is $0.00099 per GB/month, or about 1 cent per GB per month, or around $12.5k/year for 1 PB (slightly more depending on your average file size). The recovery time is under 12 hours. Overall this is quite competitive compared to offsite tapes and a lot more flexible/faster. They even have a Virtual Tape interface to work with your existing backup software. If your data is already in AWS there is no extra networking cost, within the same region, but if from on-prem then you

        • by Njovich ( 553857 )

          Have you looked at the Deep Archive pricing? It is $0.00099 per GB/month, or about 1 cent per GB per month

          $0.00099 per GB is 0.1 cents per GB per month, not 1 cent. That's pretty affordable, but at large storage amounts, tape will still beat that on price even if you store it for just 1 year.
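
          Working that arithmetic out explicitly (the $0.00099/GB-month figure is the list price quoted above; retrieval, request, and egress fees are excluded):

          ```python
          # Deep Archive storage cost for 1 PB at the quoted $0.00099 per
          # GB-month. Excludes retrieval, request, and egress fees, which
          # dominate on restore.
          PRICE_PER_GB_MONTH = 0.00099   # USD, list price quoted above
          PETABYTE_IN_GB = 1_000_000     # decimal PB, as storage is billed

          monthly = PRICE_PER_GB_MONTH * PETABYTE_IN_GB
          print(f"1 PB: ${monthly:,.0f}/month, ${monthly * 12:,.0f}/year")
          # -> 1 PB: $990/month, $11,880/year (~0.1 cents per GB per month)
          ```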

          • Yes, Deep Archive is a good price. However, the time it takes to pull data out, coupled with egress fees and the fact that the data used for the restore is copied to a "hot" bucket, can be expensive. I have seen a single restore cost thousands when using that.

            Deep Archive is "insurance". You have your primary backup copy on disk, a second backup copy on tape, an offsite copy on tape or cloud, and you have Deep Archive as the absolute last resort.

      • by dbialac ( 320955 )
        Tape has the advantage that you have a backup that is very easy to restore and difficult to erase or encrypt should your company get hacked or be a victim of ransomware.
  • by oblom ( 105 ) on Monday September 18, 2023 @07:20PM (#63859054) Homepage

    The fact that AWS is a very profitable business should tell you that there's a built-in margin in the operation. Despite competitive prices obtained via economies of scale, the main advantage of AWS is its flexibility. Meanwhile, a business with a specific need can optimize its operations by not wasting time on unnecessary features.

    • by jhoegl ( 638955 ) on Monday September 18, 2023 @07:55PM (#63859092)
      It literally costs 3x per year what a 3-year server build costs once.

      So, imagine your server build cost ~$350k one time. The cloud cost would be 3x that or more, ~$1 million, but every year.

      So, it would cost $3 million over those 3 years instead of $350k up front.
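
      Spelling out the parent's arithmetic (the $350k build cost and the 3x multiplier are the commenter's assumptions, not measured figures):

      ```python
      # The commenter's own numbers: one-time build vs. "3x per year" cloud.
      server_build_once = 350_000             # USD, one-time, assumed 3-year life
      cloud_per_year = 3 * server_build_once  # "3x that or more", ~$1M/year

      years = 3
      print(f"on-prem, one-time:   ${server_build_once:,}")
      print(f"cloud over {years} years: ${cloud_per_year * years:,}")
      # -> on-prem: $350,000 vs. cloud: $3,150,000 (the "$3 mil vs $350k" above)
      ```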

      Cloud was never cheap, what was cheap was the CTO/CIO direction who bought into the sales tactics.

      AI uses the same hype as "cloud", by the way. So have fun with that bullshit.
      • And you seem to miss his point that it DEPENDS on the business.
        • Bingo. Cloud makes sense for the small to maybe medium operator where the one-off capital costs and staff costs blow your budget. If you're big enough to employ some skilled staff and run enough hardware at 2 sites for sufficient DR, this will be your best option, provided you have the right culture to attract the right staff who can and will do it right.
      • Cloud pricing was based on the idea that things only ran during the 8 hour working day/40 hour work week. If you needed to run something longer than that, it was always cheaper to do it in house on purchased hardware.

        The problem was that the marketing campaigns convinced management and upper management, hook, line, and sinker, that it would save them money. Anyone with a brain could run the math and show that it didn't for a 24-hour production shop, for anything other than a short-term need.
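
        A toy duty-cycle comparison makes the point; both hourly rates below are invented purely for illustration:

        ```python
        # Toy duty-cycle comparison. Both rates are made-up illustrative figures.
        on_demand_per_hour = 0.40  # assumed cloud rate, billed only while running
        owned_per_hour = 0.12      # assumed amortized hardware + power + space

        for hours_per_week in (40, 168):  # office hours vs. a 24-hour shop
            cloud = on_demand_per_hour * hours_per_week
            owned = owned_per_hour * 168  # an owned box costs the same when idle
            print(f"{hours_per_week:3d} h/week: cloud ${cloud:6.2f}, owned ${owned:6.2f}")
        # Cloud wins at 40 h/week; the owned box wins when you run 24/7.
        ```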
        • by stooo ( 2202012 )

          >> Cloud pricing was based on the idea that things only ran during the 8 hour working day/40 hour work week.
          No service shuts down after 18:00

      • If you can build a server and call it a day then AWS wasn't a good decision for you in the first place.

        The Cloud's benefit was never in being cheaper than just buying a bunch of servers. It was in flexibility. Your server is a colossal waste of time and money if I only use it for 4 days a year. Your server is a colossal waste of time and money if it doesn't scale for sudden customer demand. Your server is a colossal waste of time and money if it provides connectivity at one endpoint but your product/service

        • by EvilSS ( 557649 )
          Yep, just doing a lift-and-shift of VMs to cloud, then managing them like you did on-prem, is a recipe for a massive cloud bill. Unfortunately that's what a large number of companies end up doing.
      • by AmiMoJo ( 196126 )

        The server buy cost is only a small part of it. To match what you get in the cloud you need

        - Skilled IT staff to manage and maintain it
        - A hot spare or two
        - Strong corporate security policy that is actually enforced
        - High quality backup system that is regularly tested

        It gets even more expensive if you have more than one location, and need to have a high availability internet connection and VPN for access.

        Cloud servers are not just a box you rent.

        • Comment removed based on user account deletion
          • by AmiMoJo ( 196126 )

            It's not a box. What makes the cloud different to just renting a box is that cloud services are designed to not be tied to any particular machine. If the server that was hosting your application fails, the VM and all the front end configuration changes over to another one.

            The ultimate version of it is microservices, but even for stuff like databases the idea with the cloud is to implement it in a way that handles all the replication and balancing in a seamless way.

            You can reproduce it yourself with your own
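
            A minimal sketch of that idea, the health-check-and-repoint loop that the cloud's front end runs for you (hostnames are hypothetical; real providers do this in the load-balancer/hypervisor layer, not in application code):

            ```python
            # Minimal failover sketch: the front end repoints at the first
            # healthy backend. Hostnames are hypothetical placeholders.
            import socket

            BACKENDS = ["app-1.internal", "app-2.internal"]  # hypothetical

            def healthy(host: str, port: int = 443, timeout: float = 1.0) -> bool:
                """Crude TCP health check: can we open a connection at all?"""
                try:
                    with socket.create_connection((host, port), timeout=timeout):
                        return True
                except OSError:
                    return False

            def pick_backend() -> str:
                """Return the first backend that answers; fail loudly if none do."""
                for host in BACKENDS:
                    if healthy(host):
                        return host
                raise RuntimeError("no healthy backend available")
            ```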

        • by dbialac ( 320955 ) on Tuesday September 19, 2023 @09:29AM (#63860120)
          There's more to it. You lose performance because everything presented to you is on a VM with less memory than the host machine. A better solution I've seen is having your core processing/traffic handled by hardware you own and supplementing it with cloud services that spin up when needed. This gives you the advantage of a cloud service and the cost and performance of bare metal under typical loads.
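
          A rough sketch of that hybrid placement logic, assuming a made-up baseline capacity and per-instance throughput:

          ```python
          # Hybrid "burst to cloud" sketch: serve the baseline on owned
          # hardware and only add cloud capacity above it. All names and
          # figures are illustrative, not a real autoscaling API.
          BAREMETAL_CAPACITY_RPS = 5_000  # assumed capacity of the owned boxes
          CLOUD_RPS_PER_INSTANCE = 1_000  # assumed throughput per cloud instance

          def placement(current_rps: int) -> dict:
              burst = max(0, current_rps - BAREMETAL_CAPACITY_RPS)
              cloud = -(-burst // CLOUD_RPS_PER_INSTANCE)  # ceiling division
              return {"baremetal_rps": min(current_rps, BAREMETAL_CAPACITY_RPS),
                      "cloud_instances": cloud}

          print(placement(3_000))  # {'baremetal_rps': 3000, 'cloud_instances': 0}
          print(placement(8_500))  # {'baremetal_rps': 5000, 'cloud_instances': 4}
          ```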
      • by mjwx ( 966435 )

        It literally costs 3x per year what a 3-year server build costs once.

        So, imagine your server build cost ~$350k one time. The cloud cost would be 3x that or more, ~$1 million, but every year.

        So, it would cost $3 million over those 3 years instead of $350k up front.

        Cloud was never cheap, what was cheap was the CTO/CIO direction who bought into the sales tactics.

        AI uses the same hype as "cloud", by the way. So have fun with that bullshit.

        This.

        I called "declouding" a thing a few years ago when I really looked into pricing at Azure and AWS. Companies sick of the cloud bill will be looking to go back to datacentres they can control.

        I compare cloud providers to budget airlines. If you do everything the airline's way it can be cheap. This means no baggage, the middle seat, no food or drinks, online check-in, advertising and lottery tickets hawked at you. If you want anything different, it starts to cost money. Cloud providers are the same,

      • by dbialac ( 320955 )
        In the early days of AWS I quickly discovered it wasn't a good deal. My first AWS bill for just a MySQL instance and a small server was $80, more than I was paying for bare metal.
      • Housing that server, or say hundreds of them in globally distributed, secure data centers with redundant power and network access costs quite a bit more than the hardware itself. Our dc bills+hardware costs were definitely higher than the AWS costs, otherwise we'd have stayed in the data centers and kept buying new servers every 10 years or so. When we lost 80-90% of traffic due to the pandemic, the AWS bill also went down to a minimum. Leasing microcores is way cheaper than buying new servers and idling th
      • Cloud computing has always been about rapid setup, OpEx over CapEx, and elasticity. For many operations, it means they don't have to wrangle with their own DC at all. This is valuable for startups. Nothing new here; anyone at an established business who thought that teh_cloud would save money in the long run was fooling themselves.

        And with AWS, the billing is so complex and unpredictable that spacetime curves back on itself.

  • by Voyager529 ( 1363959 ) <voyager529@yahoo. c o m> on Monday September 18, 2023 @07:20PM (#63859056)

    The cloud (tm) doesn't *always* make sense. There are a few particular cases where it does:

    --E-mail. Barring some highly specific circumstances or a principled opposition, e-mail is rarely worth self-hosting from either a cost or productivity standpoint.
    --Basic Website Hosting. If a website is the functional equivalent of a Yellow Pages ad (5-page HTML/CSS, Wordpress, Joomla), there's no reason to host it on-prem.
    --Seasonal business. If a business spikes substantially during the holiday season or something, cloud hosting *can* save money if the elasticity is actualized.
    --Nascent startup. If you don't have customers yet, paying monthly to AWS is way more preferable to trying to guess about how many servers to purchase.
    --Geographic redundancy. While this is commonly a nice-to-have, if geographic redundancy is a huge need, it's way easier to use AWS than multiple datacenters.
    --We're-So-Small-That-SaaS-Covers-Everything. Servers aren't worth it for a hair salon or a single-location restaurant or retail boutique.

    If it's outside of this scope, on-premises infrastructure can still compete on cost in most cases.

    • Re:Unsurprising... (Score:4, Informative)

      by jsonn ( 792303 ) on Monday September 18, 2023 @07:34PM (#63859062)
      Can we please stop equating hiring a service with "using the cloud"? There is a fundamental difference between an all-inclusive hosted solution and renting server resources to run the same thing yourself. Most importantly, the latter doesn't actually save you the admin. That's one of the problems with actual cloud offerings.
      • Indeed. A SaaS provider is totally different than cloud.
        I don't use cloud, because I see no benefit in it. But I totally hire server space (VPS) and I pay a SaaS email provider to handle my email for me.
        Way too much effort to host my email myself.
    • Email is not normally a good fit for "the cloud" because heavy random I/O database workloads are expensive to host. Most of the time what large email systems will do is host the front-end on a cloud service but keep the database on premise.
      • Email is not normally a good fit for "the cloud" because heavy random I/O database workloads are expensive to host. Most of the time what large email systems will do is host the front-end on a cloud service but keep the database on premise.

        I don't think we're quite talking about the same thing. I'm referring to something like Microsoft 365 or Google Workspace, rather than rolling your own e-mail server in AWS with a dovecot/postfix implementation. Yes, M365 and GW get unwieldy at a certain scale, but I'd imagine that the overwhelming majority of accounts on either platform are 1,000 mailboxes or less. Colleges and regional governments are probably the most common cases that have more, but by the time those services start having scaling issues, you

    • by sjames ( 1099 )

      Even some of those cases are marginal. If you need IT in the first place (as opposed to outsourcing the whole thing), even a very small operation may benefit from being in-house. If you're that small, a single server sitting on a desk may be all you need for day to day. You can get a good refurb 2U pretty cheap and you already need a business internet connection. Cloud might be a decent backup plan or a contingency plan if your needs grow fast, but with the understanding that it will be a stop-gap while mor

      • Maybe for DR. Seasonality, not so much. The problem is, seasonality … isn't. At least not if you're doing it right.

        You're spending the rest of the year doing integration testing, load testing, QA testing. Gathering metrics and results and making sure you have the horsepower for those seasonal loads.

        And by the time you've done all those, you've got the supposedly-seasonal resources up and running for all those tests. Each one, burnin' dol

    • Re:Unsurprising... (Score:4, Insightful)

      by StevenMaurer ( 115071 ) on Monday September 18, 2023 @07:56PM (#63859096) Homepage

      To add to your list:

      -- Stress testing. If you want to stress test your enterprise app with realistic loads prior to real world use, use horizontally scaled cloud testing to do so.
      -- Prototyping. Certain product initiatives can be tested - even A/B tested - without heavy upfront costs. Just IaC.
      -- C-Suite Warm and Fuzzies. It's often a lot harder to get a new vendor approved than just adjusting what you're paying your cloud provider.
      -- Opportunity cost. Equipment setup only saves money if the people you have doing the work wouldn't deliver better value doing something else.

    • Re:Unsurprising... (Score:4, Insightful)

      by Rosco P. Coltrane ( 209368 ) on Monday September 18, 2023 @08:05PM (#63859102)

      The cloud (tm) doesn't *always* make sense.

      Even when it does make financial sense, one consideration is data sovereignty and control over one's IT infrastructure.

      It should be supremely important to most companies: entrusting any company data to Amazon, Microsoft or Google should be just about as alarming to any CEO as entrusting photos of their kids to Gary Glitter. One would think common sense would have most sensible companies pay extra to prevent any data ending up on their servers. But no: sadly the bean counters always have their way.

      • Or not even allowed.
        As the laws currently stand it is illegal for European companies to put privacy sensitive data on American cloud providers (GDPR)
        • That's a large oversimplification. Schrems II certainly added some roadblocks to transferring the personal data of EU users to the US, but it's not "illegal". EU businesses need to do their due diligence before transferring data to a US cloud provider, but none of what needs to be done to overcome the roadblocks is groundbreaking stuff (do a compliance assessment, encrypt your data where possible, put contractual protections in place, etc.).
    • Re:Unsurprising... (Score:5, Interesting)

      by StormReaver ( 59959 ) on Monday September 18, 2023 @08:17PM (#63859116)

      Barring some highly specific circumstances or a principled opposition, e-mail is rarely worth self-hosting from either a cost or productivity standpoint.

      I self-host my email, and it costs me...carry the 0...zero dollars a month and zero minutes of my time for most months. Whenever I need to add a new email address, it takes me under a minute. It takes under ten seconds if I'm already logged into the server.

      If a website is the functional equivalent of a Yellow Pages ad (5-page HTML/CSS, Wordpress, Joomla), there's no reason to host it on-prem.

      That is exactly the best reason to host it on-premises. The only exception being if you have shitty Internet service, or if you somehow manage to be both small/insignificant and high traffic. For the more common scenario (small/insignificant and low traffic), a small Raspberry Pi or similar small server can handle the job with a 1 Mb/s network partition. You probably wouldn't even need to bother with the partition.

      If you don't have customers yet, paying monthly to AWS is way more preferable to trying to guess about how many servers to purchase.

      This scenario screams on-premises hosting. If you don't have any customers, you need exactly one small server (see the startup scenario above). And you don't need to pay for a service you don't need.

      Geographic redundancy. While this is commonly a nice-to-have, if geographic redundancy is a huge need, it's way easier to use AWS than multiple datacenters.

      If geographic redundancy is a huge need, then you probably already own buildings in distributed locations. That means you're already paying the bulk of the cost of geographic distribution. Adding a large server to each location is peanuts by comparison, even when you consider the staff needed to run the servers. The staff you already have to handle all the cloud problems that crop up are probably already qualified for the job.

      I'm going to stop here. Suffice it to say that our opinions are diametrically opposed.

      • Re:Unsurprising... (Score:4, Interesting)

        by Voyager529 ( 1363959 ) <voyager529@yahoo. c o m> on Monday September 18, 2023 @08:58PM (#63859184)

        I'm up for a bit of friendly deliberation, if you are...

        Barring some highly specific circumstances or a principled opposition, e-mail is rarely worth self-hosting from either a cost or productivity standpoint.

        I self-host my email, and it costs me...carry the 0...zero dollars a month and zero minutes of my time for most months. Whenever I need to add a new email address, it takes me under a minute. It takes under ten seconds if I'm already logged into the server.

        So, a few things to unpack here. Fundamentally, my starting point was more business-centric, rather than individual-centric. If you, personally, host your own e-mail, awesome! I always love seeing people self-host whatever they can!

        Self-hosting e-mail on residential IPs tends to be a losing proposition; it's extremely common for residential IP blocks to end up being summarily sent to spam. Now sure, you can relay through sendgrid or smtp2go or mailjet...but it's still not fun. Also, while you might be blessed with an ISP that doesn't CGNAT you and allows you to have inbound port 25 traffic, others aren't so lucky.

        If a website is the functional equivalent of a Yellow Pages ad (5-page HTML/CSS, Wordpress, Joomla), there's no reason to host it on-prem.

        That is exactly the best reason to host it on-premises. The only exception being if you have shitty Internet service, or if you somehow manage to be both small/insignificant and high traffic. For the more common scenario (small/insignificant and low traffic), a small Raspberry Pi or similar small server can handle the job with a 1 Mb/s network partition. You probably wouldn't even need to bother with the partition.

        I'd submit that you're generally assuming business-class internet with static IP service...which is fine, but if your ISP is anything like the ones near me, the cost of that vs. standard dynamic IP internet service is higher than basic shared hosting and e-mail service over at Namecheap or Godaddy. Now, if you're hosting your website and your mail server, you'd need either five static IPs or a reverse proxy. Easy enough; nginx isn't terribly demanding, but you're at three VMs/servers and it's still more expensive than generic hosting even if your server *and* your time were free.

        If you don't have customers yet, paying monthly to AWS is way more preferable to trying to guess about how many servers to purchase.

        This scenario screams on-premises hosting. If you don't have any customers, you need exactly one small server (see the startup scenario above). And you don't need to pay for a service you don't need.

        I'll concede here slightly in that I shifted the goalposts in my thinking without stipulating that shift. My thinking in this case was a bit closer to a mobile app developer, maybe a game or a niche social network or an app-enabled IoT appliance or something like that - something where customers' ability to access the servers is a core tenet of what is being delivered. A game with 10 players has different resource needs than a game with 1,000 players, or 100,000 players. You don't want to be the next iteration of (ironically) Amazon's New World that had constant server resizing issues any more than you want to assume you'll have a million players and end up with a dozen. We'll agree in terms of a Main Street business that still has non-SaaS options, though.

        Geographic redundancy. While this is commonly a nice-to-have, if geographic redundancy is a huge need, it's way easier to use AWS than multiple datacenters.

        If geographic redundancy is a huge need, then you probably already own buildings in distributed locations. That means you're already paying the bulk of the cost of geographic distribution. Adding a large server to each location is peanuts by comparison, even when you consider the staff needed to run the servers. The staff

      • by Anonymous Coward

        I tried self-hosting email for our small business (~10-20 addresses) and the time spent dealing with spam (and spam filter software) made it completely untenable. Paying a few bucks a month made a lot more sense in our case.
        I have a static site I just host on Cloudflare. Costs me nothing, not even the price of a Raspberry Pi. I don't have to set any hardware up for it.

        If geographic redundancy is a huge need, then you probably already own buildings in distributed locations.

        Lots of businesses don't own, they lease. Also that's a doozy of an assumption. I *want* geographic redundancy for low latency to our nationwide custome

      • While I generally agree with you, let me toss out a counterpoint.

        I ran a small (non-IT) business, a KMU/SME. Email was hosted at our ISP, because I didn't have the knowledge or time to deal with it. It's not as trivial as you imply, especially if your address ever gets spoofed by a spammer, causing trust issues.

        For similar reasons, I put our webshop on EC2. I am perfectly capable of running a physical server, but reliability is worth a lot. If EC2 hardware dies, or needs replacing, the end customer doesn't notice or care. It's one l

      • I self-host my email, and it costs me...carry the 0...zero dollars a month and zero minutes of my time for most months.

        You're the first IT person I've met who works for a company for free. Why would you do that?

    • You forgot random scaling up / down. There's no need to invest in a beefy machine when I only need to run a heavy simulation a few times a year.

  • by Rosco P. Coltrane ( 209368 ) on Monday September 18, 2023 @07:56PM (#63859098)

    Our cloud spend is down by 60 percent already...

    And they save a bundle on gerunds too.

  • by NoWayNoShapeNoForm ( 7060585 ) on Monday September 18, 2023 @08:21PM (#63859124)

    Long ago in the days of the mainframe (and yes, my neckbeard is showing so stop staring at it) ...

    Perot Systems and EDS were 2 very big companies that specialized in outsourcing customers' mainframe operations. That was "the cloud" back then.

    Sometimes that outsourcing took place on the customer's location and sometimes the customer's programs were relocated to EDS or Perot owned mainframes in off-site data centers owned by EDS and Perot.

    Outsourcing mainframe stuff was a profitable business to be in back then. Most customers did not want to pay for all of the staff, licensing, and support costs required to operate a mainframe. The customers that owned their own mainframe hardware, and few bothered going that far, still faced the licensing, staff, and support costs.

    By outsourcing their mainframe operations those companies turned an asset that they may have once depreciated into an annual cost. And then there is the cost savings of not having all of the associated mainframe staff, licensing, and support - not to mention the more hidden costs of electricity & cooling and what-not. One argument commonly used to argue in favor of outsourcing the mainframe to EDS or Perot was "Gain more control over your annual mainframe operating costs."

    EDS and Perot made lots of money outsourcing customer mainframes. EDS and Perot could afford to obtain big honking mainframes where they could timeshare the needs of multiple customers. As long as the customer's contracted performance objectives were met, and thus justified the monthly bill to the customer, EDS & Perot could focus internally on how to cut their internal costs of meeting the contract goals.

    After all, in the mainframe outsourcing business it was All 'bout Da Benjamins Baby !

  • by ctilsie242 ( 4841247 ) on Monday September 18, 2023 @08:31PM (#63859148)

    In bad times, a company can sort of bend the rules: not renewing support contracts on hardware and hoping nothing breaks, using cheap solutions like cobbling together a cast-off desktop machine, stuffing a bunch of drives in it, adding RAID, and using that as a backend server (SPoF and all), skimping on backups, not upgrading to newer versions and staying on obsolete (possibly insecure) revisions of software... maybe even playing fast and loose with licensing. Yes, it will hurt a bit once times get better and vendors demand recertifications for their equipment, if not a round of new purchases, but at least the company made it through the dead times.

    With cloud computing, if a company cannot afford its cloud bill, around three months later the lights get turned off, and it loses access to everything. From what I know, because cloud providers know they control the keys to the city, they won't accept anything less than full payment of back dues before restoring access to the resources used. So, for all intents and purposes, all the stored data, all the computing, all those servers are lost. This will cause a company that might have been able to get by with eBay parts and junker servers, had it kept its stuff in-house, to completely collapse with no way to ever return to a productive state.

    That "cloud cliff" is going to be painful, especially when companies have been going all OpEx and dumping CapEx, even though going OpEx means a 2-10 times greater expenditure each quarter.

  • by BytePusher ( 209961 ) on Monday September 18, 2023 @08:34PM (#63859150) Homepage
    Storage, not S3, but your EBS volumes attached to your EC2 instances. EBS at reasonable IO speeds/limits is very expensive. If you use on-instance storage at all, you're either wasting CPU cycles waiting for IO or spending money on higher-tier EBS storage. Likely, Amazon figured out they can over-subscribe servers if the virtual machines are spending a lot of time in throttled IO operations, and they can over-charge for higher IO throughput. They win twice, you lose.
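
    To make the "you pay twice" point concrete, a rough cost model in the style of gp3-type volume pricing (the rates below are assumptions modeled on published list prices; check your region):

    ```python
    # Rough gp3-style EBS cost sketch. Rates are assumptions modeled on
    # published list prices at one point in time; verify for your region.
    GB_MONTH = 0.08                        # USD per GB-month of capacity
    FREE_IOPS, IOPS_MONTH = 3_000, 0.005   # included IOPS, then per-IOPS rate
    FREE_MBPS, MBPS_MONTH = 125, 0.04      # included MB/s, then per-MB/s rate

    def ebs_monthly(gb: int, iops: int, mbps: int) -> float:
        return (gb * GB_MONTH
                + max(0, iops - FREE_IOPS) * IOPS_MONTH
                + max(0, mbps - FREE_MBPS) * MBPS_MONTH)

    print(f"1 TB baseline:  ${ebs_monthly(1_000, 3_000, 125):,.2f}/month")    # $80.00
    print(f"1 TB fast tier: ${ebs_monthly(1_000, 16_000, 1_000):,.2f}/month") # $180.00
    # Same terabyte, 2.25x the bill once you provision the IO you wanted.
    ```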
  • If you have fixed/baseline compute, host it on your own. You should at least capture some of AWS's margin (granted, their margin might be better due to economies of scale). For the remainder of your compute, use the cloud to scale with demand as needed. This will not work for all. In smaller companies you will incur the additional costs of maintenance, DBAs, SysAdmins, etc.
  • by cloud.pt ( 3412475 ) on Monday September 18, 2023 @08:50PM (#63859178)

    600k on servers... Let me tell you about a small company I have worked in: they bought a used dual-socket Xeon Scalable Silver (nothing more than 16 cores) for 1,000 British pounds on eBay. It came bundled with 4+4 TB disks, a 512 GB SSD and space for another M.2 drive, and a bunch of PCIe slots.

    This company put Proxmox on the server and ran at least 5 different projects on and off with it, along with a backup server for company stuff (TrueNAS-based), a wiki, Jenkins runners, and a bunch of VMs and LXCs for this and that... It costs around $30 of electricity to run per month, and of course probably 5 times that in maintenance from the people who set it up and manage it. But it is CHEAPLY expandable, not only by adding more stuff through PCIe (which they did: for instance, a quad M.2 adapter for 4x more SSDs, a high-RAM GPU for inference/ML, and they were considering a cheap Mellanox card for a disk rack...), but they can also just buy another server or two, set it up with Proxmox, and have High Availability. Yes, it's not a complex VMware cluster with mission-critical HA and scalable redundancy, but it sure as hell beats paying AWS a cool $500+ per month for about HALF the services used in there. And I don't even count the ML stuff.
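
    The ~$30/month electricity figure checks out under plausible assumptions (the average draw and tariff below are guesses):

    ```python
    # Sanity check on the ~$30/month electricity figure. Draw and tariff
    # are assumed values, not measurements.
    avg_draw_watts = 280      # assumed average draw of a loaded 2-socket box
    usd_per_kwh = 0.15        # assumed electricity tariff
    hours_per_month = 24 * 30

    kwh = avg_draw_watts / 1000 * hours_per_month
    print(f"{kwh:.0f} kWh/month -> ${kwh * usd_per_kwh:.2f}/month")
    # -> 202 kWh/month -> $30.24/month
    ```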

    • by labnet ( 457441 )

      I've just purchased 5 x R630s with 2x12-core CPUs, 256 GB RAM, and 10G SFP+ for 500 USD each.
      All up and running on a Prod and Management Proxmox cluster and a redundant Synology SAN.

      Runs about 40 VMs onsite. Build bots, SVN servers, ERP, GitLab, etc.
      The SAN was the most expensive part. And it all runs a treat!

  • by peterww ( 6558522 ) on Monday September 18, 2023 @09:19PM (#63859206)

    The reason companies like these end up spending millions on the cloud is they have no idea how to set and maintain a budget, and no idea how to save costs in the cloud. They just hand the keys over to entire teams and then close their eyes. And, hey, it turns out that if you are able to turn on an unlimited amount of services, you will get billed an unlimited amount!

    With savings plans at any major cloud provider, the cost of infrastructure is only negligibly higher than hardware + service contracts. The higher costs are often from things like managed services and egress.

    So, yeah, if you just buy 8 servers, your billing and cost outlay look really fucking cheap and simple. Because you literally don't get to have any more than that, and you're paying for bare-bones stuff with virtually no support, no API, etc. You get what you pay for: cheap shit that doesn't scale and very few options.

    • This exactly.

      And also because when they have their own hardware, they refuse to maintain or upgrade it when they should. That, of course, saves money in the short term, but it also turns into a maintenance nightmare when they find themselves stuck behind a boatload of ancient servers and server software that they can't afford, and don't have the time, to upgrade.

      • I think this is a big part of it. If you have your own hardware and it runs, that's great. A few years down the line, it's time to replace that hardware. A one-time expense, and management hates big one-time expenses.

        A monthly bill, on the other hand, just becomes a standard part of overheads. Never mind that 36x or 48x a monthly bill may actually be quite a lot more than the periodic upgrade/replacement cost.

        Still, you do have additional staffing costs. Someone has to monitor the hardware, and upgrade it,

  • by jtara ( 133429 ) on Monday September 18, 2023 @09:27PM (#63859212)

    I worked on the first AWS deployments done by Sony for PlayStation game backends. Console games typically have some backend parts, even if they're not multi-player games - e.g. at least for leaderboards, a marketing site, etc. Services are used for both the console itself and the game-specific website.

    It was a big deal when they first did this. Game releases are hit-or-miss. Prior to using AWS, they would order a bunch of servers in advance, and if the game was a hit, they needed to scour the country for more (not always easy), or else they would have depreciating assets lying around. So, AWS helped with capacity and probably saved money.

    A few years later, I was at a Ruby conference, and there was a session presented by a Sony employee. They said that they had gone largely back onsite but this time with an "in-house cloud".

    IBM has had the ability to deploy the same services as IBM Cloud onsite for a few years now. And AWS has also started down that road, but later than IBM.

    Seems it's time for the Great Rehoming for those large organizations that haven't done so yet.

    • by EvilSS ( 557649 )

      They said that they had gone largely back onsite but this time with an "in-house cloud".

      IBM has had the ability to deploy the same services as IBM Cloud onsite for a few years now. And AWS has also started down that road, but later than IBM.

      Azure can do this as well. It's nice because you keep the same skillsets but you own (or lease) the hardware. You can also usually move workloads between on-prem and cloud, or burst into cloud very easily. The downsides are some of the cost models. Azure has two flavors of this and one requires specific hardware and the costs involved in running workloads on it are just stupid. The other, newer offering, is more akin to the traditional on-prem hypervisor model and makes a lot more sense financially.

  • The cost to make the move is probably higher than any savings he'll realize in the life of the capital expense.

    • Cloud is a bit like a marriage. Getting in is fairly easy, but getting out usually costs you an arm and a leg, and if you're not careful, you're probably gonna lose at least half of your stuff.

  • 256 vCPUs? That's nothing.

    By my calculations I can build a 2x more powerful setup for half a million.

    But we are not going to tell him that :)

  • Not apples to apples (Score:4, Interesting)

    by Tony Isaac ( 1301187 ) on Monday September 18, 2023 @11:10PM (#63859292) Homepage

    Did 37 Signals implement DR, such that data is stored in three distinct locations, one of which is in a different geographic region? Do they have redundant power sources, including utility power, and redundant network connections? Do they constantly keep every server OS updated and patched? When your server becomes overloaded, can you easily upgrade? Does their plan include regular hardware updates? Will hardware updates happen invisibly to customers?

    I don't know, maybe the answer to some of these questions is yes. But I doubt they have all these bases covered.

    • by RedK ( 112790 ) on Monday September 18, 2023 @11:31PM (#63859304)

      > Did 37 Signals implement DR, such that data is stored in three distinct locations, one of which is in a different geographic region? Do they have redundant power sources, including utility power, and redundant network connections? Do they constantly keep every server OS updated and patched? When your server becomes overloaded, can you easily upgrade? Does their plan include regular hardware updates? Will hardware updates happen invisibly to customers?

      I mean, AWS/Azure don't do all these things either. If you want georeplicated data, that's extra. Heck, just extra AZs or availability sets for active/active VM setups, that's extra. VNets in different AZs and multiple redundant networking paths? Extra. OS updates? Ahahaha, you betcha that's extra if you use their automation tools. Invisible hardware upgrades? Someone isn't subscribed to Azure Service Health; more like "invisible" with downtime that you can either plan or that they'll just "plan" for you.

      Holding people to a higher standard than your typical blueprinted Web app on the marketplaces is kinda inane. Anyone who has worked with cloud providers knows you get nickel-and-dimed for all those "cloud advantages". Anyone who's worked for a moderately sized enterprise knows that all that stuff is done in the data center too; management is just getting sold on the cloud doing what they've been doing for years.

      Cloud pricing is absolutely expensive. About the only advantage is skipping the acquisitions department and HP and Dell's insane shipping time on hardware.

      • The article doesn't say what level of service they were using from their cloud providers, and it doesn't say what they are implementing on their own hardware, so I can't tell you whether your statements are correct. Yes, many of these features are "extra," but most businesses that use cloud services specifically want and pay for these extras, because it's literally their DR plan. If 37 Signals wasn't paying for these "extras," they were stupid. And if they're not implementing it now on their own hardware, t

    • IT people often talk about "other people's computers" while ignoring that their own time isn't free either. Building a server and ignoring it is a great way of getting hacked, losing data, having outages, etc. The cost of the server compared to the cloud needs to include the cost of your time building and managing the server.

      Sometimes that works out, other times it doesn't. Scroll up a bit and you see someone countering a benefit of the cloud with DIY, saying his email server costs him nothing. He then proce

      • by RedK ( 112790 )

        > IT people often talk about "other people's computer" while ignoring that they work for free as well. Building a server and ignoring it is a great way of getting hacked, dataloss, outages, etc. The cost of the server compared to the cloud needs to include the cost of your lost time building and managing the server.

        How is it different in the Cloud, though?

        That EC2 you spawned, it's not magically going to run dnf update, it's not magically going to not add a user account without a secure password, it's no

    • by njvack ( 646524 )

      Did 37 Signals implement DR, such that data is stored in three distinct locations, one of which is in a different geographic region? Do they have redundant power sources, including utility power, and redundant network connections? Do they constantly keep every server OS updated and patched? When your server becomes overloaded, can you easily upgrade? Does their plan include regular hardware updates? Will hardware updates happen invisibly to customers?

      I don't know, maybe the answer to some of these questions is yes. But I doubt they have all these bases covered.

      I can't answer these questions in detail, but I do know 37 Signals has a very solid ops team. They've written about this on the company blog from time to time, and they built and tested the tools and procedures referenced in the OP.

      They do host their servers at actual data centers, so network and power are outsourced; it's not like their computers are "on-prem" as in "stuffed in a data closet in some Chicago office" or anything.

      From my "running a SaaS" experience: we're as yet much too small for even a half rack at a datac

      • If 37 Signals is as robust as you say, then that's great, but it certainly isn't typical. Still, with their own hardware, they'll be more tempted financially not to upgrade when they should, because those costs change from stable monthly expenses to very large one-time expenses. Most businesses that own their own equipment don't stick to a best-practice upgrade schedule.

        As for migrating off cloud, yes, this is hard. Any migration is hard, including migrating from your own old server to your own new server. And

  • I work for a very small company. 7 people total. When I joined them 5 years ago, they were paying about $3K/mo to rent an underpowered server from a service provider in Arizona. Half of it was unnecessary licensing costs. I immediately kicked them to the curb and moved to Azure. They were very unhappy to see us leave. That $3K was almost all profit. This brought the cost down to about half. A couple of years later, I rented a full cabinet with 1Gbps fiber connection at a local data center for $800.

    • Yes, it is not just cloud vs in-house. There are solutions like OVH and Hetzner that are much more cost-effective than AWS. They provide the hardware, and you manage the software.
  • Oh, it's a gndn web dev company that provides nothing of value.

    Interesting fun facts: started in 2004; in '09 they claimed a value of 100 billion dollars... and in 2021 this still-considered-a-startup, do-nothing money pit made the news after a third of its workforce walked out because of bickering about politics.

  • But not a panacea. And like a tool, you should know when to use it and, at least as important, when not to.

    Cloud services serve a great purpose for three things: testbeds, services with very dramatically changing workloads, and emergency fallbacks. As a testbed, it's great. Spin up a machine, test your stuff, test what you need, run it for a while, spin it down. No initial hardware costs, no worries about what to do with the hardware afterward. And if you have your own container farm, it's even (mostly) a simple t

    • I think this is a really good rundown. The only other addition I'd make is if you've got something that's truly serverless. I've played with Cloudflare Workers, and their ability to stage JavaScript code and pay per execution is quite impressive. You could absolutely build an application that runs entirely in this kind of environment, and for certain types of workload it could be significantly cheaper. Still, it's very hard to migrate an existing product into that architecture, and I'd be cautious about the vend
      • That's pretty much what AWS Lambda is also offering. This is one of the things that benefit from a "pay for the workload" model, where you can have insane workloads at some points and zero at others without the overhead of having to provision processing power for the whole stretch.

        Such a model is great if you have a momentary need for processing power, but it becomes really expensive really fast if you have a moderate workload that doesn't deviate from the average much. In such a case, you're most of the ti
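
        The crossover is easy to sketch: per-execution pricing only beats an always-on instance below some steady request rate. A rough model, with prices assumed in the style of published serverless list pricing:

        ```python
        # Pay-per-execution vs. always-on crossover. All prices are
        # assumptions in the style of serverless list pricing, not quotes.
        REQ_PRICE = 0.20 / 1_000_000   # USD per request
        GB_SEC_PRICE = 0.0000166667    # USD per GB-second of execution
        ALWAYS_ON_MONTHLY = 30.0       # assumed small always-on instance

        def serverless_monthly(req_per_sec: float, mem_gb: float = 0.5,
                               secs_per_req: float = 0.1) -> float:
            requests = req_per_sec * 86_400 * 30
            return requests * (REQ_PRICE + GB_SEC_PRICE * mem_gb * secs_per_req)

        for rps in (0.1, 1, 10, 100):
            print(f"{rps:6.1f} req/s: ${serverless_monthly(rps):9.2f}"
                  f" vs ${ALWAYS_ON_MONTHLY:.2f} always-on")
        # Bursty/low traffic is far cheaper per-execution; steady load is not.
        ```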

  • ...I am not as worried about public cloud threatening our private cloud offerings as my boss is.

    I still say public cloud is great as an overflow container for workloads and storage but you don't want any non-basic services running permanently on it that aren't very elastic anyway.

    I am sure there are use cases where a company profits from putting it all onto AWS or Azure... but I'm also very sure that the percentage of those does not reach two digits.

    What potential customers seem to demand is that one click

  • by ledow ( 319597 )

    Dedicated servers cheaper than rented cloud services, news at 11.

    So long as you're prepared to manage them, isolate them geographically, and ensure that their failover, backup etc. is always in place, then that's fine.

    But I would guess that within a couple of years they'll make even more savings on something like staffing to update those machines, and then it will all go horribly wrong.

    And, I'll be honest, I actually don't like cloud and would far rather deploy in-house and dedicated remote servers any day.

  • This cloud environment could have been a raspberry pi...
  • "We've upped our standards, now up yours."

    This is a dumb example; all we know is he's "saving money," presumably accounting for growth or decline. We don't know the entire business outlook and other factors that drive these decisions. We do know his management wants to save money, so he'll get a gold star and maybe a trip to Aruba out of the deal.

  • "The CTO revealed that the 37 Signals ops team remains the same size even though it now tends its own hardware"

    He's saving money by making his employees manage the hardware for free on top of the jobs they already had.

  • I wonder how they handle latency - or if all their customers are in the US. That's one of the things that cloud providers are useful for - distributing servers so they are close to the users. At least in theory.

  • "Glad to say, our cloud reparations program is running smoothly!"
