ReadMe powers the API documentation for 5000 companies with only 20 engineers. Of those 20, only 1 engineer—Ryan Park—focuses on infrastructure. A big source of leverage is the managed services they build upon, including Render. Learn why, after 8 years on Heroku, ReadMe chose to migrate to Render—and how they did it with under 2 minutes of downtime: https://lnkd.in/eHaCtCrz
Render’s Post
-
🚀 Ever wondered how to run multiple versions of Rails on Heroku? From understanding the importance of buildpacks to deploying with Gemfile.next—we've got you covered. Dive in:
How to Run Multiple Versions of Rails on Heroku - FastRuby.io | Rails Upgrade Service
fastruby.io
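The Gemfile.next approach linked above can be sketched with Bundler's BUNDLE_GEMFILE switch, a common dual-boot pattern (the article's exact steps may differ; Gemfile.next is assumed to pin the newer Rails):

```shell
# Gemfile.next is assumed to pin the newer Rails version.
# BUNDLE_GEMFILE tells Bundler which Gemfile (and matching lockfile) to use.
BUNDLE_GEMFILE=Gemfile.next bundle install
BUNDLE_GEMFILE=Gemfile.next bundle exec rails test

# The default Gemfile keeps the current Rails version working as before:
bundle install && bundle exec rails test
```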
-
Everybody using Terraform should read and sign this https://opentf.org/ #terraform #hashicorp
The OpenTF Manifesto
opentf.org
-
Founder, CEO @ Mydbops (MySQL | MariaDB | PostgreSQL | MongoDB | TiDB Solutions and Managed Services Provider)
Terraform has been forked to form OpenTF. OpenTF's goal is to ensure Terraform remains truly open source. This open-source initiative already has 4.1K stars on GitHub. #opensource #terraform #hashicorp #communities #infrastructureascode
OpenTF created a fork of Terraform!
opentf.org
-
I decided to change the backend setup for the post-scheduling feature of my first SaaS project.

Previously, I had AWS Lambda functions with EventBridge running every minute to query the scheduled posts and send them to LinkedIn via its API. For separation of concerns, the setup had two Lambda functions: one for text-only posts and one for posts with text and images.

Although AWS is easy to set up and maintain, the monthly cost would add up to a moderate sum. Since my first SaaS project is a side project, I have to bootstrap and think carefully about running costs. My initial goal is to run it under $80/month, including hosting, database, emails, and everything in between.

Therefore, spinning up a VPS and having cron jobs run the post-scheduling functions was the way to go. It's a bit of work setting up the VPS and securing it: adding a limited user, hardening SSH access, editing SSH daemon options, etc. Also, since there's no CloudWatch with this setup, I added logging to each post-scheduling function, plus logrotate to rotate the logs once a day with a 7-day history, so the logs don't get out of hand over time and stay easy to maintain.

Bootstrapping a solo side project is fun! 🔥🚀 Thanks for reading. #buildingpublic
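The cron plus logrotate setup described above can be sketched roughly like this (script names, paths, and schedules are hypothetical, not taken from the post):

```shell
# Hypothetical crontab entries (edit with `crontab -e` as the limited user):
# run each scheduler every minute, appending output to its own log file
* * * * * /home/app/bin/schedule_text_posts  >> /var/log/scheduler/text.log  2>&1
* * * * * /home/app/bin/schedule_image_posts >> /var/log/scheduler/image.log 2>&1

# Hypothetical /etc/logrotate.d/scheduler config:
# rotate daily, keep 7 days of history, compress rotated logs
# /var/log/scheduler/*.log {
#     daily
#     rotate 7
#     compress
#     missingok
#     notifempty
# }
```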
-
Co-Founder, CEO/Head Of Platform Solution @ Code Factory HU | CKA, CKAD, CKS, RHCE, AWS Certified Solutions Architect/Sysops
What's your take on HashiCorp's new license? What do you think the future holds? Will there be any changes? Are you considering a switch from Terraform to another IaC tool? #terraform #license #BusinessSourceLicense #bsl https://lnkd.in/dQzp7MBE
HashiCorp adopts Business Source License
hashicorp.com
-
🚀 Building a Scalable Three-Tier Web Application with Terraform and AWS 🚀

I've developed a Terraform configuration for building a three-tier web application infrastructure on AWS. You can check out the details and code on my GitHub repository.

🛠️ Project Overview
In this project, I utilized Terraform to automate the provisioning of a robust three-tier web application architecture on AWS. This architecture includes the following components:
VPC with public and private subnets
EC2 instances for the web and application tiers
RDS instance for the database tier
Elastic Load Balancer (ELB) to distribute traffic
Auto Scaling Groups (ASG) for scalability
Security Groups to control inbound and outbound traffic

🌟 Key Terraform Features and Functions
1. Loops: I used Terraform's for_each and count to create resources dynamically based on input variables. This helped in managing multiple instances and resources efficiently.
2. Data blocks: Data blocks were used to fetch information about existing resources, keeping the configuration dynamic and adaptable to changes in the AWS environment.
3. Variables: Variables parameterize the configuration, making the infrastructure code reusable and easier to manage. I defined variables for AMI IDs, instance types, and other settings.

☁️ AWS Resources and Components
Virtual Private Cloud (VPC): Configured with public and private subnets to ensure secure separation of resources.
Elastic Load Balancer (ELB): Distributes incoming traffic across multiple EC2 instances in different availability zones.
Auto Scaling Groups (ASG): Automatically adjust the number of EC2 instances based on demand, ensuring high availability.
Relational Database Service (RDS): Provides a managed database instance for the application, ensuring data integrity and reliability.
Security Groups: Configured to allow specific types of traffic and protect the application from unwanted access.

📈 Benefits
Scalability: Auto Scaling Groups ensure the application can handle varying loads.
Maintainability: Terraform's declarative approach makes the infrastructure easy to manage and version control.
Security: Proper use of VPCs, subnets, and security groups keeps the application secure and compliant with best practices.

I'm excited to continue leveraging Terraform and AWS to build efficient, scalable, and secure infrastructure. If you have any questions or feedback, feel free to reach out! https://lnkd.in/g8attUrw

#Terraform #AWS #CloudComputing #InfrastructureAsCode #DevOps #CloudArchitecture #WebDevelopment
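A minimal sketch of the for_each, data block, and variable patterns described above (resource names and values are illustrative, not taken from the linked repository):

```hcl
# Variable: one instance type per tier, so the config is reusable
variable "instance_types" {
  description = "EC2 instance type per tier"
  type        = map(string)
  default = {
    web = "t3.micro"
    app = "t3.small"
  }
}

# Data block: look up an existing AMI instead of hard-coding its ID
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# for_each: one instance per tier, driven by the variable above
resource "aws_instance" "tier" {
  for_each      = var.instance_types
  ami           = data.aws_ami.amazon_linux.id
  instance_type = each.value
  tags = {
    Name = "${each.key}-server"
  }
}
```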
GitHub - ThawThuHan/three-tier-web-app-terraform
github.com
-
A well-rounded and balanced discussion of apply-before-merge vs. apply-after-merge. As usual, other factors impact the final decision.
Over the past few weeks, I've seen quite a few different opinions about using apply-after-merge or apply-before-merge workflows in Terraform and OpenTofu projects. Matt Gowie has yet again unleashed another avalanche of opinions with his recent LinkedIn post, showing how divided the community is about what approach to use 🤗😎 While I believe every organization should assess its circumstances to be able to make a choice, I can easily say that I've rarely seen the apply-before-merge workflow working out without teams running into massive issues over time. For example, PR-level locks are a terrible concept that can only ever work if you don't work with monolithic state files. While this might work for small teams and setups, the problems introduced at scale shouldn't be underestimated. We took some time to summarize our thoughts in our latest blog - we'd love to hear your thoughts on it!
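For context, the two workflows differ in when `terraform apply` runs relative to the merge. An apply-after-merge pipeline can be sketched like this (the CI variable name is hypothetical):

```shell
# Hypothetical CI script; $CI_EVENT is an assumed variable
# set by the CI system to "pull_request" or "push".
terraform init -input=false
if [ "$CI_EVENT" = "pull_request" ]; then
  # On a pull request: plan only, so reviewers see the proposed changes
  terraform plan -input=false
else
  # After merge to the main branch: actually apply the changes
  terraform apply -input=false -auto-approve
fi
```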
Mastering Terraform Workflows: apply-before-merge vs apply-after-merge
terramate.io
-
The time is now!! OpenTF announces fork of Terraform ❤️💪 Here is my contribution, joining the manifesto: https://lnkd.in/eRg9yMC9 Two weeks ago, HashiCorp announced they are changing the license on all their core products, including Terraform, to the Business Source License (BSL). In an attempt to keep Terraform open source, we published the OpenTF manifesto, and the community response was huge! Over 100 companies, 10 projects, and 400 individuals pledged their time and resources to keep Terraform open source. The GitHub repository for the manifesto already has over 2.5k stars, and the number is growing quickly! The manifesto outlined the intent of the OpenTF initiative in two steps. The first was to appeal to HashiCorp to return Terraform to the community and revert the license change for this project. The second, in case the license was not reverted, was to fork the Terraform project as OpenTF. #opensource #terraform #community #devops #infrastructure
OpenTF created a fork of Terraform!
opentf.org
-
Open source is really at the heart of business operations today. HashiCorp has recently decided to follow Red Hat's lead and tried to strike a balance between a commercial license for competing offerings and still benefiting from #OSS adoption. Well, it looks like these strategies don't work well for infrastructure-level software. Now a group of companies is calling for a potential fork and a foundation to support it. Learning from Debian, this is a great model: they've been at it for 30 years and it's still going strong! BTW, here's a great episode of Changelog https://lnkd.in/e3k6gChy about it. It will be interesting to see if it really comes to that. Terraform is the #1 revenue-generating product for HashiCorp. However, the market pitch has been all about building adoption through OSS for all parts of their stack (Vault, etc.). Many businesses and projects use their libraries heavily as well, and the recent licensing decision has created a lot of desire to abandon their ecosystem. So, will they go back? Or risk propping up their share price short term but lose the adoption (standardization) advantage mid to long term? Or will something like BSL become a new normal? https://lnkd.in/e-xB4csQ
OpenTF: Disgruntled HashiCorp Rivals Fork Terraform
thenewstack.io