Reddit DevOps. #devops Thanks @reddit2telegram and @r_channels
Are there any good Cloud playgrounds? I find many of them are highly restrictive on what you can build
For example, look at KodeKloud. There are so many restrictions that you can't build real-world architectures.
https://redd.it/1ebh7y0
@r_devops
How are you guys dealing with GitHub "monorepo" releases?
I joined a profitable company about 1 month ago and one of my first tasks was to improve the release process. Like most late-2010s companies, microservices were a thing there, and they had 8 git repositories that had to be released once every 2 weeks, and the release manager used to struggle a lot. The team is also made of 5 devs and 2 QA, so 8 repositories is really quite an overhead.
To make things easier I proposed we move everything to a single git repository with 8 root folders and each project would just be a folder within the repository. Not really trying to build any complexity with shared node modules or anything, just 8 folders instead of 8 repositories. Release manager loves me for it because now he has 1 repo to manage instead of 8.
Another thing about it is that they have develop, staging and production. It used to be 3 branches, but they complained a lot about getting git conflicts while moving things from dev to staging and from staging to prod. I worked on using git tags instead of branches, which dropped the conflicts by 99%, and devs are happy with it.
A new set of issues arrived:
- no way to release only what has changed. Every git tag needs to release all 8 projects. GitHub seems extremely lacking in its ability to compare a new "staging" tag with the previous staging tag to figure out which folders have changes and release only those workflows (see the sketch after this list).
- no easy way to promote a staging tag to a production tag. The GitHub release page only allows selecting an existing branch or recent commits as the tag. Since the staging release is cut off one week ahead of the production release, by release day it's no longer a recent commit. We are forced to grab the tag with the latest staging release, make a branch out of it, and then tag production using that branch. This is less of an issue as it's just an annoyance (compared to the high cost of the previous issue).
- no way to require approval/review of a git tag. Basically, whoever has access to create a tag on the repository could just as easily create a personal branch, delete a bunch of code, push that branch to GitHub, and then tag it as the latest release for production.
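For the first two bullets, a sketch of one way out (GitHub Actions assumed, since the repo lives on GitHub; the staging-* tag scheme and the release step are placeholders): diff the new staging tag against the previous one and fan the release out only to the folders that changed.

# Sketch: on a new staging-* tag, compare against the previous staging tag
# and release only the top-level folders that actually changed.
name: release-changed-projects
on:
  push:
    tags: ["staging-*"]
jobs:
  detect:
    runs-on: ubuntu-latest
    outputs:
      projects: ${{ steps.diff.outputs.projects }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history, so older tags are available
      - id: diff
        run: |
          # previous staging tag = second-newest tag matching the pattern
          prev=$(git tag --list 'staging-*' --sort=-creatordate | sed -n 2p)
          # changed top-level folders, emitted as a JSON array for the matrix
          projects=$(git diff --name-only "$prev" "$GITHUB_REF_NAME" \
            | cut -d/ -f1 | sort -u | jq -R . | jq -cs .)
          echo "projects=$projects" >> "$GITHUB_OUTPUT"
  release:
    needs: detect
    runs-on: ubuntu-latest
    strategy:
      matrix:
        project: ${{ fromJson(needs.detect.outputs.projects) }}
    steps:
      - uses: actions/checkout@v4
      - run: ./release.sh "${{ matrix.project }}"   # placeholder release step

For the second bullet specifically: a tag can be created directly from another tag's commit, so a small workflow_dispatch job running `tag="prod-$(date +%F)"; git tag "$tag" <staging-tag>; git push origin "$tag"` avoids the temporary branch. For the third, GitHub environments with required reviewers can gate the deployment jobs themselves, even if tag creation stays unrestricted.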
Overall, these are somewhat new issues for me because I spent most of my career working with only a develop and a main branch (dev and prod), with PRs and required review for main. Git conflicts didn't happen because everything was in develop at all times. The introduction of a 3rd stage and the lag between releasing to staging and production creates some git history shenanigans that are annoying to cope with, and git tags seem really terrible for managing releases (from a dev and DevOps perspective).
What are you guys doing to manage 3 release pipelines?
https://redd.it/1ebi0e2
@r_devops
knowledge of kube, and a handful of devs who worked in an already established cluster, we have nothing. And no time for learning it: we gotta deliver by next calendar year.
My gut tells me we're going head first into a wall, and that it's probably gonna be my job to run around everywhere with a huge roll of tape, but I've come round to realizing that's what a DevOps engineer is to most managers and devs.
I'd still like to hear more experienced views on the matter, though: am I gonna make it out alive without kube?
https://redd.it/1eb7dj0
@r_devops
Rant Is devops always so tedious and annoying?
Hello!
This is a rant; there will be mistakes in this text because I typed it fast and English is not my primary language.
I'm always trying to debug some obscure random errors with little to no documentation. I find devops so annoying. I'm currently hosting my websites on Azure Static Web Apps (basic React app with Vite and a few Azure Functions) and there are always bugs. I'm using their official GitHub Action pipeline to auto deploy when I push in the main branch or open a PR.
Examples of bugs:
- Exceeding the maximum app size. Ok, maybe it's my fault, but I'm only making a simple React website. I know React (and JavaScript development in general) is bloated, but come on, why do they offer to deploy React apps if everyone will exceed the limit so easily?
- Random error saying "Failure during content distribution". It fixed itself after a few hours, but holy shit that message is so unclear, I searched for hours to find if I did something wrong.
- Sometimes the pipeline works, then I push code and BAM, it now fails (I only added React code, nothing changed in the pipeline nor the config)
Other things I don't like:
- Everything is so hard to learn. There's too much documentation (a double-edged sword: it's good, but not for beginners). I find myself reading doc after doc after doc, and after a few pages I read something like "If you are using Vite, read [this]"... :/
- How can I know if I'm doing something right?
- Should I go with Azure, AWS, Vercel, Cloudflare, Github Pages, or even self host? How can I know what's the best for my use cases???
Thanks for reading my rant. I feel comfortable programming pretty much anything, but when it comes to devops, I feel like a pirate navigating a freaking desert with no water in sight.
I would love to hear where and how you deploy your apps. What services do you use? Do you split the app into a "website" service and a "functions" service? Do you use a backend or only Serverless Functions? What am I doing wrong? What should I do differently?
I'm open to recommendations, feedback and discussions, feel free to comment. I'm not angry at you nor this community, I'm angry at myself for having a hard time with anything devops related...
https://redd.it/1ebc3cd
@r_devops
SRE/DevOps IDE
Hi! Imagine the perfect SRE/DevOps IDE for your tasks. In your opinion, what is the most important feature it should have? What specific technologies, stacks, integrations, and scenarios should it support? Is there anything else you would like to include?
https://redd.it/1eb9qsf
@r_devops
No Vault TLS for Production cluster
Hi, I'm trying to set up a Vault production cluster for our company.
The issue I'm having right now is that the browser doesn't recognize my CA certificate. I created it with these commands:
#generate ca in /tmp
cfssl gencert -initca ca-csr.json | cfssljson -bare /tmp/ca
#generate certificate in /tmp
cfssl gencert \
-ca=/tmp/ca.pem \
-ca-key=/tmp/ca-key.pem \
-config=ca-config.json \
-hostname="vault,vault.vault.svc.cluster.local,vault.vault.svc,localhost,127.0.0.1" \
-profile=default \
ca-csr.json | cfssljson -bare /tmp/vault
As I understand it, this is a self-signed certificate that's valid only inside my cluster. I used this method as the Vault setup requires tls-server and tls-ca secrets. I can generate the server certificate in my Cloudflare account or use cert-manager to create one myself, but it doesn't work as intended.
extraEnvironmentVars:
  VAULT_CACERT: /vault/userconfig/tls-ca/tls.crt
extraVolumes:
  - type: secret
    name: tls-server
  - type: secret
    name: tls-ca
standalone:
  enabled: false
ha:
  enabled: true
  replicas: 3
  config: |
    ui = true
    listener "tcp" {
      tls_disable = 0
      address = "0.0.0.0:8200"
      tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
      tls_key_file = "/vault/userconfig/tls-server/tls.key"
      tls_min_version = "tls12"
    }
    storage "consul" {
      path = "vault"
      address = "consul-consul-server:8500"
    }
# Vault UI
ui:
  enabled: true
  externalPort: 8200
I was thinking maybe to have another certificate that covers only the ingress, and to use the self-signed certificates inside the cluster, but it won't work like that either.
Here's the ingress I'm using to create the connection:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault-ingress
  namespace: vault
spec:
  rules:
    - host: vault.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault-ui
                port:
                  number: 8200
  tls:
    - hosts:
        - vault.company.com
      secretName: default-workshop-example-tls
  ingressClassName: nginx
I've been trying to get my head around this for a week, but I can't. Any help would be welcomed! 🙏
The questions are:
How do I generate a valid CA certificate? As I understand it, I can't.
How do I enable TLS in Vault?
Is my config maybe wrong?
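Since cert-manager came up: a minimal sketch, assuming cert-manager is installed (resource names and the vault namespace mirror the values above; everything else is a placeholder), of the usual self-signed-CA bootstrap. The CA cert still has to be imported into browsers manually; for the public vault.company.com hostname, a publicly trusted issuer like Let's Encrypt is the usual answer.

# Sketch: bootstrap a self-signed CA, then issue the Vault server cert from it.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-ca
  namespace: vault
spec:
  isCA: true
  commonName: vault-ca
  secretName: tls-ca          # becomes the CA secret the Helm values mount
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-ca-issuer
  namespace: vault
spec:
  ca:
    secretName: tls-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-server
  namespace: vault
spec:
  secretName: tls-server      # server cert/key for the Vault listener
  dnsNames:
    - vault
    - vault.vault.svc
    - vault.vault.svc.cluster.local
  ipAddresses:
    - 127.0.0.1
  issuerRef:
    name: vault-ca-issuer
    kind: Issuer

This produces tls-ca and tls-server secrets matching the names the Helm values above expect.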
https://redd.it/1eb273e
@r_devops
I am a complete noob to devops, and was offered an IaC role. I am terrified to take it but I really think it can be a great opportunity.
Hi guys, I am currently a cloud/network engineer supporting a live financial application. I've written SQL scripts, PS scripts, built a few network automation scripts in Python, built a few playbooks with Ansible, and learned OOP with C++ in college. However, I have been offered an IaC engineer role (no production code involved, yet) and I am extremely nervous to take it. I only have about 5 years of true experience in IT, but I think this role can be a great segue for me into automation, which is what I've always wanted to focus on rather than the pure infrastructure side of things. I'm extremely nervous, and I would love to succeed in this role, but I do not have much help except this community. Please offer me any advice you have!
https://redd.it/1eb3e3x
@r_devops
Does anyone have internal CLI tools they have built?
I've started building a CLI tool for our team to use to perform regular actions or search logs in a way that is more aligned to how we deploy our applications (think get logs <some-api-we-have> and it'll return a sensible, time-ordered collection of logs from various k8s pods, queues and such).
Does anyone else have similar tools? What do they do? Do you find them useful?
https://redd.it/1eb0ni4
@r_devops
how to do proper canary deployment for a multi-region application?
hello, I am in charge of designing canary deployment for our microservices. In the same region it's relatively simple: I use weighted Route 53 records and wrote a lambda to control the weight while listening to alerts for rollbacks.
How do I do a proper canary for an application that's active-passive in two AWS regions? The application has the limitation that it can't be active-active due to data consistency concerns. My current idea is to canary one region, then do the other region, but it seems inefficient, so I am here asking for industry best practice. Thanks!
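For reference, the weighted piece of a single-region canary looks roughly like this in CloudFormation (a hedged sketch with placeholder zone, names and targets, not the poster's actual setup); the lambda then shifts Weight between the two records:

# Sketch: two records for the same name; shifting Weight moves canary traffic.
Resources:
  StableRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z123EXAMPLE        # placeholder zone
      Name: api.example.com
      Type: CNAME
      SetIdentifier: stable
      Weight: 90                       # lambda lowers this during rollout
      TTL: "60"
      ResourceRecords:
        - stable-lb.example.com
  CanaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z123EXAMPLE
      Name: api.example.com
      Type: CNAME
      SetIdentifier: canary
      Weight: 10                       # lambda raises this during rollout
      TTL: "60"
      ResourceRecords:
        - canary-lb.example.com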
https://redd.it/1eaqadx
@r_devops
Injecting files securely into container during runtime.
Hi. I have a file for Django (local_settings.py) that has lots of secrets/passwords in it. Right now I'm keeping that file locally on my server and copying it into place before building the image; the Dockerfile copies it into the container. I'm wondering how folks are copying files from a secure location into the container, and then protecting them when they hold a lot of passwords.
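For what it's worth, one common runtime-injection pattern, sketched with Docker Compose (service, image and paths are placeholders): keep local_settings.py out of the build context entirely and mount it when the container starts, so it never lands in an image layer.

# Sketch: mount the settings file at runtime instead of baking it in.
services:
  web:
    image: myapp:latest                    # placeholder image name
    secrets:
      - source: django_settings
        target: /app/local_settings.py    # absolute path inside the container
secrets:
  django_settings:
    file: ./local_settings.py             # stays on the host; add to .dockerignore

A step further is pulling the values from a secret manager (Vault, AWS Secrets Manager, etc.) at container start instead of keeping the file on the host at all.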
https://redd.it/1eagosd
@r_devops
Best PagerDuty Alternative? Let's be honest, PagerDuty is expensive and full of feature bloat.
My team has been using PagerDuty for a bit, but we are now looking for an alternative as the system itself is a bit confusing, the scheduling sucks, and the pricing is ridiculous for what we are looking for.
Rather than spend weeks testing and trialing everything on the market, we thought we would ask the group which on-call management/alerting tool you all have had the best luck with.
We are truly just looking for on-call scheduling, alerting, and possibly call routing, as well as the ability to integrate with some common systems we utilize.
What are everyone's thoughts on a better alternative to PagerDuty? Thanks in advance!
https://redd.it/1eahol3
@r_devops
What should I know when going from a bigger team to a team where I'm the only DevOps engineer?
I'm in talks with some potential employers and all of them have a small number of DevOps engineers (1-3 people) or they need only one DevOps engineer for the position.
At the moment I'm in a team of around 10-15 DevOps engineers (it's mostly DevOps with a mix of SecOps engineers and DBAs). If I'm stuck on something, I have the option to ask someone else on the team for help.
What should I know if I switch to a mixed team that has developers/QAs and I'm the only DevOps engineer?
https://redd.it/1ea4c1y
@r_devops
Networking for DevOps
Hey there,
I'm a junior backend engineer with experience in both Python and Go. I'm interested in gradually transitioning into the DevOps field and was wondering how much networking knowledge is required for an entry-level DevOps position. Are the study materials for Network+ (or A+) sufficient, or do they contain too many unnecessary details, or should I aim for higher-level certifications? Also, do you have any course recommendations?
https://redd.it/1eabyne
@r_devops
Running a Sidecar container as a cron job
Googling this topic shows a few methods of achieving this but I'm not sure which way would be best for my needs.
In my current setup I'm spinning up a pod with 2 containers:
- Main container (Thanos Ruler)
- Sidecar container (just my Python script)
This is the Helm values file:
ruler:
  enabled: true
  logLevel: debug
  clusterName: local-ruler
  alertmanagers:
    - http://prometheus-kube-prometheus-alertmanager.prometheus.svc.cluster.local:9093
  extraFlags:
    - --rule-file=/synced-rules/*.yml
  sidecars:
    - name: rule-syncer
      image: python:3.12-alpine
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh"]
      args:
        - -c
        - |
          echo "Starting rule-syncer sidecar"
          pip install requests pyyaml --quiet
          echo "Running script"
          python /scripts/ruler_syncer.py
      volumeMounts:
        - name: synced-rules
          mountPath: /synced-rules
        - name: rule-syncer-script
          mountPath: /scripts
  extraVolumes:
    - name: synced-rules
      emptyDir: {}
    - name: rule-syncer-script
      configMap:
        name: rule-syncer-script
        defaultMode: 0755
  extraVolumeMounts:
    - name: synced-rules
      mountPath: /synced-rules
Instead of running my script in a `while True` loop, I'd rather just run it as a cron job. My script still needs to be mounted, together with the volume used by the main container.
What would be the ideal way to achieve this? I'm planning to build an image for the script/sidecar, but once that's done, how would I run it periodically?
Any help would be appreciated. Kind of new to Kubernetes.
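One caveat before any sketch: a CronJob runs in its own pods, so it can't see the Ruler pod's emptyDir. A rough sketch under that assumption, with a hypothetical shared PVC (synced-rules-pvc) and a placeholder image, which both the job and the Ruler would mount in place of the emptyDir:

# Sketch: periodic rule sync via CronJob writing to a shared PVC.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rule-syncer
spec:
  schedule: "*/15 * * * *"        # every 15 minutes; adjust as needed
  concurrencyPolicy: Forbid       # don't let runs overlap
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: rule-syncer
              image: my-registry/rule-syncer:latest   # placeholder image
              command: ["python", "/scripts/ruler_syncer.py"]
              volumeMounts:
                - name: synced-rules
                  mountPath: /synced-rules
          volumes:
            - name: synced-rules
              persistentVolumeClaim:
                claimName: synced-rules-pvc           # hypothetical shared claim

If a PVC is awkward in your storage setup, the other common routes are keeping the sidecar but sleeping between runs, or having the job push rules through a ConfigMap/API that the main container watches.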
https://redd.it/1ea8z43
@r_devops
How do you set up alerts in Dynatrace?
Hi everyone, I wrote a metric with the DQL language in Dynatrace. Now, having that graphed metric, let's say it goes below some threshold: how do I set up an alert to send it to, say, PagerDuty? I saw the documentation, but I can't find a way to select the specific graph metric I created.
https://redd.it/1ebjf1e
@r_devops
Starting a new job next month as a DevOps engineer. What have I gotten myself into?
Like the title says, I'll start in about three weeks. DevOps engineer at a large org; the team is essentially a "DevOps Center of Excellence" for the entire company. They admin the (self-hosted) GitLab platform, JFrog, and something called SonarQube. A lot of the work is coaching, training, troubleshooting, and generally ensuring that the engineering teams at all of the different divisions of the company are adopting DevOps best practices and standards. It sounds like there's quite a bit of resistance from the older and more entrenched engineers.
I've been in IT for nearly three decades, but the last ten years doing architecture on the Cloud almost exclusively. Prior to that I was a developer and software engineer mostly for Windows apps and backend web stuff.
What do I need to know to get a head start on this job? I've never worked with a CI/CD pipeline, or written automated tests, or done any of the cool DevOps stuff that took over while I was fiddling with EC2 instances (I guess the IaC stuff I did counts, but only barely).
Looking for books, videos, projects, tutorials, or anything else to get me started. Already going through some of the GitLab University material and a Udemy course by Valentin Despa.
Thanks!
https://redd.it/1ebes9j
@r_devops
Is a full CI/CD pipeline for a containerized application possible without kube?
Hi r/devops
The context around why I'm asking this is a bit long, so either bear with me or just skip the context part and go to the technical question.
## Context:
I'm a long-time tinkerer/lurker of general dev stuff (built a home NAS and hosted a webserver on it, like to play script kiddo on several old PCs with Linux) who recently jumped into it professionally and was hired as a DevOps intern.
I was hired as the sole developer (besides my boss and some frontend contractors) in a startup a little more than 6 months ago, and boy, has it been --hard-- instructive.
My boss is a great manager and we frequently have debates over what tool to implement and other infra questions that I enjoy a lot.
He has experience mainly as a frontend developer though, and I found that it really shows. When coupled with his generally optimistic attitude of "We'll figure it out, don't worry!", which is in itself a good thing, it can lead to some... unreasonable expectations.
For example, when I was hired, the "app" consisted of a fully fledged React app hosted on Firebase... which turned out to be an empty hulk in terms of data and functionality, fed by Python scripts run locally by my boss.
My first task was to "deploy it" (the backend, i.e. Jupyter notebooks), then "connect it to the front", which I successfully completed, although if I had to do it again I would certainly make different choices (mainly use Firebase as it was intended instead of twisting it into working with a traditional backend).
We are now at a state where the app's parts (front, back and db) talk to each other and somehow work, but it is honestly kind of a Frankenstein's monster. Any software architect worth their salt would probably have a heart attack looking at the repo, and the questionable decisions made by my even less experienced self have already been problematic, as going with Google App Engine for the backend proved troublesome when it came to orchestrating, for example, a (long-running) data pipeline. It is, still to this day, a simple cron on my NAS triggering a Python script on the remote backend, because of weird GCP and GAE limitations.
All that to get to the root of the problem: since the app works, and it's been done mostly by a single developer, we've unlocked funding and are now working towards unifying many other apps, which were until now proofs of concept, into a single unified behemoth. We've hired several people, including a "senior devops" who... well, let's just say he's been with us for 2 weeks now and still has trouble getting his Python venvs to work.
All that to get to where I'm now: I am factually the only ops-ish person in a now 10-dev-strong startup, each dev working on one to a couple of apps.
For now I somehow keep it together, as so far the devs who worked fast were also of the not-too-blunt type, and I managed to help them make their parts work on GCR (lambdas for you AWS people), but I can feel it becoming overwhelming quickly, especially since serverless isn't gonna cut it for a few data- and computation-heavy apps we're working on. For those, I can spin up an instance real quick using terraform/pulumi and ansible.
But what about then? What about when it doesn't work in prod? "But it works on my machine" has made my boss tell me to "build a CI/CD", but I've come to realize it's not that simple.
## The actual question:
Where do I store all the secrets/configs for so many apps in a monorepo? How do I inject them safely along the CI/CD? A single giant .env / secret manager? If so, what about service accounts/credential files? How do I handle database connections locally for tests?
How do I even tackle CI/CD on a monorepo with trunk-based development without branches?
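For what it's worth, a hedged sketch of one non-Kubernetes pattern (GitHub Actions assumed; folder, environment and secret names are placeholders): path filters give each app its own pipeline on a trunk-based monorepo, and per-environment secrets replace the single giant .env:

# Sketch: one workflow per app; runs only when that app's folder changes.
name: deploy-app-a
on:
  push:
    branches: [main]
    paths: ["apps/app-a/**"]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: app-a-prod           # environment-scoped secrets + reviewers
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh apps/app-a   # placeholder deploy step
        env:
          DB_URL: ${{ secrets.DB_URL }}                          # per-environment secret
          SERVICE_ACCOUNT_JSON: ${{ secrets.SERVICE_ACCOUNT_JSON }}  # ditto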
The answer I've found so far online to all these concerns is always kube, which also seems to solve further issues like scaling and conf management.
I feel like a big multi-service app is unmanageable without Kubernetes, but my boss refuses to hear about it, as except me and my bootcamp level
Daily Work Problems Faced by Engineers in Cloud DevOps and SRE
Hello people.
I know that this may sound like a very general ask, but bear with me. I am looking for problems or process improvements in the Cloud DevOps and SRE work domain. What's something that you as an engineer (or any other employee in this area) face on a daily basis, or have faced in the past, and would like to be solved or have a tool made for? My intention is to start a year-long project (so the problem should be big/small enough) that will span my whole senior year in college, with the end product being something that helps with or solves the said problem.
P.S. I would really prefer if it's something that can use some ML to enhance it.
https://redd.it/1ebbo6y
@r_devops
Serverless observability tools (new relic etc.): are my expectations off?
Recently I tried new relic with AWS Lambda (python) and was surprised by how awkward "the basics" of logging and metrics seemed to be compared to my previous experiences with other tools (datadog, elasticsearch, grafana, riemann). Most of my experience with those tools is not with serverless, though.
I'm wondering if that's more a new relic issue or a general problem with poor support for serverless? Am I expecting too much? Am I doing it wrong?
What I expected:
* Searchable, correlated logs across multiple services
* Application metrics (e.g., products sold per hour) and infrastructure metrics (e.g., 50x responses per hour)
* Alerts based on these metrics, integrated with slack and out-of-hours tools like pagerduty
* Performance tracing
What I got from their lambda extension (there are other integration options - e.g. Opentelemetry - but it seems they all have limitations and are a little work-in-progress):
* Quirky Lambda extension with documentation / usability issues
* Logs: currently not clear to me that it's useable to search across multiple Lambdas/services (?!)
* Custom metrics: fine for what I needed, but with caveats (e.g., no tags - no "dimensional metrics")
* Alerts: seems fine. I didn't try it with slack/pagerduty though
* Performance tracing: I didn't need this in my test, but again hindered by documentation issues
How do other tools do on serverless? (datadog, honeycomb, etc.)
https://redd.it/1eb7ddg
@r_devops
CrowdStrike Preliminary Post Incident Review
CrowdStrike put out their official PIR on the incident. I hope whoever wrote this was banging their head against a desk when they had to basically write out "our only testing for this was an automated test that didn't even officially pass".
Here's the link for anyone interested: https://www.crowdstrike.com/falcon-content-update-remediation-and-guidance-hub/
https://redd.it/1eb40oo
@r_devops
Just in time (JIT) AWS escalation tool?
Looking for some tool or service that is:
- cheap / free
- not awful to set up
- can be used with one account/organization
- allows approval and review for temporary, audited elevated AWS access
I read through the AWS TEAM tool, but it requires a second federated organization, and my team doesn't want to set up another org in our AWS account.
Any suggestions?
https://redd.it/1eb2ew8
@r_devops
Start-up DevOps
I just joined a start-up
They have a few GoDaddy web hosts where multiple websites are hosted. One was a Windows server with multiple databases and .NET projects.
Should I tell the CEO that it's cheaper to use Lambda/Linux servers for some of the services?
https://redd.it/1eawj7c
@r_devops
Telegraf / Sensu
Evening, first post here.
Has anyone any experience with using Telegraf and Sensu together?
In our Sensu setup we have complete control of writing subscriptions, but no access to the servers or anything via SSH.
As for Telegraf, I've installed it on a server following their standard install guide, with a basic config; inputs at the moment are just CPU for testing purposes. Output is the Sensu API URL.
In Sensu the event appears; however, I've no idea how to transform the data into a useful alert/monitor.
I.e. if I was sending 10 different inputs and wanted to grab metrics around disk space... how do I do that?
Thanks in advance
P.S. Not using Sensu isn't an option 😩😆
https://redd.it/1eagoz8
@r_devops
Do you abstract and reuse common IaC patterns?
In the middle of sort of a philosophical discussion. I'm curious where you all stand. Say with something like CDK. You notice the same pattern of resources being implemented multiple times. For example an SQS queue triggers a lambda function. The same lines of code are written over and over again to create the queue, lambda, event source, alarms, slack notification, etc. Or maybe it's the same API Gateway to lambda setup. Or it could be a little more complicated like a dynamo stream filter and event bridge. Point is you keep seeing the same code copy/pasted.
Does the repetition bother you? Do you think it should be swapped out for a custom built (shared) construct that creates all of those resources instead of everyone copying/pasting the same code over and over? How do you decide? Is there a threshold of complexity that makes you lean either way?
Pros/cons for building a reusable package? Pros/cons to just keep copying and pasting?
https://redd.it/1eamgo0
@r_devops
Need Help with Terraform EKS Cluster - Cannot Access API Endpoint from Jumphost
Hey everyone,
I'm currently facing an issue with the EKS infrastructure I set up using Terraform. Everything seems to be standing up correctly, but I'm having trouble accessing the cluster.
Here's a brief overview of what I've done:
1. I wrote the infrastructure in Terraform to create an EKS cluster and associated resources.
2. Everything deploys without any errors.
3. I set up an SSH tunnel to a jump host to access the EKS API server.
However, when I try to access the API endpoint, I get a timeout. Here’s what I’m doing:
# connect to the jumphost via ssh, then:
curl --insecure https://eks-api-endpoint:6443
Despite the tunnel being established, the curl command times out. I've double-checked my Security Groups and VPC configurations, and everything appears to be in order. Is there anything I'm missing or doing wrong? Any help or pointers would be greatly appreciated!
My main.tf looks like this:
locals {
  name              = "some"
  region            = "eu-north-1"
  vpc_cidr          = "10.0.0.0/16"
  azs               = slice(data.aws_availability_zones.available.names, 0, 3)
  bastion_ami_type  = data.aws_ami.amazon_linux_23.id
  ec2_instance_type = "t3.small"
  tags = {
    Example = local.name
  }
}
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"
  name = local.name
  cidr = local.vpc_cidr
  azs              = local.azs
  public_subnets   = [cidrsubnet(local.vpc_cidr, 8, 0), cidrsubnet(local.vpc_cidr, 8, 1), cidrsubnet(local.vpc_cidr, 8, 2)]
  private_subnets  = [cidrsubnet(local.vpc_cidr, 8, 3), cidrsubnet(local.vpc_cidr, 8, 4), cidrsubnet(local.vpc_cidr, 8, 5)]
  database_subnets = [cidrsubnet(local.vpc_cidr, 8, 6), cidrsubnet(local.vpc_cidr, 8, 7), cidrsubnet(local.vpc_cidr, 8, 8)]
  enable_nat_gateway = true
  single_nat_gateway = true
  # one_nat_gateway_per_az = false
  create_database_subnet_group = true
  map_public_ip_on_launch      = true
  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"
  cluster_name    = "${local.name}-cluster"
  cluster_version = "1.30"
  cluster_addons = {
    # aws-ebs-csi-driver = {}
    coredns    = {}
    kube-proxy = {}
    vpc-cni    = {}
  }
  vpc_id                      = module.vpc.vpc_id
  subnet_ids                  = module.vpc.private_subnets
  create_cloudwatch_log_group = false
  eks_managed_node_groups = {
    bottlerocket = {
      ami_type       = "BOTTLEROCKET_x86_64"
      platform       = "bottlerocket"
      instance_types = ["c5.large"]
      capacity_type  = "ON_DEMAND"
      min_size       = 1
      max_size       = 3
      desired_size   = 1
    }
  }
  tags = local.tags
}
resource "aws_key_pair" "terraform_ec2_key" {
  key_name   = "terraform_ec2_key"
  public_key = file("terraform_ec2_key.pub")
}
module "ec2" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 5.0"
  name                   = "bastion-${local.name}"
  ami                    = local.bastion_ami_type
  instance_type          = local.ec2_instance_type
  subnet_id              = module.vpc.public_subnets[1]
  vpc_security_group_ids = [module.ec2_security_group.security_group_id]
  key_name               = "terraform_ec2_key"
}
https://redd.it/1eahid2
@r_devops
Roast this GitHub app I built, DevOps use case?
Hi folks 👋
I'm sharing my GitHub app called Pull Checklist. It lets you build checklists that block PR merging until all checks are ticked.
I created this tool because:
1. I found myself using checklists outside of GitHub to follow specific deployment processes
2. I worked at a company where we had specific runbooks we needed to follow when migrating the db or interacting with the data pipeline
Would really appreciate any feedback on this and whether there's a good use case for DevOps teams.
https://redd.it/1eahtmu
@r_devops
DOCKER in JENKINS
Trying to study up on Docker, and there are a few things I don't understand so far. Firstly, why, when you instantiate a Docker container, do you need a DB connection with your database? If you are using a Java project, you may have zipped libraries in your JAR file to connect with the DB, but the DB itself is never even in the Git repo of a Java project to begin with. Secondly, am I right that for a pipeline you need only one Docker image? It will then determine where to send your code.
https://redd.it/1eac89g
@r_devops
Which Sheet should I follow for my Intern Preparation?
I am unsure which sheet I should follow: Striver's A2Z or SDE. I have been advised to use A2Z as it is more beginner-friendly and SDE for revision. But I do not have much time, and companies have already started approaching my campus. I want to know your opinion, guys.
https://redd.it/1eaa7xg
@r_devops
CI with JENKINS
I am a QA, and at all the companies I have been at, QAs don't even use Maven, let alone Jenkins, but I am trying to understand the CI process. Here is the way I see it; correct me where I am wrong. Firstly, I think CI is only used if you have automation testing, since with manual testing there is nothing to integrate dev code with. Also, you can have dev without QA (though your app will be riddled with defects), but you can't have QA without dev. That is the reason Jenkins connects with the dev branch on Git. After packaging, it sends the JAR to a Docker container, which then distributes the code to various environments. It goes to the PROD environment only when you do a release. A build is any update to the code, and one release comprises multiple builds. Still some unanswered questions, but is all that correct?
https://redd.it/1ea7fj2
@r_devops