Reddit DevOps. #devops Thanks @reddit2telegram and @r_channels
Product - Deployment Strategy for different clients
Hi Folks,
I’m seeking your input on the following scenario regarding our deployment process.
# Product Stack:
Backend: Golang
Frontend: React
CI/CD: GitHub Actions
Infrastructure: Oracle Cloud
Containerization: Docker-Compose
Container Management: Portainer
# Current Workflow:
1. Manual Configuration:
We maintain a backend configuration file (`backend.yml`) that contains client-specific URLs.
For each deployment, we manually update the endpoints in backend.yml
for different clients, which is time-consuming and error-prone.
2. Build and Deploy:
Using GitHub Actions, we create a Docker build.
We then update the Portainer stack using a Portainer webhook, which pulls the images from GHCR and updates the stack.
# Objectives:
Automate the configuration management for multiple clients to eliminate manual edits.
Deploy a single branch for all clients while ensuring each client gets its respective backend configuration.
Any ideas on how we can achieve this efficiently?
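One possible direction, as a hedged sketch (the client names, the backend.yml.tmpl template, variable names, and secret names below are all hypothetical): keep backend.yml as a template in the repo, let a GitHub Actions matrix render it per client, and then hit that client's Portainer stack webhook, so a single branch serves every client with no manual edits:
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - client: ACME
            api_url: https://api.acme.example.com
          - client: GLOBEX
            api_url: https://api.globex.example.com
    steps:
      - uses: actions/checkout@v4
      - name: Render client-specific backend.yml
        run: API_URL="${{ matrix.api_url }}" envsubst '$API_URL' < backend.yml.tmpl > backend.yml
      - name: Trigger this client's Portainer stack webhook
        run: curl -fsSL -X POST "${{ secrets[format('PORTAINER_WEBHOOK_{0}', matrix.client)] }}"
The per-client endpoints then live in the matrix (or in Actions variables/secrets) instead of being hand-edited before each deployment; how the rendered file reaches the stack (baked into a per-client image tag, or mounted as a per-client config in the stack definition) is a separate choice.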
https://redd.it/1e8hjxh
@r_devops
Looking for Jira alternatives for non-tech related business
Hello,
I work in a field where we are constantly meeting new people (some of whom we already know) and need to put them into a database of some sort. It needs a way to log "incidents" related to these people, attachments are a must, and it needs a way to assign certain incidents a "status".
I have been using Jira for this, but I know it is not designed for that. I am wanting to see if there is something else out there that would better fit my purposes. I have also tried Azure DevOps and YouTrack.
Sorry if this isn't the place to ask, it's kind of a weird question that doesn't fit in anywhere.
Thanks
https://redd.it/1e8dcy9
@r_devops
I want to build my own Vercel domain manager
Many PaaS services offer a “connect your domain with deployment” feature. Typically you simply add a CNAME DNS entry and then everything works automatically. It’s possible to reconnect the deployment in seconds.
How does this work technically? What services are involved?
I'd like to build something similar. Basically, what I want is to have one entry point and a database that routes traffic to another server based on the hostname. As far as I know, Vercel is not using Kubernetes.
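Roughly, the moving parts are: the customer's CNAME points at your edge, a reverse proxy there terminates TLS (issuing a certificate per hostname on demand via ACME/Let's Encrypt) and routes on the Host header to the right deployment, and the "database" is essentially that hostname-to-backend routing table plus certificate state. As a hedged sketch of just the routing piece, assuming Traefik's file provider (the hostnames, IPs, and resolver name are placeholders, and the ACME resolver itself would be defined in Traefik's static config):
http:
  routers:
    customer-foo:
      rule: "Host(`app.customer-foo.com`)"   # matched against the incoming Host header
      service: customer-foo
      tls:
        certResolver: letsencrypt            # per-hostname cert issued on demand
  services:
    customer-foo:
      loadBalancer:
        servers:
          - url: "http://10.0.0.21:3000"     # the deployment this hostname maps to
A Vercel-style product then adds an API that writes entries like this (and tracks DNS/cert status) whenever someone connects a domain, which is why reconnecting a deployment only takes seconds.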
https://redd.it/1e85x1q
@r_devops
Is this course a good introduction to the field? IBM DevOps and Software Engineering Professional Certificate
I'm starting my first job as a DevOps Engineer in a few weeks and I want to get a complete picture of what a role like this might involve in terms of tools & tech, methodologies etc.
Does this provide a good overview of the field? https://www.coursera.org/professional-certificates/devops-and-software-engineering?
Note: I'm a CS graduate without prior DevOps role experience.
Any other suggestions for the intended purpose will be greatly appreciated!
https://redd.it/1e854m0
@r_devops
GitLab with Argo CD
Are there alternative approaches to updating Argo CD Helm chart values in GitLab CI/CD?
script:
- git clone git@gitlab.com:guestbook-toshiro/argocd.git
- cd argocd/$SERVICE
- envsubst < base-values.yaml > values.yaml
- git add .
- git commit -m "Update $SERVICE to version $IMAGE_TAG"
- git push
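Depending on how much git plumbing you want in CI, two alternatives come to mind: Argo CD Image Updater can watch the registry and write the new tag back to the values repo with no pipeline step at all, or, if you keep the pipeline, you can patch just the tag with yq instead of regenerating the whole file from a base template. A rough sketch of the latter, assuming mikefarah's yq v4 is available on the runner and that values.yaml has an image.tag key:
update-values:
  script:
    - git clone git@gitlab.com:guestbook-toshiro/argocd.git
    - cd argocd/$SERVICE
    - yq -i '.image.tag = strenv(IMAGE_TAG)' values.yaml   # change only the tag; the rest of values.yaml stays untouched
    - git commit -am "Update $SERVICE to version $IMAGE_TAG"
    - git push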
https://redd.it/1e81uc9
@r_devops
Tips for a new learner for terraform / Kubernetes / docker
Hey everyone, I'm new to DevOps. Where do I learn Terraform, Docker, Kubernetes, and CI/CD for free, with hands-on practice? Thanks everyone.
While you're at it, how do I become really good and knowledgeable in this field?
Thank you so much everyone
https://redd.it/1e7unz7
@r_devops
Managing Kubernetes with K9s
For those that have been using k9s (or equivalent) to monitor your Kubernetes clusters in the cloud, how do you ensure some form of version control?
For example, increasing memory/CPU requests and limits, scaling replicas, and updating YAML files can all be done from k9s. But then how do you keep those changes under version control?
The reason I ask is that I recently joined a non-tech company with only one other engineer, who joined around 2-3 months before me. We've been trying to maintain a data pipeline built by an external vendor, so we found k9s really useful for seeing live updates of the cluster.
But recently, the other engineer has been fine-tuning the memory/CPU settings. Sometimes he messed up the YAML while editing, which left some pods unable to restart due to insufficient memory allocation.
Deep down I feel this may not be best practice, so I'd like everyone's input on how it's done at other tech companies.
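For what it's worth, the usual answer is some form of GitOps: the manifests (including CPU/memory requests, limits, and replica counts) live in Git, changes go through merge requests, and something applies them to the cluster, so k9s becomes a read-only window rather than an editor. A minimal, hypothetical sketch of what would sit in such a repo (every name and value here is a placeholder):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-pipeline
spec:
  replicas: 2
  selector:
    matchLabels:
      app: data-pipeline
  template:
    metadata:
      labels:
        app: data-pipeline
    spec:
      containers:
        - name: worker
          image: registry.example.com/pipeline/worker:1.4.0
          resources:
            requests:
              cpu: 500m      # tuning happens here, in a reviewed commit,
              memory: 1Gi    # not in a live k9s edit
            limits:
              cpu: "1"
              memory: 2Gi
A CI job running kubectl diff / kubectl apply, or a GitOps controller such as Argo CD or Flux, then reconciles the cluster to that file, so every resource change has a commit, an author, and a revert path.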
https://redd.it/1e7nnca
@r_devops
How to manage dozens of gitlab tokens in CI jobs?
Scenario: gitlab on-prem driven CI with many repos working together to provide a single infrastructure:
So we have a lot of tokens to manage. As GitLab now enforces a one-year maximum token lifetime, I've just had the realisation that hunting through CI variables in dozens of repos, and recreating new tokens with the appropriate permissions in the other repos that CI needs to access, is not a sustainable approach.
So apart from better READMEs in each repo or a big spreadsheet, how do people manage dozens of tokens with varying permissions that need to be renewed yearly, and keep the secret stored in the correct CI variable up to date?
Unhelpfully, GitLab deletes expired tokens, and I don't see a convenient UI to list all project tokens across the entire account.
Curious... I assume this is a common problem with gitlab/github driven CI?
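One stopgap that can help, sketched very roughly below (the project IDs, the schedule rule, and the AUDIT_TOKEN variable are all placeholders, and the token is assumed to have read_api): a scheduled pipeline that walks your projects and prints each project access token with its expiry date via the Project Access Tokens API, so the yearly renewals are at least visible in one place:
audit-tokens:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  image: alpine:3.20
  script:
    - apk add --no-cache curl jq
    - |
      # list access tokens (name + expiry) for each project ID we care about
      for id in 101 102 103; do
        curl --silent --header "PRIVATE-TOKEN: ${AUDIT_TOKEN}" \
          "${CI_API_V4_URL}/projects/${id}/access_tokens" \
        | jq -r --arg id "$id" '.[] | "\($id)\t\(.name)\t\(.expires_at)"'
      done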
Many thanks in advance for any suggestions, ideas, pointers... 👍😀
https://redd.it/1e7f5ot
@r_devops
A confused developer searching for answers
I am quite confused about how the deployment and maintenance of complete web applications online actually work in practice. I have quite a few notions, but there are so many of them that I have a hard time forming a complete and simple picture.
For example, let’s say I have a NextJs application for my frontend, and a backend with a headless CMS (like Directus, for example).
Just for that, I have heard of many different ways to deploy.
For the frontend:
• Either use NextJs, which seems to be the simplest
• Or deploy on your own server either via a reverse proxy or with Docker using Traefik to orchestrate the flow
Speaking of Docker:
• At what scale is Docker (Compose) no longer sufficient (if I want to build a SaaS, for example) and a cluster becomes necessary? Via Kubernetes, for example?
• Where do you publish your Docker images? On Docker Hub? But in that case, they are public (see the note right after this list)
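A side note on the "public" worry: images don't have to go to Docker Hub's public space. Docker Hub has private repositories, and registries such as GHCR or GitLab's registry keep images private and let the host pull them after a docker login. A minimal sketch with a placeholder org/image/tag:
services:
  frontend:
    image: ghcr.io/your-org/nextjs-app:1.2.3   # private image; the host needs `docker login ghcr.io` with a token first
    ports:
      - "3000:3000"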
For the backend, I have the same questions.
Then, for the database, I also thought about using Docker, but being alone, I can’t see myself managing my data, doing backups, etc. In that case, I would like to use a managed database, but there are so many offers that I don’t really know which one to choose.
Then, to store my files (images, etc.), I used Cloudinary in the past, but I find it terribly expensive.
Finally, when should I use Cloudflare? I have a few applications running on Vercel and I have no issues, so I wonder what the real benefit of using Cloudflare is and if there are any alternatives.
I have so many other questions about hosting, for example, but it’s starting to get very long.
I would like to get your opinions/feedback on the technologies you use for small, medium, or large applications.
Sorry for this very long message.
Signed, a somewhat lost dev.
https://redd.it/1d18xuv
@r_devops
emuhub
EmuHub is a tool that simplifies the testing of Android applications. It leverages Docker and NoVNC to provide developers and QA engineers with easy access to multiple emulators via web browsers. This innovative solution streamlines the testing process by allowing the deployment of emulators over CI/CD environments. EmuHub's Docker integration enables effortless creation and management of emulators, while NoVNC ensures seamless access to emulated devices directly from web browsers. Its compatibility with CI/CD environments facilitates automated testing, reducing testing time and maximizing efficiency.
https://github.com/mohamed-helmy/EmuHub
https://redd.it/1d14yqk
@r_devops
OWA with ADFS and my custom IdP (custom database settings): no UPN in request
I have a problem performing authentication in OWA using ADFS and my own IdP. The SAMLResponse from the IdP contains:
<saml:AttributeStatement>
<saml:Attribute FriendlyName="UPN" Name="UPN" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml:AttributeValue
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">admin
</saml:AttributeValue>
</saml:Attribute>
But I'm getting an error in OWA: /owa/auth/errorfe.aspx?msg=UpnClaimMissing, with the ADFS error header x-adfserror: No UPN claim was found.
I have configured claim rules according to https://learn.microsoft.com/en-us/exchange/clients/outlook-on-the-web/ad-fs-claims-based-auth?view=exchserver-2019#step-2-deploy-an-ad-fs-server
with: c:Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY" => issue(store = "Active Directory", types = ("http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"), query = ";objectSID;{0}", param = c.Value);
and
c:Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY" => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"), query = ";userPrincipalName;{0}", param = c.Value);
https://redd.it/1d0zmtx
@r_devops
In AWS CodePipeline the special character & is being printed as &amp;, please help
So I'm creating a pre-signed URL in a CodePipeline post-build step. The problem is that the signed URL is being malformed: & shows up as &amp; in the logs, and because of that it's not working.
I tried using double quotes, sed, printf, and echo, but it's still the same issue.
I tried saving the generated pre-signed URL to a .txt file and then cat-ing the file, with the same result; but when I download that .txt file there is no problem, i.e. the & is represented as-is. So I'm not sure if this is a bash issue or a CloudWatch Logs issue.
The use case is that I want an S3 object we can access directly from CodePipeline using an S3 pre-signed URL.
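For comparison, here is a hedged, buildspec-style sketch (the bucket, key, and artifact name are placeholders) that generates the URL with the AWS CLI and keeps it as a build artifact rather than relying on what the log console renders:
version: 0.2
phases:
  post_build:
    commands:
      - URL="$(aws s3 presign "s3://my-bucket/my-object.zip" --expires-in 3600)"
      - printf '%s\n' "$URL" > presigned-url.txt   # the raw & survives in the file, no HTML escaping
artifacts:
  files:
    - presigned-url.txt
If the downloaded artifact contains a plain &, the URL itself is fine, which would suggest the &amp; is only the log viewer HTML-escaping the output, exactly what your .txt download test already hints at.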
Please let me know if you have faced a similar issue and how you fixed it!
Thanks!!!
https://redd.it/1d0zliv
@r_devops
Seeking Opinions and Advice on DevOps Side Projects
Hey everyone,
I'm a DevOps engineer looking to start a side project that I can eventually monetize, and I would love to get some input from the community. I have a few ideas and would appreciate any kind of feedback about them. Here they are:
## 1. AWS Landing Zone with Terraform Community Modules
Project Idea: I’m thinking of creating a solution for setting up AWS Landing Zones using Terraform community modules. The idea is to streamline the deployment of AWS environments by leveraging Terraform and existing community modules. This would cover essential components like organizational units, accounts, networking, security, logging, monitoring, and shared services. Additionally, I'd like to integrate AWS Control Tower to provide a baseline environment, ensuring governance, compliance, and security.
Orchestration Tools: I plan to use Terragrunt or Terramate for orchestration. I have experience with Terragrunt, but I’m aware that the same company (Gruntwork) already offers similar services, which might be a potential issue.
## 2. AWS Landing Zone with Pulumi (TypeScript and YAML)
Project Idea: This idea is about developing a solution for setting up AWS Landing Zones using Pulumi, written in both TypeScript and YAML. Since there's not much out there in terms of community modules, I'd be writing most of the code from scratch. The flexibility of using Pulumi is attractive, and it would include robust infrastructure as code examples in both TypeScript and YAML. It would require a bigger effort due to the smaller community and lack of existing modules. However, it could integrate well with the SST (Serverless Stack) project, which uses Pulumi under the hood, for serverless applications.
## 3. Hetzner Complete Kubernetes Cluster Setup with Rancher, Epinio, OpenWhisk, and Cloudflare
Project Idea: This project would provide a complete setup for Kubernetes clusters on Hetzner using Rancher for cluster management, Epinio for application deployment, OpenWhisk for serverless functions, and Cloudflare for enhanced security and performance. The goal is to offer a more PaaS-like solution, giving developers a ready-to-use environment. Rancher is a practical and developer-friendly end-to-end solution, allowing for centralized management of multiple Kubernetes clusters, security policies, and consistent policies across environments. By using different Hetzner Cloud projects in conjunction with Rancher clusters, you can achieve a robust and isolated setup similar to AWS accounts.
Cloudflare Services: To enhance this setup, Cloudflare offers services like DDoS protection, web application firewall (WAF), SSL/TLS encryption, CDN for performance, S3-compatible storage for logging and backups, static site hosting, and serverless functions with Cloudflare Workers.
Infrastructure as Code: I’m considering using both Pulumi and Terraform to spin up these environments, but I'm struggling to pick the right IaC tool, for the same reasons mentioned above.
## Questions for You:
1. What are the potential challenges and pitfalls I should be aware of for each idea?
2. How can I differentiate these projects from existing solutions?
3. Do you see a demand for these solutions? If so, which one do you think has the most potential?
4. What are some effective ways to engage and build a community around these projects?
5. For the Terraform-based ideas, which orchestration tool would you recommend: Terragrunt or Terramate, and why?
6. What are some ways to monetize these projects that would not negatively impact the community?
Looking forward to hearing your thoughts and advice. Thanks in advance for your help!
https://redd.it/1d0wqvh
@r_devops
FTP into a dockerized nginx container
Hi guys,
I am setting up various nginx nodes that should be in permanent synchronization.
I'll handle the synchronization from my app, but I need to FTP into these containers to do the magic. With this Docker Compose file, is it possible, with some adjustments, to FTP into these existing containers? For example, how would I configure an FTP username/password?
Thanks
services:
  nginx-node1:
    image: nginx:latest
    container_name: nginx_node1
    volumes:
      - node1data:/usr/share/nginx/html/storage
      - ./nginx/node1.conf:/etc/nginx/nginx.conf
    ports:
      - "8081:80"
  nginx-node2:
    image: nginx:latest
    container_name: nginx_node2
    volumes:
      - node2data:/usr/share/nginx/html/storage
      - ./nginx/node2.conf:/etc/nginx/nginx.conf
    ports:
      - "8082:80"
  nginx-node3:
    image: nginx:latest
    container_name: nginx_node3
    volumes:
      - node3data:/usr/share/nginx/html/storage
      - ./nginx/node3.conf:/etc/nginx/nginx.conf
    ports:
      - "8083:80"
volumes:
  node1data:
  node2data:
  node3data:
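It may be easier to run an SFTP (or FTP) sidecar that mounts the same named volume than to add an FTP daemon inside the nginx containers: plain FTP in containers is awkward because of passive-mode port ranges, while SFTP needs only one port. A rough sketch for node 1 only, added as an extra service under services:, assuming the commonly used atmoz/sftp image (the username, password, uid/gid, and host port are placeholders you would set yourself):
  sftp-node1:
    image: atmoz/sftp
    command: "syncuser:changeme:1001:100:html"   # user:pass:uid:gid:dir; creates /home/syncuser/html
    volumes:
      - node1data:/home/syncuser/html            # same named volume nginx-node1 mounts at its storage path
    ports:
      - "2221:22"
Your app then pushes files over SFTP to port 2221, and nginx-node1 sees them immediately under its storage mount, since both containers share the same volume.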
https://redd.it/1d0jnhe
@r_devops
I scraped all DevOps Interview Questions for Meta, Amazon, Google, Yahoo... here they are..
Hi Folks,
Some time ago I wrote here on r/devops about my Google SRE / Systems Engineering interview experience and the questions they asked.
For the past month I have been scraping interview questions for Amazon, Google, Meta, Netflix, Yahoo, Cloudflare, Accenture, etc. from various sources, filtering the useful questions (IMHO) and rewriting them in more detail with solutions.
I'm publishing them here: https://prepare.sh/engineering/devops (if you have issues with login, please clear your cookies)
I will also keep adding companies/questions until there are around 50+ top companies with their interview questions, so it's a work in progress. If you find this type of content useful and want to help me with code/content/etc., please DM me :)
https://redd.it/1d0honl
@r_devops
Pipeline deployment strategy
I have a YAML pipeline that currently hard-codes the parameters file passed to my template.
The build and deploy work fine, but it doesn’t scale to the next parameters file. I want to deploy the same template but pass different parameters.
I am trying to wrap my head around how this is to be done. Is the right strategy to have a pipeline per infrastructure component or is it better to pass the parameters file as a parameter field?
I started down a path of when a new bicep param ends up in a specific folder it triggers the build, but this seems overly complicated.
Is there a better way to handle this?
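If these are Azure DevOps YAML pipelines deploying Bicep, one low-overhead option is a runtime parameter for the parameters file, so the same pipeline (or a shared template) deploys any component. A hedged sketch (the service connection, resource group, and file paths are placeholders; it assumes .bicepparam files whose `using` line points at the template):
parameters:
  - name: paramFile
    type: string
    default: params/dev.bicepparam
pool:
  vmImage: ubuntu-latest
steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: my-azure-service-connection   # placeholder service connection name
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # the .bicepparam file names its template via `using`, so only this parameter changes per run
        az deployment group create \
          --resource-group my-rg \
          --parameters ${{ parameters.paramFile }}
A pipeline per infrastructure component also works, but it tends to multiply near-identical YAML; a single parameterized pipeline, or a shared template that thin per-component pipelines call, usually scales better than a folder-trigger scheme.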
https://redd.it/1e8emm6
@r_devops
What IaC solution should I use for my company's use case?
I am newer to the DevOps world, and have moved from a developer role into a more technical DevOps role. Part of my task in this role is to help decide the company's future with IaC.
# Needs/Use Case
* Be able to spin up and tear down mainly Windows VMs, and potentially other infrastructure, in Azure (and potentially also VMware) using YAML pipelines
* Have the ability to configure these machines into a known state for testing large on-premises applications
* this means being able to install applications, both 3rd-party and internal apps/builds, on the machines (which often means the VMs need to be able to restart to finish certain installs)
* configuring all sorts of settings (firewall, registry, users/groups)
* Be able to then delete these VMs and all associated resources
* Keep in mind these steps will run on every test run in an Azure DevOps YAML pipeline (CreateVMs -> ConfigureVMs for functional tests -> Run Functional Tests -> Delete VMs)
* so something with low overhead would be great
* Be able to be templatized within Azure DevOps pipelines to allow a custom interface for creating infra; the company does not want to give developers **(our end users in this case)** direct access to all possible infrastructure
# Current Approach
* Currently, someone wrote an internal tool in PowerShell that leverages the az CLI to create and delete infra in Azure
* For configuration, there is a whole custom PowerShell engine that requires a remote agent for each VM it configures, and it performs a bunch of custom configuration steps over PSSession to install and configure things on the remote machine.
* Limitations of this approach (why we need to change)
* The current PowerShell-based configuration engine was written by one person who no longer actively works on it, and when anything breaks it can be very confusing to figure out how to fix it
* For the same reason, it's hard to add more features
* It doesn't scale particularly well, as we need one configuration agent for each machine we are configuring (and there can often be tens to hundreds of machines needed for a full test suite)
# My Questions/Thoughts
* What tool(s) would you recommend for this use case?
* Should we stick with our custom tooling? (Because I come from a development background, I have the ability to rewrite and simplify the engine to make it potentially scale better and be easier to extend going forward)
* I don't have a real understanding of whether our use case is what IaC is typically used for, so do tools like Terraform, Ansible, Pulumi, etc. support this use case? And if so, which would you recommend?
* Whatever we decide, I want it to be able to scale to a large number of VMs (without needing an agent on every machine we want to configure) and to be easy to maintain, both from the DevOps side and from the developers' side, since they need to write their own configuration
* Most of our devs are C++ or .NET developers and can really struggle with complex yaml
Thanks in advance for any feedback! I am really just trying to learn what the industry standards for these types of things are so we can be on the "happy path" rather than trying to fight an uphill battle.
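For what it's worth, the "no agent per VM" requirement maps fairly naturally onto Ansible: it configures Windows machines over WinRM (or SSH) with nothing installed on the target beyond WinRM itself, and it can reboot a machine and carry on. A hedged sketch of what a playbook might look like (the host group, share path, and registry values are placeholders, and connection credentials are left out):
- hosts: test_vms
  gather_facts: false
  vars:
    ansible_connection: winrm
    ansible_winrm_transport: ntlm   # user/password/port omitted from this sketch
  tasks:
    - name: Install a third-party MSI
      ansible.windows.win_package:
        path: \\fileshare\installers\app.msi
        state: present
    - name: Reboot if the install needs it
      ansible.windows.win_reboot:
    - name: Set an application registry value
      ansible.windows.win_regedit:
        path: HKLM:\SOFTWARE\MyApp
        name: Mode
        data: test
Terraform, Bicep, or Pulumi would then cover the create/delete half (they are provisioning tools rather than configuration tools), so a common split is Terraform or Bicep for the VMs plus Ansible for the in-guest configuration, both driven from the Azure DevOps YAML stages you already have.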
https://redd.it/1e8cjah
@r_devops
Environment Inventory
I am looking for a tool that my team can use to house dev and test environment details: links, status, associated database details, and anything else a client or engineering team needs to know. Those teams rely on our in-house solution as a kind of landing page for accessing environments. Something customizable and self-hosted.
Our current in house solution uses a source controlled YAML file per environment. It works but is difficult to maintain.
Anyone use something like this that could recommend an alternative?
https://redd.it/1e87buz
@r_devops
Managing secrets, certs and other sensitive data
What tools are you using for managing secrets, certs, and other sensitive data? How did you go about implementing them, and what were some of the lessons learned along the way?
https://redd.it/1e83fbo
@r_devops
CI/CD configs IN App Repos?
Do you keep CI/CD configs in the same repo as your application / service code where devs can manage them? A few teams in my org have recently started using CircleCI on their own and set up their own pipelines in each app repo. I can understand if it was just for building or pre-deploy stages that are more application specific, but these are full CICD pipelines. They aren't consistent across the repos now which makes troubleshooting a nightmare, and I've also found that some of our standard SDLC steps like linting, validation, testing, vulnerability scanning, and so on are missing. Not to mention skipping review requirements and dual approval. There is nothing stopping someone from adding a pipeline that just deploys straight to production. I raised these concerns with our head of engineering who argued that it is necessary to empower the devs to ship as fast as possible. Am I making a stink for nothing?
https://redd.it/1e7w0p6
@r_devops
Best Docker & Kubernetes course on Udemy?
I have an organizational account, which means all courses are free to enroll in.
I'm a security researcher looking to build enough knowledge and know-how that, at some point, I'll also be able to understand the security aspects of Docker and k8s and look under the hood.
https://redd.it/1e7tpx3
@r_devops
Terraform Certifications?
I am looking to learn terraform and possibly get a certification if there is such a thing. Anyone have any suggestions?
https://redd.it/1e7l496
@r_devops
Advice on Running SAST and DAST with Veracode in Azure DevOps Without Access to Client's Source Code
Hi everyone,
I'm working on a project for a client where we need to run SAST (Static Application Security Testing) using Veracode. The client has provided the necessary endpoints for the DAST scan, and that part is straightforward. However, I’ve hit a snag with the SAST.
The client wants to integrate Veracode into their Azure DevOps pipeline but is not willing to share the source code with us. This brings up a few questions and concerns:
1. **Is direct access to the source code required to integrate Veracode with Azure DevOps and run SAST?**
2. **If the source code is not required, what are the alternative approaches to perform SAST under these conditions?**
3. **What specific type of access do I need in Azure DevOps to set up and configure Veracode for running SAST?**
* I assume I might need Project Administrator access to configure pipelines, deploy, and install/configure the Veracode extension, but any confirmation or additional insights would be helpful. If they're not okay with giving us admin access, what are the alternative roles?
Any advice or insights from those who have navigated similar situations would be greatly appreciated!
Thanks in advance!
https://redd.it/1e7cbjn
@r_devops
Monitoring/APM tool that can be self hosted and is relatively hassle free
We are migrating to OTel and reviewing our current observability vendors for potential replacements and/or consolidation. We have a combined 7 figure bill and have substantial metrics/logs (but no traces yet) volume. While we migrate and set up POCs with vendors, I wanted us to also have something hosted on our infra we can use internally (probably internally only on my team while we navigate the OTel implementation).
There seem to be so many observability tools now, which is a good thing. I'd like to find one that is OSS, easy to set up and maintain, and supports OTel. Features are a simple UI that we can use to easily query and that can clearly show the data. Also, we don't use k8s, mostly EC2 VMs and ECS containers. Any recommendations? Thanks!
https://redd.it/1d15dct
@r_devops
Do IT infra jobs offer remote work as much as dev jobs?
I've been considering making a switch to IT infra work, basically AWS, Terraform, Kubernetes kind of stuff. But currently I'm enjoying remote work as a dev, and I hope I can continue working remotely after the switch.
I'm not opposed to working on-site; it's just that I'd need to make some adjustments to my life commitments if I did.
https://redd.it/1d0zppg
@r_devops
REQUEST: Advice for a Windows guy coming to Linux - alternatives to PowerShell/Groovy?
**BACKGROUND**
I've been a (Windows) software developer a long time (decades) and now work, by choice, in DevOps
My responsibility has been a set of important corporate legacy apps running on Windows (.Net/IIS/SQLServer)
Everything is on AWS and deployed via Jenkins/Terraform/Ansible/PowerShell scripts, all under source control
I'm dealing with Linux systems more and more as components of the old system are rewritten/replaced
**CURRENT ISSUE**
I'm finding writing Linux BASH equivalents of Windows PowerShell scripts frustrating
BASH feels like Windows Command line (BAT) in comparison to PowerShell. Ancient, arcane, not a proper "language"
Having learned Groovy through its use in Jenkins it seems like a great option. Modern, proper "language". How can BASH compete with the power of {closure}.delegate?
But it looks like Groovy has had its day and other than Jenkins isn't widely used. r/Groovy is nearly dead
(I also have the prospect of our DevOps stack being moved to GitLab in future)
**What would you suggest is a way forward that maximizes my learning effort? (And *my* learning becomes my teaching for juniors in the department)**
**A) Learn BASH**. It IS a proper language once you understand it. It's never going away, and the skill will always be in demand
**B) Use Groovy**. Rumours of its demise are greatly exaggerated.
**C) Get PowerShell on the Linux nodes** It's cross-platform for this exact reason. It's 'widely' used (in both Windows and Linux world) The Linux DevOps might roll their eyes but f&*k those insular assholes
**D) Learn "XYZ Language"** (eg PYTHON?) - that's the thing that fills this gap you're describing and will remain so for a decent amount of time. How did you not know this?
**E) Something else**... tell me, eg "Stop thinking like a developer... DevOps coding is different to Application coding" (we're gonna have an argument about this are we? ;)
Because this is Windows v Linux I'm expecting trolls, but I really appreciate any serious replies
https://redd.it/1d107lj
@r_devops
How to prevent merging if test cases have failed, using Jenkins and GitHub branch protection rules
I am using a Jenkins pipeline to run test cases with the Ceedling and Unity frameworks because my code is in C. If the test cases fail, I want to prevent that branch from being merged into the GitHub repo.
https://redd.it/1d0wqdt
@r_devops
Should I make the move to SWE!?
I've been an SRE and DevOps Engineer for 5 years. I make good money, but after browsing r/salary and seeing some of the salaries that SWEs are making, I'm considering joining a bootcamp... am I overthinking this!?
https://redd.it/1d0p53n
@r_devops
Tips to broaden Knowledge
Currently I am working as a DevOps engineer, but I want to step back from my day-to-day work and learn the fundamentals and get general know-how about:
kubernetes
IaC (like Terraform)
Cloud - I have a little experience with GCP and AWS
Any other tools in this space
I've tried reading some books on these subjects to try my luck with certifications, but I believe a hands-on approach would be better in the long run.
I also have a Raspberry Pi for learning with minikube, but I don't know whether working locally is a good way to start my learning.
Do you have any tips?
https://redd.it/1d0k8fk
@r_devops
Best company to provide Azure infra
Hi all,
my company needs to build new Azure infrastructure. Which company do you recommend for the job?
https://redd.it/1d0dc7b
@r_devops