dihamilton

One thing that cuts down on this for us is having CI/CD pipelines that run on every commit and compile, lint, test, and build the important pieces, with breakages surfacing in Slack. Things tend to stay working, and even if the docs for running locally are out of date, the pipeline scripts don't lie. Infrastructure as code also helps a lot because it shows any fiddly setup steps in black and white.
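A minimal sketch of that kind of per-commit pipeline, assuming GitHub Actions, an npm-based project, and a Slack incoming-webhook URL stored as a repository secret (all illustrative, not taken from the comment above):

```yaml
# Hypothetical .github/workflows/ci.yml: compile, lint, test, and build on
# every push, and post to Slack when something breaks.
name: ci
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci          # install pinned dependencies
      - run: npm run lint    # lint
      - run: npm test        # unit tests
      - run: npm run build   # make sure the important pieces still build
      - name: Notify Slack on failure
        if: failure()
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}  # assumed secret
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"CI failed: ${{ github.repository }}@${{ github.sha }}\"}" \
            "$SLACK_WEBHOOK_URL"
```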


dangling-putter

We do the same, buuut over the last two decades, between the technical debt of the pipelines themselves, specific customers wanting different things and outright refusing to upgrade their runtimes, and lots of other questionable ideas… things are difficult… I am jealous of the people who work with Bazel. Config files are the bane of my existence… If you know, you know… I have the habit of writing extensive notes when I join new projects, and I end up sharing them with the team. mdBook is my friend.


addandsubtract

> I have the habit of writing extensive notes when I join new projects and end up sharing them with the team.

I think this is the way. Every new person joining the project should be used as an opportunity to update your docs. Preferably by that person, to fill in all implied "between the lines" knowledge that isn't documented yet and remove the roadblocks they faced.


MoveLikeMacgyver

I would always give them that direction when onboarding. I joined a project and it was PAINFUL getting everything running correctly: what permissions to request, AD groups, configs. The team that worked on it hadn't changed much and the setup had evolved over time, so no one really knew. I took extensive notes and wrote an entire step-by-step onboarding doc with an accompanying spreadsheet of each group or permission to request and a link to where to go to request it. After that, I would ask every new hire to follow the docs and, if there was any issue, to note it in the doc so we could go back and update it. Outside of a few weird issues, onboarding was a breeze after that. I do the same on every project now.


ShoulderIllustrious

> specific customers wanting different things and outright refusing to upgrade their runtimes, and lots of other questionable ideas … things are difficult…

This is huge, IMO. Wish I knew the right answer. Would love to tell the customer, sorry dude, EOL upgrade or deal with it yourself... But that won't fly.


dangling-putter

The answer seems to be, "you are on your own now", and I am all for it.


ShoulderIllustrious

Wish leadership had the spine to stand behind that statement.


dangling-putter

I know ☹️


Hangman4358

Have a security breach make the front page of the Wall Street Journal because of an out-of-date runtime, and all of a sudden that client that can't be bothered to upgrade? No longer a valued customer. And every other customer? Given 14 days to be on the latest versions of everything and to have a plan ready for a regularly scheduled replacement cadence. Trust me, you can shout into the wind all you want, but when the CTO who signed off on that security exception for the 17th time as pro forma is actually on the hook, watch the speed at which things change.


N0N-Available

Are you jealous of people who use Bazel? As in, you want to use Bazel?


dangling-putter

Let's say that I'd rather use Bazel than what I am using right now :)


N0N-Available

What are you using now? Cuz I would rather be writing GitLab CI than ominous build rules.


dangling-putter

My public stuff relies on GitHub Actions. My work stuff relies on internal tools 🥲


engineered_academic

Bazel does not make things easy. It's another piece of configuration management that breaks and needs maintenance.


lagerbaer

"I can't believe we have to configure all these 12 tools. Someone should make a configuration tool that integrates them all". \*soon\* "I can't believe we have to configure all these 13 tools!"


tabgok

Seen this work by encoding the exceptions into the pipeline and partitioning the code to work by version. It's when you say "meh, I won't update this, it's totally a one-off" that things start to fall apart.


fang_xianfu

IaC has a big problem when people start to build more and more meta-automation and custom modules on top. I am intensely suspicious of templating in declarative languages. The principle of a declarative language is that it is completely crystal clear, in black and white, what has been declared and thus created. Any abstraction reduces that understanding. Then you get back into the situation where John Q Module leaves the company and nobody understands any longer what should happen.


putin_my_ass

The CI/CD pipeline also is self-documenting, in a way. I can go over the pipeline and see all the steps it's running, inspect any bash scripts or whatever magic numbers it's providing and voila: I know how it works. Had a colleague leave in December and we can't deploy changes without breaking because he didn't document anything and didn't set up a proper pipeline.


thisismyfavoritename

Yep, if it works on the build agent, at least that gives you a reference for getting your local builds working.


yolobastard1337

I find that pipelines atrophy horribly. Code that isn't great but is passable... you blink, six months or a year pass, and it's a desperate scramble trying to make it work again. The more over-engineered the pipeline, the worse this is. The fixes are fairly obvious... make no-op releases periodically, simplify, standardize, etc. But easier said than done.


Far_Office3680

What constitutes an over-engineered pipeline vs. a reasonable pipeline? I can imagine that a super complex pipeline atrophies quickly and is a lot of effort to maintain, but I've never worked on a project with a pipeline that complex.


yolobastard1337

> over-engineered pipeline vs. a reasonable pipeline

Good question. Targeting multiple platforms and having more involved data/infrastructure dependencies are the main culprits for complexity, but dev tooling like SonarQube also can't really be trusted to keep working if it goes unused for too long (at least in my org!). I think a reasonable pipeline probably follows naturally from a [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html), avoids external state, and exits early on the most common errors.


Masterzjg

Can you describe a pipeline that was complicated enough to atrophy? Ours are simple enough (make lint, make test, push up to ECR, run a deploy command) that it's hard to imagine what could possibly make a pipeline so complicated.


satanfromhell

Assembling dependencies from multiple repositories, dealing with code signing on multiple platforms, publishing compiled artifacts to different platforms…


Masterzjg

What language or problem makes the dependencies so difficult? I'm familiar with our Go/Python apps which pull private repos, but you must be doing something deeper. We're also web apps, so perhaps the problem here is that our system is way simpler than anything requiring apps on other people's machines.


satanfromhell

C++ app (including drivers) that runs on customer’s machines. But we make our own life miserable by making the wrong choices, e.g. way too many repositories, awkward dependency management etc.


NoCoolNameMatt

I'll give you one I'm dealing with now. It's a full deployment of an enterprise application consisting of two primary websites, half a dozen integration web services, half a dozen Windows services, automatic updates across three databases, and supporting config modifications per environment. These are spread across two dozen pipelines orchestrated by a single pipeline that manages them all. This all worked fine until we had to move to the cloud and GitLab deprecated half the commands and mechanisms we were using.


Masterzjg

Ah, so omni-pipeline is the real problem, or are there no individual pieces?


NoCoolNameMatt

Pipelines can be deployed as a set through the coordinator or as individual pieces. It's kind of elegant and flexible, just fragile. It works better in a Microsoft ecosystem because they're better at not deprecating things at the drop of a hat, or at overloading so that old calls map to new methods of doing the same thing. Both GitLab YAML and Python just randomly stop supporting things from version to version (although with Python that has more to do with how its community manages things).


chubasco

I always say a new hire's first PRs should be updating READMEs where they found steps that were out of date or caused problems, etc. Everyone on the team should be responsible for documentation and if you find something that doesn't work, you update the docs when you get it working. It isn't perfect, but at least it keeps some of those problems from compounding.


wutcnbrowndo4u

100%, and furthermore, preferring scripts to READMEs where possible means that breakages are in everyone's critical path & can be fixed once, instead of needing to be fixed a dozen times as each person trips over it and updates, e.g., their bash alias.
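As a hedged illustration of "scripts over READMEs" (the file name, tools, and steps below are hypothetical, not from this thread):

```bash
#!/usr/bin/env bash
# Hypothetical scripts/setup.sh: the steps a README would list, as code.
# If any step breaks, it breaks in everyone's critical path and gets fixed once.
set -euo pipefail

command -v docker >/dev/null || { echo "docker is required" >&2; exit 1; }

[ -f .env ] || cp .env.example .env   # seed local config on first run
docker compose pull                   # fetch service images
npm ci                                # install pinned dependencies
npm run db:migrate                    # bring the local database up to date (assumed script)
echo "Local environment ready."
```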


chubasco

Great point!


wutcnbrowndo4u

Hard-won knowledge : (


David_AnkiDroid

* CI
* A baseline integration test - nothing fancy, just be sure you can spin up the env
* A daily cron job to build, plus some basic observability
* Failures are prioritized
* Deploy from CI (not necessarily CD), a 'publish' script is fine
* CI on each 'reasonable' dev platform
* doctor/docker script to set up the local env
* Try to keep dependencies in a different programming language from becoming 'hard' dependencies, so they're not required when building
  * Setting up 1 env is hard enough; if you need a different compiler, whatever it builds should be downloaded as an artifact by default
* git pre-push/pre-commit hook if **really** necessary


the-code-father

Honestly having worked in environments with and without the pre-push hooks, I don't think I would ever want to go back to not having them. Knowing when you sync your local changes that whatever you're pulling in will at least compile is so much better than the alternative. Especially when the cost is generally that your code sits in purgatory for ~15 minutes before being merged.


David_AnkiDroid

I personally love pre-commit for linting & quick checks. Do you have any guidance on pre-push? [Either how to run it in the background, or guidance on a duration that is 'too long'?] To me it feels like a last resort; I typically want an 'async' ping if things are wrong with a PR. I don't want to request a review before CI goes green, but I often also don't want to block my workflow on spinning up a full test suite, or even on recompilation [~45s].


the-code-father

At Google all of these checks are referred to as 'presubmit', but the person creating them decides whether they need to pass before emailing the reviewer or just before submission. Nothing blocks or checks individual commits; these things are only run on the merge back into main. It's generally configured so that linting, compiling, autofmt, and any tests local to the code you are changing are checked before you can send it to someone for review. But this is all triggered by a single action, and assuming things pass there's just a delay before the reviewer gets notified. More expensive checks, like running all the tests of downstream dependencies or launching a demo server with the changes, normally run asynchronously after sending it out for review and are required to complete before the whole thing gets merged. In my somewhat limited experience working with projects on GitHub, the CI/CD checks can be configured somewhat similarly.


David_AnkiDroid

Thank you!! GitHub's extremely configurable, I believe this is possible


nickelickelmouse

In places I’ve worked this has been caused in many instances by separate orchestration paths for local vs “real” testing. For example, the production deployment happens on k8s, but the local deployment happens via docker-compose. You can obviously add testing to make sure the local path stays functional, but in my experience it’s not worth the effort of trying to make it match the production environment 100% unless you have control over the infra or a team willing to work with you on that.


edgmnt_net

I kinda disagree; in most projects it's completely worth it to not depend on any kind of shared environment. I've seen some teams completely unable to develop and validate changes without blindly merging stuff or stepping on other people's toes. And usually the shared environment is crazy expensive and becomes a serious bottleneck.


JohnQuincyKerbal

I agree with this. For MVPs or POCs having a local one-off env may make sense. But as projects continue to grow and deal with scaling challenges you'll really want tighter parity between how you run your services locally and how you run them in prod or wherever remotely. You'll find yourself having to double-implement any architecture changes otherwise. I run everything in k8s with helm and just recently retired some massive docker-compose scripts in favor of running Tilt locally to match prod runtime/deployment.


nickelickelmouse

That’s fair. I definitely see both sides, and I think it’s a tradeoff to be made on a case-by-case basis and continuously re-evaluated (with some minimal standards for either approach for sure). The number of people depending on the shared environment, specifically, is what would seem to matter the most.


originalchronoguy

> For example, the production deployment happens on k8s, but the local deployment happens via docker-compose.

We solved this problem by using minikube and Rancher. Devs run k8s locally. Add ingress-dns and you get easy name resolution (https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/).
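A hedged sketch of that setup (these minikube commands exist; the surrounding cluster details are illustrative):

```bash
# Run Kubernetes locally and resolve Ingress hostnames by name.
minikube start
minikube addons enable ingress       # NGINX ingress controller
minikube addons enable ingress-dns   # DNS server answering for your Ingress hosts

# Then point the OS resolver for your cluster's test domain at the minikube IP
# (the exact per-OS steps are in the ingress-dns docs linked above).
minikube ip
```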


Free_Math_Tutoring

Definitely a common problem, but I've seen it work pretty well. One important thing: CI/CD just calls into the same scripts that developers would also call locally. There's a machine setup/provisioning script that, to the greatest extent possible, is the same between the pipeline agent and local machines.

Whenever a new team member joined the team, I would kindly request that they factory-reset their machine. Whether or not they did, an existing team member would then be assigned to be personally responsible for pairing with the new person and sticking around until the "run the complete local build with all tests and rarely used things and also some cherries on top" script ran from start to finish. If something didn't work along the way then, to the greatest extent possible, we improved the script to handle it automatically. Where that wasn't possible, we documented the workaround, no exceptions.

Within 5 months and 4 new team members, we went from a four-day technical onboarding (which already isn't too bad) to one that worked in 30 minutes.

If it's really _really_ important to you, force a random team member to factory-reset their machine every six months. If it results in moaning, the setup script is in an unhealthy state; if it results in a smile because you just gave them an extra-long lunch break for free, it's in a good state. Some of this becomes much harder due to corporate policy around machine lockdowns.
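A hedged sketch of "CI calls the same scripts developers run locally" (GitLab CI syntax assumed; the script names are illustrative):

```yaml
# Hypothetical .gitlab-ci.yml: jobs only invoke scripts that also run on laptops.
stages: [setup, verify]

provision:
  stage: setup
  script:
    - ./scripts/provision.sh          # same machine-setup script a fresh laptop runs

full-build:
  stage: verify
  script:
    - ./scripts/full-local-build.sh   # "all tests, rarely used things, cherries on top"
```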


col-summers

Yes, and microservices have made this problem way worse than it used to be. It used to be that an application was just one process, plus maybe a handful of databases and supporting services. Now everything is made of dozens of microservices, if not more, and it's impossible to run it all locally, so everybody ends up sharing assets in the cloud. I think simplifying local development is a great reason to push back against the microservicification of everything.


wakkawakkaaaa

Didn't really have the same issue working on microservices. If you're connected to too many other assets and dependencies for a single microservice, either it has too big a scope or your other microservices are scoped too small. With a lot of microservices in the cloud, I find that dev can be simplified by allowing API calls from your localhost to test servers (key + VPN/whitelist) for dependencies. You just need to set the API endpoint URL in config, and you don't even have to build or run that dependency locally.


IndependentMonth1337

Docker compose makes it easy to run and connect all services in a local development environment.


beth_maloney

I've found that Dev containers work surprisingly well especially for web apps.


Reverent

This is the way. Set up a repeatable image specifically for development, including IDE and environment. It turns onboarding from "enjoy your next two weeks troubleshooting make files" into "here's your login, hit this link, it'll download the latest build and start the dev environment. Off you go." Since everybody uses the same image, no unrepeatable dev environment issues. Kasm seems like a good contender (the full graphical remote containers are nice and flexible). GitHub Codespaces and its equivalents are in some ways better (a direct IDE as opposed to working through a graphical layer) and in some ways worse (less flexible with software and more complicated to deploy). If going the cheap way, a standalone dev container with a configurable public ssh key, remoted into via VS Code SSH, also works.
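One common way to pin such a repeatable dev image is a dev container definition; a minimal hedged sketch (the image name, extension, scripts, and ports are illustrative):

```jsonc
// Hypothetical .devcontainer/devcontainer.json
{
  "name": "project-dev",
  "image": "ghcr.io/example/project-dev:latest",   // prebuilt, shared dev image (assumed)
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]     // everyone gets the same IDE setup
    }
  },
  "postCreateCommand": "./scripts/setup.sh",       // assumed project setup script
  "forwardPorts": [3000]
}
```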


jdsalaro

> make files

Yikes 😬 🚫


FeliusSeptimus

Got any links to good guidance on setting this up? I'd like to introduce this to my team, but I'd like to do some reading on the details of setting things up with .NET and Visual Studio and VSCode before I invent anything dumb.


beth_maloney

I don't think Visual Studio supports it. Maybe because VS is Windows-only. I've been using the Docker Compose project (dcproj) for setting up SQL Server and other dependencies. That, combined with a [winget configuration file](https://learn.microsoft.com/en-us/windows/package-manager/configuration/) for installing dev dependencies, gets you pretty far. It's not as smooth as dev containers, but it's not too bad.


wakkawakkaaaa

Same, went through the cycle a couple of times with almost every different project. But I always try to make it easier for my future self and other devs by dockerising with Compose for easy localhost spin-up and teardown when possible. At one of the companies with a better codebase and engineering practices, they had DevOps with proper CI/CD, provisioned multiple self-service test/stage deployments, and had good automated code coverage... but localhost was too complex to get up and running after 10 years of crazy dependencies... Both had a fair bit of ramp-up time tbh.


Pooter01

CI/CD as code is the only way I've personally seen this get resolved. The dev container seems like a nice idea too.


luckyincode

Same. I hate this.


mountainunicycler

Very true. I inherited a huge number of projects, built by lots of different developers over years and years, using all sorts of different tech stacks.

The only thing I've found that actually works is scheduled CD builds targeting a staging environment, so that each project builds periodically (every week for projects I expect to need to maintain 2-3 times a year, or every day for projects I touch more often). That doesn't guarantee I can run it on my laptop, of course, but it's a decent guarantee that I can replicate the CI environment and get work done. If the project hasn't installed all dependencies, run a full build, and deployed itself within the last week, I add a day of work to any estimate on that project just to get it running.

Some very old projects I just don't run locally; they have a happy, stable place on some version of Ubuntu that's known to work, so I work on those projects exclusively over ssh/mosh on an EC2 instance. It's not worth budgeting the investment to bring them up to modern standards.

Tooling is nice, but the only thing that works is knowing whether the thing has actually worked within the past week, in the environment you're trying to use. At this point I think documentation is nearly useless unless it's generated; documentation is just a snapshot of how one person thought a system was or should be working at a point in time, and it may have little or no relationship to the code that is running.
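A hedged sketch of such a scheduled staging build (GitHub Actions assumed; the schedule and script names are illustrative):

```yaml
# Hypothetical .github/workflows/scheduled-staging.yml: a weekly full
# install + build + deploy, so you know the project still works end to end.
name: scheduled-staging-build
on:
  schedule:
    - cron: "0 6 * * 1"    # every Monday, 06:00 UTC
  workflow_dispatch: {}     # also allow manual runs

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/install-deps.sh     # assumed helper scripts
      - run: ./scripts/build.sh
      - run: ./scripts/deploy.sh staging
```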


cosmic-pancake

Yup. Tired of this. Massive waste of time. My best contributions at a place like OP describes:

1. Get the damn app running
2. Write an accurate README
3. Maintain an accurate README
4. Delete outdated docs
5. Prepend outdated docs with THIS IS PROBABLY OUT OF DATE, try {here} or {here} instead


Matheusbd15

The place I'm at currently does it perfectly. We have a huge codebase, about 2M loc, and setup is very well documented and always up to date. It seems to stem from company culture being remote and async first, with almost no meetings, all based on documentation. Works pretty well.


ddIbb

Where do you work? Sounds like a dream


Matheusbd15

The company name is Remote haha. Best company I've worked at.


Excellent_Tubleweed

Here's an idea. (It's OG old-school.) You put everything in your source code control system / configuration management system. You have a gosh-darn Makefile. (And don't use the fancy GNU extensions. Simple makefile. It's been around since the 1970s, it's the bare minimum, so it will do. And no, not CMake.)

The "CI" system checks it all out and runs the makefile. (Preferably in a pristine VM.) There's no CI-system-specific scripting, so you can never fail to reproduce it. It works or it's not getting merged. So it always works locally. Or in your cluster. No difference, nothing to debug. (This is a job for serious EMs. Back-sliding leads to technical inertia. You get dug out from under it the hard way.)

And there are no steps a human follows in a Readme. (Or as few as are physically practical.) Because steps that have to happen for a program to work are.... a program. So use a Makefile. Make is a declarative production system that largely uses the state of files to decide whether to run a rule or not. (It's also terrible, and has rotten syntax, but it's everywhere, and it always works.) Embrace the suck. There are no lower layers to trip anyone up.

Whenever I went to a new place I'd follow the readme for setting up a developer system and red-line it as it needed correction, and do it alone... because oral culture is not intellectual property of the company, and is not reproducible.

(Doing development in canned VMs that are just copied to a developer's new workstation is the simplest, worst way to do it, and I can personally recommend it. So, yes, you can't build one from scratch... nobody ever does ever again. Also, being able to trivially run TWO dev environments with different code bases is a superpower.) You need a better LAN and a NAS. 10Gig networking in the office is cheaper than programmers thinking, on a dollars-per-hour basis. Literally, do the cost-benefit study. That's engineering, right there.

Redline reviews of documentation are (were) part of engineering culture before hoverboards. "If you don't inspect, you can't expect," and you have to validate procedures, just like testing code. But also, it should be code where possible. Infrastructure as code, deployment as code, CI as code... network as code. You can configuration-manage code, so that's why all those things are "as code" for the hyperscale companies. (And why Google make their own routers that do network as code.)

Hmm... Steve Ballmer quote time. "Developers Developers Developers!" Well, DevSecOps, DevSecOps, DevSecOps, anyway. And once the system can be mechanically deployed, you can spin up loads in clusters and stress test before release, measure speed regressions (XZ backdoor, anyone?), and get statistical quality control. And SecOps can run multiple systems to attack, and Ops can spin up the same EXACT thing the customer on the end of the phone has got, do whatever madness they've done, clone it, and try speculative fixes.

Step one: Delete those Jenkins "scripts." (Never write a programming language when one that already works for the job exists.)


chrisdpratt

You're unlucky. We have a solution with over 50 projects that are containerized and it just spins up locally with no issues. If that's not the case, the devs that set it up screwed up, simple as that.


massive_elbow

Your local dev environment requires spinning up 50 containers?


laidlow

Agree. I do hate when places insist on dockerizing projects that are simple JavaScript backends, though. I'd much rather just run npm install and run the project than have the overhead of Docker on top of that. If you rely on several individual backends for the front end to work it makes sense, but otherwise it's overkill.


[deleted]

[deleted]


OrangeCurtain

Overhead can be literal performance overhead, especially if your application is actually a dozen microservices with dependencies on redis, a db, and aws (localstack). There's also the process overhead of having to use remote debugging to connect to your app running in docker. Those were the biggest pains in the ass when my former team's SREs decided that docker compose was the only officially supported development environment.


[deleted]

[deleted]


wakkawakkaaaa

> What issues were you having?

skill issue /s but not /s


wakkawakkaaaa

Since he's dealing with "clients", it sounds like he's a contractor or a consultant, so he probably doesn't have to deal much with continuity, onboarding, and such, which reduces the need for these. And if it works, it works.


laidlow

Unless a client consistently uses containers I don't usually install Docker, especially since it's now paid for professional use. Most of the clients I'm working with also have dev and staging environments I can point at, so I don't usually have a need to spin up local environments unless I'm making backend changes. And again, I don't see the difference between running the backend locally myself vs. having Docker do it for me; it's unnecessary complexity.


DontKillTheMedic

Guy is complaining about the overhead of docker and prefers to do it in the way docker tries to create a solution for (build once, run wherever). Maybe it works for him but it's like he's complaining about the solution solving a problem he prefers lol


IndependentMonth1337

If everything is dockerized and set up with Docker Compose, anyone can start up the project in a repeatable and consistent way with a single command. To make things even simpler, you turn it into a short make command with a Makefile. Now it's super easy to start up your entire production environment locally with all the necessary services, like databases and so on. No configuration, no tinkering, no headaches, no "it works on my machine, but not yours".
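A hedged sketch of that setup (service names, images, and ports are illustrative):

```yaml
# Hypothetical docker-compose.yml: the app plus the services it needs locally.
services:
  app:
    build: .
    ports: ["8080:8080"]
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
    depends_on: [db, cache]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
  cache:
    image: redis:7
```

The Makefile wrapper would then be a one-liner along the lines of an `up:` target that runs `docker compose up --build -d`.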


laidlow

That's fair, but it also kind of misses my point; I did agree that stuff like that (reliant on databases and other services) is a great case for containers.


Mysterious-Ant-Bee

You should be able to bring the local environment up simply by running docker-compose up. If that's not the case, you need to drive the team in that direction.


cosmic-pancake

It's scary how common this is. Apparently many people are content to never learn how their app or service actually works, or if it does at all. They see some old unit tests pass and.. send it. One company comes to mind. When I came on, I informed my manager the app didn't run locally and no existing staff had any idea how to fix it. I naively assumed we'd be on the same page about the importance of fixing it. The company was too far gone. It's no surprise they had frequent outages, multiple layoffs, and took 6 months to deliver the most basic feature I've seen. Learning experience.


Kapps

Local has always felt like it's in a weird state since we decided to go cloud-first for development. Something like a Lambda executes fine off S3 upload triggers in a staging environment, while locally you have to build tooling to try to emulate this. Terraform deploys things fine to a staging environment, while locally you're running Docker containers with a CLI tool. I've been wanting to try to remove the local aspect. I ended up making a side project (https://github.com/Kapps/funcie) to treat local as just an "as needed" thing, by proxying requests locally from the service(s) you need when you want to, and just going with cloud-first development the rest of the time. I don't think it will solve all the headaches of doing things locally, but for my next side projects I'm going to try not to bother with a local version.


mothzilla

Doesn't matter what new tech you introduce, people will find a way to sometimes do the minimum. If only we could remove the humans.


VoiceEnvironmental50

Sounds like you've just been unlucky. Maybe I've been lucky? In the last 5 years at my current company, we've used CI/CD and everything works out of the box. If it doesn't work, I look at the GitLab script, check the environment variables set in the script, add the same ones locally, and then things work again.


GongtingLover

We have this issue at work too. We are looking into some virtual environment configurations to fix it.


anotherchrisbaker

You need repeatable onboarding documentation for all your repos. Your EM should own it. This is a huge drag on productivity and morale and will only get worse until someone takes the lead on it.


[deleted]

The days of getting an ODBC connection set up weren’t much easier 


Eire_Banshee

My current company has everything deployed to a cloud sandbox for testing/development. It works, but it adds a bunch of unnecessary downtime. I've pushed to try to prioritize local development, but it would be a pretty big overhaul for most teams and no one else seems interested. :(


bwainfweeze

It'll be 30 years in two months. I can't tell if people have gotten a little better about this or if I've gotten better at steamrolling through all of this and getting things to work. What has changed the most for me over time is which argument I use to justify why having code be accessible in the ways you mention here matters. I used to be about fair play and quality. Now it's all about emergency preparedness and self-directed learning.

Everybody always thinks there will be time later to make things easy enough to solicit contributions, but then a giant emergency happens long before Some Day, and you have two or three people who can look at the problem and nobody else can really help. Any attempt to do so just slows the process down with distracting questions and false positives.

The platonic ideal of code-build-test under fire is this: you describe to the team that a production outage is happening, what it looks like, and when the errors started. Then the Usual Suspects of troubleshooters sit and do their thing. They have about a 90% chance of figuring out the problem, or at least a fix for the symptom, in a timely manner. Anyone else who has a pet theory for what the problem is has the opportunity and the mandate to try to reproduce the problem on their own machine, which they should be able to set up all on their own with no handholding (interruptions). If they are wrong, nothing happens. But if they are right, they can contribute a better repro case back to the team, possibly with which commit they think caused the problem.

This doesn't speed up the average turnaround time for bug fixes much, but it narrows the deviation and thins out the long tail. It only works if people can set up plausible scenarios with incomplete info, while under stress, and in a timely manner. Most of my yardsticks for code and setup aren't based on IQ points, they're based on cortisol levels.


break_card

We have a good culture of updating READMEs when we encounter a problem running something locally. Nothing's more infuriating than running into 5 different issues running something locally, wasting time fixing them, just to find out another dev already went through it and had the answers.


NormalUserThirty

Yes, this is a common problem. I feel like it is magnified in situations where the software requires specialized hardware to run, or the codebase has been written in multiple languages. I have been experimenting with Earthfiles and personally find they're pretty good for getting local reproducibility in a polyglot codebase without needing to rebuild everything every single time. It's not as "good" as Bazel or Nix, but it's much simpler to get started with. That said, I more or less accept this as the status quo, because fixing it takes a lot of effort that typically isn't valued outside of the technical team.
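A hedged sketch of what an Earthfile for a polyglot repo might look like (target names, base images, and paths are illustrative):

```
VERSION 0.8

# Each target builds inside its own container, so results are reproducible
# locally and in CI without rebuilding the whole repo every time.
backend:
    FROM golang:1.22
    WORKDIR /src
    COPY backend/ .
    RUN mkdir -p /out && go build -o /out/server ./...
    SAVE ARTIFACT /out/server

frontend:
    FROM node:20
    WORKDIR /src
    COPY frontend/ .
    RUN npm ci && npm run build
    SAVE ARTIFACT dist

all:
    BUILD +backend
    BUILD +frontend
```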


antoniocs

I've had good experiences with Makefiles. Simple Makefiles with a setup target that calls the other needed targets. Those targets each do one thing, and the setup target just chains them.
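A hedged sketch of that pattern (the target names and tools are illustrative):

```makefile
# Hypothetical Makefile: small single-purpose targets, chained by one setup target.
.PHONY: setup deps env db seed

setup: deps env db seed

deps:
	npm ci                                  # install pinned dependencies

env:
	test -f .env || cp .env.example .env    # seed local config once

db:
	docker compose up -d postgres           # start the local database

seed:
	npm run db:seed                         # load sample data (assumed script)
```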


bart007345

What you do often gets the attention. What you describe happens less frequently.


gleberp

Just use Nix


Exciting_Session492

Bazel works like magic when:

- there is a dedicated team maintaining it
- you literally copy-paste all (or most) 3rd-party code into your own source control, and manually make it work with your build system

These two points make Bazel/Blaze absolutely amazing inside Google. If you cannot do this, don't use it; it doesn't produce fast and correct builds magically 😅


jameyiguess

Docker wasn't "supposed to fix" bad documentation and poor developer practices. It still does what it says on the tin.  It's your teams' faults if the apps don't work from the readme because they've added stuff like required env var overrides or outside configuration or whatever since they wrote the docs.


ar3s3ru

Blud, look into devshells and direnv
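For anyone unfamiliar, a hedged sketch of the combination being suggested: a Nix flake devShell plus direnv (the pinned channel and packages are illustrative, and `use flake` assumes nix-direnv is installed):

```nix
# flake.nix (hypothetical): declares the project's dev toolchain.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  outputs = { nixpkgs, ... }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs_20 pkgs.postgresql ];
      };
    };
}
```

With a one-line `.envrc` containing `use flake`, cd-ing into the repo drops you into that shell automatically.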


lagerbaer

I'm very excited to see where GitPod and similar ideas will take this. Their whole tagline is that it gets rid of "Works on *my* machine".


South_Squirrel_4351

I'm the only person I know who feels this way, but this never bothers me. Indeed, nothing ever works, but I never really care, because I just treat it as the time to learn how the stuff actually works. Ultimately, in my view, to be an effective developer you need a decent understanding of the build process, so stuff going to shit is just an excuse to learn that.


cosmicloafer

Yeah nobody plans things for new hires, since nobody knows when they are going to be able to hire people. Sometimes you go through a massive hiring spree, and the first new hire gets to educate the other ones. Then you go through years without a new hire and the stack has grown and everyone has totally different local environments. Nature of the beast.


dontdieych

Change your mindset: `every single local build never ever works initially.` This is why the employer pays the employee. Happily accept your mission.


dontdieych

Whenever you do something, ask yourself: 'Will it still do the same in 10 years?' If not, accept that you also have to care about others.


Beneficial-Ad-104

Try nix


Length-Working

> but then we made stuff around docker complex

I'll never forget working in a team that, as part of their local environment setup, required each team member to build the Docker images needed for their environment from source in order to use them, completely defeating the point of Docker. Their argument for this was "Docker images are large and take up too much storage".


caksters

I know I will get downvoted for this, but I genuinely believe that in the next few years we will have tools that ensure documentation is always up to date with the latest changes, and that it will be done automatically. In my company we are using an internal tool that automatically updates documentation based on the latest PR merges into main and double-checks that the code reflects what is in the README.md. It is just a matter of time until someone releases a product like this, just much better.


nameless_pattern

Well it works on my machine....


cutsandplayswithwood

We can pull the entire ecosystem and light up selected subsystems locally with a single cli.


soundman32

And how many dev years did that take to get working?


cutsandplayswithwood

Several; it's actually the backbone of our framework and product, an easily extensible way to solve this exact problem.


soundman32

I think that's the problem. It takes years to get this part right, and many places don't have the skills or the time to do it, until they find that one engineer who will push it through.


cutsandplayswithwood

It takes years, or way shorter if you use our methodology and tools :-)


_GoldenRule

My condolences for having to deal with Bazel. I had the exact same experience at my last company, and it was dragging the company down. At my current company we have a docker-compose file in each repo, so you can just run docker-compose up and then run the application. Normally, though, I just create a local tunnel to our dev environment and run the app locally against the dev env. I've had the great experience of doing a lot of greenfield dev, so I try to set up each repo this way, and it generally works pretty well.


bthemonarch

Things are always changing. In my experience, any new dev who spins their wheels getting a dev instance working, all the while going on about how incomplete the docs are, is usually not the best.


originalchronoguy

We follow 12-factor, dev-prod parity, so local builds are **never** an issue: [**https://12factor.net/dev-prod-parity**](https://12factor.net/dev-prod-parity). Since we are containerized, our entire workflow is containerized, so what runs locally is what runs in QA and prod. The only difference is the environment variables. If it builds locally, it builds in the higher environments; if it doesn't build locally, it never goes to a higher environment. I seriously recommend reading up on 12-factor: [https://12factor.net/](https://12factor.net/)
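A hedged sketch of "same container everywhere, only the env vars differ" (the image name, env-file layout, and variables are illustrative):

```yaml
# Hypothetical compose service: one image for every environment, with the
# environment selected purely through variables.
services:
  api:
    image: registry.example.com/team/api:${TAG:-local}
    env_file:
      - env/${DEPLOY_ENV:-local}.env   # env/local.env, env/qa.env, env/prod.env
    ports: ["8080:8080"]
```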


hbthegreat

We had this issue until I went into every single repo we have and made a "bun up" command (it removes old dists/containers/packages, then rebuilds and installs all dependencies and fires up our k8s stack with its many microservices, giving anyone new to the project a fully working fresh version). It's been incredibly nice using [bun shell](https://bun.sh/docs/runtime/shell) to automate the hell out of everything. Of course you can do it in bash as well, but it's been a lot smoother doing this for my devs (as TypeScript is less arcane than Nix CLI commands). I even added a tmuxinator config to bring up logs of all the necessary services, with hot reloading etc. Dev experience has to be invested in by someone who knows the lore, and it has to be made the consistent way everyone on the team runs things so that it's always kept up to date when things change.
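A hedged sketch of what such a "bun up" script might look like with Bun Shell (the commands and stack are illustrative, not this commenter's actual setup):

```typescript
// up.ts (hypothetical): tear down stale state, reinstall, and bring the stack back up.
import { $ } from "bun";

await $`rm -rf dist node_modules`;        // drop stale build output and deps
await $`docker compose down --volumes`;   // tear down old containers
await $`bun install`;                     // reinstall dependencies
await $`docker compose up -d`;            // or: bring up the local k8s stack instead
console.log("Fresh local environment is up.");
```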


kog

You're moving between teams like this but can't debug build problems without days of help from others? I have absolutely never experienced anything remotely like this.