
Senior developers don't ignore best practices because they're careless. They ignore them because they're busy, the team is understaffed, deadlines are real, and some "best practices" feel like process cosplay.
But there's a catch: the practices that get skipped aren't the fluffy ones. The ones that quietly rot are the exact behaviors that keep software stable under load, under change, and under pressure.
Here are 10 best practices that experienced engineers commonly deprioritize, and why they come back later as outages, rewrites, or slow-motion delivery failure.
1) Keeping PRs small (and boring)
What gets ignored: PRs balloon into 1,000-line "just trust me" changes.
Why it happens: Bigger diffs feel efficient: one branch, one merge, done.
What it costs: Review quality collapses. Bugs hide. Rollbacks become terrifying. Knowledge becomes siloed in the author's head.
Do this instead: Aim for reviewable slices: one behavior change per PR, and follow-up PRs for cleanup. The speedup comes from fewer rework cycles, not fewer merges.
2) Writing down decisions (ADRs) before the context evaporates
What gets ignored: Architectural Decision Records (ADRs) and lightweight decision logs.
Why it happens: "We'll remember why we chose this." You won't.
What it costs: Six months later, a new engineer asks why you chose Kafka, why you avoided a managed DB, why the service boundaries look weird, and nobody can answer without archaeology.
Do this instead: One-page ADRs:
- Context
- Decision
- Alternatives considered
- Consequences
This is not documentation theater. It's future-proofing.
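A one-page ADR can be a plain markdown file. The sketch below is a hypothetical example (the topic, numbering, and filenames are illustrative, not a standard):

```markdown
# ADR-0012: Use Kafka for order events

## Context
We need durable, replayable delivery of order events to three downstream services.

## Decision
Publish order events to a Kafka topic with a 7-day retention window.

## Alternatives considered
- Direct HTTP calls: simpler, but couples deploys and loses events during downtime.
- Managed queue: easier ops, but no replay for backfills.

## Consequences
- New services can consume event history without coordination.
- We own Kafka operations (monitoring, partitioning, upgrades).
```

Keep these next to the code they describe, so they evolve in the same PRs.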
3) Testing behavior, not implementation
What gets ignored: Tests that assert user-visible behavior (contracts) over internal wiring.
Why it happens: Implementation tests are faster to write and feel "thorough."
What it costs: Refactors become impossible. You're "stuck" because changing internals breaks half the suite.
Do this instead: Prefer tests that validate:
- API contracts
- domain rules
- critical flows
…and keep unit tests focused on invariants, not private method choreography.
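The difference is easiest to see side by side. A minimal sketch, using a hypothetical `ShoppingCart` class:

```python
class ShoppingCart:
    def __init__(self):
        self._items = {}  # internal detail: dict of sku -> qty

    def add(self, sku, qty=1):
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())


# Brittle: asserts internal wiring. Breaks if _items becomes a list.
def test_implementation():
    cart = ShoppingCart()
    cart.add("apple")
    assert cart._items == {"apple": 1}


# Robust: asserts the public contract. Survives any internal refactor.
def test_behavior():
    cart = ShoppingCart()
    cart.add("apple")
    cart.add("apple", qty=2)
    assert cart.total_items() == 3
```

The first test pins the data structure; the second pins what users of the class actually rely on.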
4) Designing for observability on day one
What gets ignored: Structured logs, metrics, tracing, correlation IDs, useful dashboards.
Why it happens: Observability feels like "ops work" and gets punted to later.
What it costs: Later arrives during an incident. Without good signals, every outage becomes guesswork plus Slack panic.
Do this instead: Build a minimum "debug package" into every service:
- request ID propagation
- structured logs with key fields
- latency + error-rate metrics
- a few golden dashboards
It's cheap early and expensive late.
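Request ID propagation plus structured logs can be surprisingly little code. A minimal Python sketch (field names and the `handle_request` entry point are illustrative):

```python
import contextvars
import json
import logging
import uuid

# Carries the request ID across calls without threading it through every signature.
request_id = contextvars.ContextVar("request_id", default="-")


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line with a few key fields."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": request_id.get(),
        })


logger = logging.getLogger("service")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def handle_request(payload):
    request_id.set(uuid.uuid4().hex)  # set once at the boundary
    logger.info("request received")   # every log line now carries the ID
    # ... business logic and downstream calls inherit the same request_id ...
```

With this in place, "find every log line for this request" is a single filter instead of a grep expedition.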
5) Defensive input handling (because your API will be used "creatively")
What gets ignored: Validation, sane defaults, explicit error messages, and boundary checks.
Why it happens: "The client won't send that." The client will.
What it costs: Undefined states, data corruption, edge-case incidents, and support tickets that read like horror fiction.
Do this instead: Be strict at boundaries:
- validate inputs
- fail fast with actionable errors
- normalize once (and only once)
- treat external calls as unreliable
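The four bullets above can live in one boundary function. A sketch with a hypothetical transfer payload (field names are assumptions):

```python
def parse_transfer_request(data: dict) -> dict:
    """Validate an incoming payload at the boundary; fail fast with actionable errors."""
    errors = []

    amount = data.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        errors.append("amount must be a positive number")

    currency = data.get("currency")
    if not isinstance(currency, str) or len(currency) != 3:
        errors.append("currency must be a 3-letter code, e.g. 'USD'")

    if errors:
        # One explicit failure here beats an undefined state three layers deeper.
        raise ValueError("; ".join(errors))

    # Normalize once, at the edge, so the core logic never sees raw input.
    return {"amount": float(amount), "currency": currency.upper()}
```

Everything past this function can trust its inputs, which is exactly the point.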
6) Backward compatibility isn't optional (it's a contract)
What gets ignored: Versioning strategy, migration planning, compatibility tests.
Why it happens: Breaking changes are easier than careful evolution.
What it costs: Every consumer becomes blocked, or worse, silently broken. Coordinated deploys become a recurring tax.
Do this instead: Assume distributed reality:
- additive changes first
- deprecate before removal
- run dual reads/writes during migrations when needed
- use compatibility tests for APIs/events
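"Additive changes first" often comes down to being a tolerant reader on the consumer side. A sketch for a hypothetical order event, where a `coupon` field was added in a later version:

```python
def read_order_event(event: dict) -> dict:
    """Tolerant-reader sketch for a hypothetical 'order.created' event.
    A later producer version added an optional 'coupon' field; older
    events must still parse, and unknown fields must not break us."""
    return {
        "order_id": event["order_id"],   # required since v1
        "total": event["total"],         # required since v1
        "coupon": event.get("coupon"),   # additive field: default, don't require
        # Fields we don't recognize are simply ignored, so producers
        # can keep evolving the event without coordinated deploys.
    }
```

Require only what has always been there, default everything newer, ignore the rest.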
7) Dependency hygiene (pinning, upgrades, SBOM mindset)
What gets ignored: Regular dependency updates, lockfile discipline, supply-chain awareness.
Why it happens: Upgrades feel thankless… until a security advisory becomes urgent.
What it costs: "Big bang" upgrades that break production, surprise CVEs, and transitive dependency chaos.
Do this instead:
- Upgrade continuously (small, frequent)
- Pin where appropriate
- Automate alerts and renovate-style PRs
- Treat third-party code as part of your threat model
8) Automating quality gates instead of debating them in code review
What gets ignored: Linting, formatting, static analysis, pre-commit hooks, CI policies.
Why it happens: Senior engineers think they can "spot issues manually."
What it costs: Reviews devolve into style fights, and obvious mistakes slip through because humans are inconsistent.
Do this instead: Let tools be the bad cop:
- auto-format always
- lint on CI
- basic static checks (nullability, unused code, complexity thresholds)
- enforce "tests must pass" as a merge invariant
Your reviews should focus on design, correctness, and risk.
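For a Python codebase, this can be a few lines of pre-commit configuration. A minimal sketch assuming you use Ruff for linting and formatting (the `rev` shown is illustrative; pin to a real release):

```yaml
# .pre-commit-config.yaml -- rev is illustrative; pin to an actual release tag.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff          # lint (unused imports, complexity, etc.)
      - id: ruff-format   # auto-format, so style never reaches review
```

Run the same hooks in CI so local and merge-time checks can never drift apart.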
9) Feature flags… and the discipline to delete them
What gets ignored: Flag lifecycle management: ownership, expiration, cleanup.
Why it happens: Flags ship features fast. Cleanup feels optional.
What it costs: Permanent complexity. Weird branching paths. Bugs that only exist when an old flag is toggled during an incident.
Do this instead: Treat flags like perishable goods:
- every flag has an owner and a delete-by date
- track flags in a registry
- add a monthly cleanup ritual
The best feature flag is the one that no longer exists.
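A flag registry doesn't need tooling to start; it can be a dict plus one query for the cleanup ritual. A sketch (flag names, owners, and dates are hypothetical):

```python
import datetime

# Every flag carries an owner and a delete-by date, in one greppable place.
FLAGS = {
    "new_checkout_flow": {
        "owner": "payments-team",
        "delete_by": datetime.date(2025, 9, 1),
        "enabled": True,
    },
}


def expired_flags(today=None):
    """Return flags past their delete-by date: the monthly cleanup worklist."""
    today = today or datetime.date.today()
    return [name for name, meta in FLAGS.items() if today > meta["delete_by"]]
```

Wire `expired_flags()` into CI or a scheduled report, and stale flags stop being invisible.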
10) Basic security practices (because "we're not a target" is not a strategy)
What gets ignored: Least privilege, secrets hygiene, secure defaults, minimal threat modeling.
Why it happens: Security is invisible when it works, and deadlines are loud.
What it costs: Data exposure, credential leaks, "minor" incidents that become existential.
Do this instead:
- principle of least privilege everywhere
- no secrets in logs, ever
- rotate credentials, use managed secrets
- do lightweight threat modeling on new surfaces (auth flows, file uploads, webhooks, admin tools)
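"No secrets in logs, ever" is easier to enforce in code than in review. A sketch of a logging filter that scrubs obvious credential patterns before they reach any sink (the regex is a hypothetical starting point, not a complete list):

```python
import logging
import re

# Hypothetical patterns; extend with your own token and key formats.
SECRET_PATTERN = re.compile(r"(api_key|token|password)=\S+", re.IGNORECASE)


class RedactSecrets(logging.Filter):
    """Scrub credential-looking values before a line ever reaches a log sink."""

    def filter(self, record):
        record.msg = SECRET_PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just sanitized
```

Attach it with `logger.addFilter(RedactSecrets())`; it is a safety net, not a substitute for never passing secrets to the logger in the first place.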
The uncomfortable truth: seniority doesn't cancel entropy
The higher your seniority, the more you're rewarded for shipping, unblocking, and moving fast. That's exactly why these practices get skipped.
But the best teams don't "do everything." They choose a few disciplines that compound:
- Small PRs
- Decision logs
- Observability
- Backward compatibility
- Automated quality gates
Pick just two to enforce relentlessly, and you'll feel it in fewer incidents, cleaner velocity, and a codebase that doesn't fight you.
If you want a north star: optimize for the day after launch. That's when software stops being an idea and becomes an obligation.
Harish Museboyina
