.Keys: You've raised some serious points there, sir. Especially the part about how students are graduating with incomplete training in an essential area such as security.
I've read it patiently and with pleasure, and as a student myself, I thank you for the shared knowledge.
I'm glad you felt inspired; it is important :).
It won't help with the technical bits, but Bruce Schneier's books are great introductory reading for getting into the security mindset (which is half the battle, really). He's published more books since, but 'Secrets & Lies: Digital Security in a Networked World' is a great place to start.
Darvond: The problem with InfoSec teams is that they're a lot like IT teams: they're at their best when they're invisible, because that means all is well. To idiot management teams & bean counters, that raises the question, "What are we paying you for?", and most engineers have trouble kneeling down to their semi-sentient simian counterparts to summarize, in lay terms, what it is they actually do.
But as many have repeatedly said in this thread, the weakest link isn't your firewall; it's the bubblegum-chewing receptionist on a phone call who gets socially engineered into letting someone past the gates, someone who just so happens to be carrying an outside USB stick.
And nobody thinks to manage the groups, or the wheel group, to prevent content that didn't originate inside the building from even being allowed to execute (though I'm not even sure Windows has such fine-grained control, even with group policies, because it's that backwards at times).
I don't even work in InfoSec or IT and these are just some basic things that came to mind.
That is less of a problem where I am (no non-technical person has any kind of access to the systems we are building, though even technical people are not foolproof), but you're right: social engineering is a big issue in many places. You need to train people to realize that protocols are there to prevent a breach, not just to make their work more difficult.
And as previously stated, it is also good to give people as little access as you can (which you will do anyway to protect against malicious employees, but which incidentally also helps if an employee is compromised) and to assume that some attacks will come from within. For example, even developers should not be able to impact a production system without first going through code reviews by their peers. The whole gitops methodology takes this concept further: not just application artifacts but all system operations should go through code, which among other things forces peer reviews and makes everything auditable (i.e., everything is preserved in the Git history; see the sketch below).
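To make the gitops idea concrete, here is a minimal sketch of the pull-based reconciliation loop at its core. The repo path and the apply.sh script are hypothetical stand-ins (real setups use tools like Argo CD or Flux), but the principle is the same: production only ever converges on what was merged into the config repo, so every change arrives peer-reviewed and lands in the Git history.

```python
# Minimal GitOps reconciliation loop (sketch; paths and apply.sh are hypothetical).
# Production only converges on commits merged into the config repo, so every
# change is peer-reviewed via pull requests and auditable via `git log`.
import subprocess
import time

CONFIG_REPO = "/srv/prod-config"  # hypothetical local clone of the config repo

def head_commit() -> str:
    """Return the commit hash currently checked out in the config repo."""
    return subprocess.check_output(
        ["git", "-C", CONFIG_REPO, "rev-parse", "HEAD"], text=True
    ).strip()

def reconcile_forever(poll_seconds: int = 60) -> None:
    last_applied = ""
    while True:
        # Fast-forward only: the checkout can only advance through commits
        # that actually landed on the reviewed main branch.
        subprocess.check_call(["git", "-C", CONFIG_REPO, "pull", "--ff-only"])
        head = head_commit()
        if head != last_applied:
            # Apply whatever is at HEAD; in a real setup this would be
            # e.g. `kubectl apply -f manifests/` or a config-management run.
            subprocess.check_call(["./apply.sh"], cwd=CONFIG_REPO)
            last_applied = head
        time.sleep(poll_seconds)

if __name__ == "__main__":
    reconcile_forever()
```

The nice side effect of this pull-based design is that no human (and no CI job) needs push access to production at all: the agent inside the environment pulls, so compromising a developer account gets an attacker no further than an unmerged pull request.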
In your example above, it greatly reduces the attack surface when you don't give the bubblegum-chewing receptionist any kind of meaningful access in the first place. Hopefully your developers will know better, but if they don't, code reviews are very likely to catch a lot. Also, most developers should not have access to the production system or production data at all; the few who do should understand that they have been entrusted with a special responsibility, and they should be downright paranoid.
And yes, some engineering resources need to be beyond the reach of end-user-facing product owners, who won't see the direct consumer benefit of a lot of the things that get done in an IT system (the benefits only become obvious once something goes wrong). That's why it's a good idea to have product teams that are separate from the teams that operate deeper in the system.
Otherwise, on the technical side of things, it goes way deeper than the firewall. The firewall is the poster child of the security "outside wall", but most attack vectors don't touch the firewall at all: they attack the various layers of your application through traffic your firewall will happily allow (see the toy example below).
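For a concrete illustration of that last point, here's a toy sketch (Python with sqlite3; the table and inputs are made up): a SQL injection rides in on a perfectly ordinary request that any firewall will wave through, and the defense lives entirely at the application layer.

```python
# Toy example: an injection payload arrives over traffic the firewall allows;
# the vulnerability and the fix both live at the application layer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user_unsafe(name: str):
    # Vulnerable: the input is spliced into the SQL text, so
    # name = "x' OR '1'='1" turns the query into "... WHERE name = 'x' OR '1'='1'".
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized: the input is bound as data and never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # dumps every row; the firewall saw nothing unusual
print(find_user_safe("x' OR '1'='1"))    # returns nothing
```

The same pattern repeats up and down the stack (XSS, deserialization bugs, SSRF, and so on): the packets are all legitimate as far as the perimeter is concerned, which is exactly why defense has to happen in depth rather than at the wall alone.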