We need to talk about Agile
"Agile" is not a software methodology, it is an ideology... it is built on a manifesto that, in practice, is often corrupted to mean something beyond what it states. There are too many articles on what Agile does or doesn't mean, but essentially it does demand design, process, documentation and tooling; it just suggests they should be enriched with greater attention to the functions that lead to results. It was created at a time when you could still buy software off the shelf with user manuals that were hundreds of pages long, and if it didn't work, you couldn't easily update it.
As an ideology it has a desired goal, which is to enable software development by promoting ways of working that help reach functional goals. It also tends to be a bad fit for heavily regulated or security-conscious environments.
It also has very obvious missing goals: it does not address non-functional requirements like compliance, performance, reliability, consistency, accessibility, maintainability, backups, ... the list goes on.
It is therefore an ideology for achieving a function, regardless of how well that function works in an ever-changing and complex environment.
So what is missing, and what do we need to add to or replace in Agile? I'd like to introduce two very important ideas that we should add to it.
The Right People: we need expertise, not just devs #
Most devs can write great functional code from business ideas for most logical and presentation requirements, but they come unstuck when they need expertise in:
- Encryption
- Information Security architecture
- ACID/transactions
- Deadlock and concurrency
- Legal requirements - audit, auth, retention, access control, ...
- Accessibility
- ...
An example: Encryption #
I do not know a single software developer who can write encryption libraries at, say, the level of TLS 1.2, and I include myself. I believe there are lots of mathematicians and computer scientists who could develop individual encryption functions, but I'm not sure many could combine them into a framework that is secure through the whole layer.
I have not met a single developer qualified to judge which encryption libraries are good or bad.
So why do we let software developers pick encryption libraries and configure their implementation?
AES-256 and RSA-4096 are surely all you need? Well, no, you'll still need to understand at least the following to use them (a sketch after this list illustrates a few of these):
- PRNGs
- IVs
- Sources of randomness
- Blacklists
- Key lengths
- Key randomisation
- Key management
- information leakage (especially dangers of using compression, caches or any other indexed data)
- Appropriateness of re-use
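To make a couple of those concrete, here is a minimal sketch in Python, assuming the widely used `cryptography` package; it is an illustration of the list above, not a reviewed design. Every choice in it - the key length, the source of randomness, the fresh nonce per message, never re-using a nonce with the same key - is something the developer has to know to get right.

```python
# A minimal sketch, assuming the 'cryptography' package (pip install cryptography).
# It illustrates a few of the items above: key lengths, sources of randomness,
# IV/nonce handling and the danger of re-use. It is not a reviewed design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, associated_data: bytes) -> bytes:
    # 96-bit nonce from the OS CSPRNG: fresh for every message and never
    # re-used with the same key, otherwise GCM's guarantees collapse.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    # The nonce is not secret; store it alongside the ciphertext.
    return nonce + ciphertext

def decrypt_record(key: bytes, blob: bytes, associated_data: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

# A 256-bit key from a CSPRNG; how it is stored, rotated and retired is the
# "key management" item above and is deliberately not shown here.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_record(key, b"account 1234", b"customer-42")
assert decrypt_record(key, blob, b"customer-42") == b"account 1234"
```

Even this tiny example leaves key management, blacklisting and information leakage untouched, which is rather the point.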
But our team is small, with only average software developers and QA? #
- Contract in expertise for design, review and testing
- Delegate features to specialist teams (information security development)
- Adopt a recognised standard (NIST, OWASP, WCAG, Mozilla's recommended server-side TLS configurations, etc)
- Adopt a recognised library/application: but does the proprietary or open-source library guarantee the standards are met, to help you choose whether to adopt it? You should probably avoid hashids.
- Adopt a recognised service: cloud services mean everything except your business logic can likely be bought as a service, so why not do that? Then it is someone else's responsibility... just have fun making sure the cloud is appropriate (see the sketch after this list).
- Alternatively, don't do it - if a small building firm can't build a skyscraper it will find something else to do - some things are not supposed to be done by small teams and startups. Is the business value really there? Is it worth employing specialist help? If the business value doesn't warrant the expertise, then it probably isn't valuable enough to be worth doing.
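To illustrate the "recognised service" option, here is a hedged sketch of envelope encryption with AWS KMS via boto3. The key alias is a placeholder and error handling is omitted; whether KMS (or any cloud key service) is appropriate for your data is exactly the homework mentioned above.

```python
# A sketch of "adopt a recognised service": envelope encryption with AWS KMS
# via boto3. The key alias is a placeholder; access control, rotation and
# audit trails live in the provider's service rather than in your code.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KEY_ID = "alias/my-application-key"  # hypothetical alias, configured per environment

def encrypt_with_kms(plaintext: bytes) -> tuple[bytes, bytes]:
    # KMS generates a data key and returns both a plaintext and an encrypted
    # copy of it; the master key never leaves the service.
    data_key = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, plaintext, None)
    return data_key["CiphertextBlob"], nonce + ciphertext

def decrypt_with_kms(encrypted_key: bytes, blob: bytes) -> bytes:
    plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)
```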
Maintenance: it isn't just bugs #
Agile typically does cater for bug fixing, but maintenance is more than that.
- Legal changes
- Vulnerability management
- Licences end, projects die
With GDPR arriving soon, hopefully everyone is reviewing all systems that hold, transport and ... access PII. However, it's not just GDPR that is and has been changing; especially in more regulated environments, PCI, MIFID, GCP, accessibility law, etc often have amendments too. Contracts with third parties typically demand levels of security that must be adhered to as well. Some of these changes are passive (you have to discover a legal change) and some are active (you are told of a contractual change), but both need to cycle into the maintenance of what would otherwise be ignored code running in production.
Access is actually a really worrying problem. Many systems are set up with walls at the perimeter, but not inside. So the web frontends and web API gateways into the system are typically offered some lifecycle management to check for greater maintenance risks, but the other services can be just as dangerous.
XXE, Remote Code Execution (reflection, SQL, etc) and even internal tooling are often only one step away from attackers, and that step might not have been designed to worry about the concern.
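To make the XXE point concrete, here is a minimal Python sketch, assuming the `defusedxml` package; the payload is the classic illustrative one, and the internal service parsing it is invented.

```python
# A sketch of the XXE concern: an internal service parsing XML it trusts too
# much. defusedxml (pip install defusedxml) refuses to expand the entity,
# whereas some parsers with default settings (lxml, for example) will read
# the local file and leak its contents.
import defusedxml.ElementTree as SafeET

HOSTILE_XML = """<?xml version="1.0"?>
<!DOCTYPE order [<!ENTITY secret SYSTEM "file:///etc/passwd">]>
<order><note>&secret;</note></order>"""

def parse_order(xml_text: str):
    return SafeET.fromstring(xml_text)

try:
    parse_order(HOSTILE_XML)
except Exception as exc:  # defusedxml raises EntitiesForbidden here
    print(f"rejected hostile document: {exc!r}")
```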
The last concern is licences: discovering that your licence for proprietary software can leave you in a legal dilemma (do you shut down to respect the licence, or steal some more time, hopefully only to migrate or negotiate renewal?), or that the open source project you use has died... do you take a risk and continue with unsupported and increasingly vulnerable software, or refactor off it? To do the right thing means knowing about it.
This requires audit #
Where is audit mentioned in the Agile manifesto?
Audit is a function that all business environments have. Whether it is a legally qualified audit, like an accountancy audit or a regulatory compliance audit with heavy process demands, or just a regular check, like an "are the toilets clean" tickbox, it happens everywhere... except, not so much, in software development.
How do you check your toilets are clean in your website in production? It sounds a bit strange, but how many of us are testing for vulnerabilities, reviewing log cleanliness, etc? We might be doing the functional parts that we know will break the application: is the server running, does the database have disk space... but beyond that, too many places have security problems or even embarrassingly suffer problems like their TLS certificates expiring... when that happens, it is not a failure of the dev/ops who set it up, it is a failure of the business to have a business control around it.
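That business control can be as small as a scheduled job. A hedged sketch, using only the Python standard library (the host name and threshold are illustrative):

```python
# A sketch of a "toilet check" for production: a scheduled job that warns when
# a TLS certificate is close to expiry. The host and threshold are
# illustrative; wire the warning into whatever alerting the business owns.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if days_until_expiry("www.example.com") < 30:
    print("TLS certificate needs renewing soon")  # raise a ticket or alert here
```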
So we might clean the toilets with a quarterly automated penetration test, alarms when the disk levels are high and a scheduled Jira ticket (why doesn't that exist?) for renewing the TLS certs, but what about the other layers of audit?
- What about the full stock take? Double-checking everything is correct and proper annually?
- The expired goods? Third-party software validation: not just CVEs, but whether there still is support from the third party (one way to check is sketched after this list)
- The health and safety risk assessments? Still in their early days, privacy impact assessments are becoming part of software development, but many still don't do them, and are they maintained on a lifecycle basis or only at first release? Are the requirements drawn out from them validated?
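One way to sketch that "expired goods" check: alongside a CVE scanner such as pip-audit, a scheduled audit can flag dependencies whose upstream looks abandoned. The package names and the two-year threshold below are illustrative; it uses PyPI's public JSON API.

```python
# A sketch of an "expired goods" audit for Python dependencies: flag packages
# whose last upstream release is suspiciously old. Names and the threshold are
# illustrative; pair this with a CVE scanner rather than replacing one.
import json
from datetime import datetime, timezone
from urllib.request import urlopen

STALE_AFTER_DAYS = 730  # two years without a release is worth a conversation

def last_release_age_days(package: str) -> int:
    with urlopen(f"https://pypi.org/pypi/{package}/json") as response:
        data = json.load(response)
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return (datetime.now(timezone.utc) - max(uploads)).days

for dependency in ["requests", "flask"]:  # in practice, read your lock file
    age = last_release_age_days(dependency)
    if age > STALE_AFTER_DAYS:
        print(f"{dependency}: last release {age} days ago - is it still supported?")
```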
As a lifecycle event it should be driven by business controls, which means the business knows what to control, and that requires documentation. That is fine in Agile: Agile only really demanded that you didn't write the thousand-page usage manuals you used to get with software in the 90s (okay, some were only 600 pages).
This is the key reason why Agile Software Development fails as a project management methodology on its own: the business behaviours demanded are always changing, and yet the project leaves features dead in Jira, with just bug fixing idling along until a new feature is demanded.