Soulsailor Consulting Ltd

Enabling organisational value by positively disrupting your technology projects

©2018 Soulsailor Consulting Ltd
Registered in England & Wales: 8086254
VAT Registration: 136 9505 96

IT Assurance - SharePoint Governance Guest Blog Series


Welcome to the first in a series of guest blog posts discussing alternative (well, different to mine anyway) perspectives on the Seven Waves of SharePoint Governance that I define in my book The SharePoint Governance Manifesto.

For this post we have Hugh Wood talking about Code Governance, which is an essential part of the IT Assurance Governance Wave and splashes into the Information Governance Wave too!


Hugh Wood

Code Governance

My name is Hugh Wood, and I am the lead developer at Rencore AB, the company behind SPCAF, the only code analysis framework for SharePoint. The tool is designed to help developers, architects, project managers and IT Pros keep track of threats in custom code solutions, reducing the cost and risk of every IT project involving custom solutions in and around SharePoint. I am also a qualified systems analyst and a member of the International Association of Software Architects, which means I can not only develop solutions for code governance but also understand them and quantify the data in custom solutions for analysis.

I have spoken to Ant Clay a few times about doing a blog series or a podcast on governance covering both the business and IT sides. Here we are, kicking it off with an IT-focused blog post on code governance.

The planning and development process around code is as complex as any other area of governance, yet most companies seem to ignore it.

So why code governance? Code is the critical weak point of any IT project. Code written by developers and deployed anywhere in your environment can open up new security breaches, violate information policies or even break local laws, and you may not even know it has happened.

Policies and Thresholds

If you are a company following information security management guidelines, then you are probably certified to ISO-27001. In contrast, ISO-9000 does not include guidelines for information security risk assessment.

This simple difference means that an ISO-27001 company will already have a set of information policies in place before we even get to code governance. An ISO-9000 company is not required to have such policies, although it still may, depending on the business involved.

That leeway cannot extend to customisations. If you are writing any form of customisation, or any code in any system, then you need to define quality, security and performance targets for all custom solutions. Depending on those targets, you can range from allowing only minor defects in the code to tolerating zero defects at all.

Each type of code platform should be considered separately. HTML, for example, has a greater tolerance for errors, but in low-level languages such as C++, errors that slip through into the system can cause catastrophic malfunctions.


Here we are looking at 3 KPIs from the code:

1. Defect Density (high or medium severity, per 1,000 lines of code)

2. Critical Defects (per 1,000 lines of code)

3. Uninspected Defects (defects that persist over time without being fixed)

If we are under one defect per 1,000 lines of code, then we are in good shape. However, if we are not at zero critical defects per 1,000 lines of code, then we have a problem, and it will cost money.

The third metric to track is uninspected defects. If a defect persists for more than 15 days, then we have a problem in that area of the code, and that can also cost money.

A reported defect may in fact be a false positive, in which case it is fully acceptable, once this has been verified by testing, to exclude the defect from further analysis.
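The three KPIs above can be reduced to a simple calculation. What follows is a minimal sketch, not how SPCAF or any real analyser works; the data layout, dates and variable names are illustrative assumptions, while the thresholds (one defect per 1,000 lines, zero critical defects, 15 days) come from the text.

```python
from datetime import date

def defect_density(defects, lines_of_code):
    """Defects per 1,000 lines of code (KLOC)."""
    return len(defects) * 1000 / lines_of_code

# Each defect: (severity, first_seen_date, is_false_positive)
defects = [
    ("medium", date(2018, 1, 2), False),
    ("high", date(2018, 1, 20), False),
    ("critical", date(2018, 1, 25), True),  # verified false positive
]

# False positives are excluded from further analysis after verification.
active = [d for d in defects if not d[2]]

loc = 12_000
density = defect_density([d for d in active if d[0] in ("high", "medium")], loc)
critical = [d for d in active if d[0] == "critical"]
today = date(2018, 2, 15)
uninspected = [d for d in active if (today - d[1]).days > 15]

print(f"Defect density: {density:.2f} per KLOC (target: under 1)")
print(f"Critical defects: {len(critical)} (target: 0)")
print(f"Uninspected defects (>15 days): {len(uninspected)}")
```

Even a throwaway script like this, run on each build's analysis output, makes the thresholds enforceable rather than aspirational.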


There are two areas of security to consider:

        Information Security – Encryption, storage and transmission.

        System Security – Memory leaks, unsafe code, authentication spoofing, etc.

Code analysis can prevent these from occurring by identifying known issues early. These will also affect quality defects but should always be tracked individually.

We track this with two metrics: Security Defect Density (per 1,000 lines) and Web Security Defects (total defects).
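To make the security metric concrete, here is a toy sketch of scanning source text for two well-known security smells and reporting a density per 1,000 lines. The patterns and names are illustrative assumptions only; real analysers (SPCAF included) go far deeper than pattern matching.

```python
import re

# Two deliberately simple "smell" patterns for illustration.
SECURITY_PATTERNS = {
    "hard-coded password": re.compile(r"password\s*=\s*[\"'].+[\"']", re.IGNORECASE),
    "insecure http url": re.compile(r"[\"']http://", re.IGNORECASE),
}

def security_defects(source: str):
    """Return (defect_name, line_number) pairs for each matched pattern."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECURITY_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, number))
    return findings

# One offending line in 1,000 lines of source.
sample = 'conn = connect(url="http://example.com", password="hunter2")\n' + "x = 1\n" * 999

findings = security_defects(sample)
density = len(findings) * 1000 / len(sample.splitlines())
print(findings, f"{density:.1f} security defects per KLOC")
```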

Research and Development Efficiency

There are critical metrics in code design that can not only impact performance but can also make code analysis and testing impossible. This creates a massive risk to any project, as code that breaks these thresholds cannot be reliably tested or analysed at all.

Furthermore, complex code is difficult to track and maintain, so we should measure complexity to prevent it from creeping in. This is especially important for code from a third-party supplier: if you take on external code, you can face extreme difficulty maintaining it in future endeavours.

We look at several indicators here:

        Cyclomatic complexity – If this reaches 24, the code is considered impossible to test; past 15 it is difficult to analyse; past 10 it is difficult to maintain. This is the key metric to track in order to ensure that all other metrics remain accurate.

        Comment Density – This is an optional metric, dependent on the strategy taken. There are two camps here: either code should be self-commenting, or comments should be provided to allow easier navigation of the code. If required, this should be recorded as comments per 1,000 lines of code. A density of around 25-30% is deemed acceptable.
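As a rough illustration of the complexity thresholds above, here is a sketch that estimates cyclomatic complexity for Python source using the standard library's ast module. It only counts the common branching constructs, so treat it as a teaching aid, not a substitute for a proper analyser.

```python
import ast

# Node types that introduce an extra path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 + the number of decision points found in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def rating(score: int) -> str:
    """Classify a score against the thresholds discussed above."""
    if score >= 24:
        return "impossible to test"
    if score > 15:
        return "difficult to analyse"
    if score > 10:
        return "difficult to maintain"
    return "acceptable"

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""

score = cyclomatic_complexity(sample)
print(score, rating(score))
```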


Testing is an important factor in code quality. Even when code raises no issues under static analysis, it can still fail to work at all.

We take one or more approaches to testing depending on how robust you require the project to be:

        Test Driven Development – TDD, as it is known, means that the test is written before the code. This is usually used for automated testing through unit tests; the code is then written to make the unit test pass. Every branch in the code requires further unit tests, which takes us back to cyclomatic complexity: perfect TDD will have one unit test per point of cyclomatic complexity in a method. However, coverage above 80% is usually acceptable as long as other types of testing are involved.

        Smoke Testing – This is a quick end-user pass done by the developer before the code is sent to a test team. It ensures that the software at least appears to work as required.

        Test Scripting – A test team writes set procedures for testing functionality, covering every scenario that should, and shouldn't, arise. Combined with the two methods above, this makes for an extremely sturdy approach, although smaller projects may or may not use this method on its own.
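As a tiny sketch of the TDD flow described above (in Python rather than SharePoint's usual C#, purely for brevity), the tests for a hypothetical `working_days_between` helper are written first, and the implementation is then written to make them pass. Note the two branches in the implementation and the matching tests, echoing the one-test-per-branch idea.

```python
import unittest
from datetime import date, timedelta

def working_days_between(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) in the half-open range [start, end)."""
    days = 0
    current = start
    while current < end:           # branch 1: loop condition
        if current.weekday() < 5:  # branch 2: weekday check
            days += 1
        current += timedelta(days=1)
    return days

class TestWorkingDaysBetween(unittest.TestCase):
    def test_empty_range(self):
        d = date(2018, 1, 1)
        self.assertEqual(working_days_between(d, d), 0)

    def test_full_week_has_five_working_days(self):
        monday = date(2018, 1, 1)  # 1 January 2018 was a Monday
        self.assertEqual(working_days_between(monday, monday + timedelta(days=7)), 5)

    def test_weekend_only(self):
        saturday = date(2018, 1, 6)
        self.assertEqual(working_days_between(saturday, saturday + timedelta(days=2)), 0)

# Run the suite programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWorkingDaysBetween)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```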

Software and Automation

The type of analysis software you use depends on the kind of project you have. Planning this should be part of your code governance strategy and should be in place before a project can start. Microsoft and other vendors do supply some free tools for basic testing, but there are many more advanced tools on the market that supply more accurate data.

I would strongly recommend having a team research this thoroughly, get demos from vendors and pick the correct tools for the job. The cost in this area can be up to 20% of the project's expected cost, but it will save more than is spent on the software in question.


Code governance is a broad topic, but it has some very tight focal points. Software can help you significantly, as long as the correct research is done to get the coverage your policy requires.

Personally, I would strongly recommend following the ISO-27001 information security guidelines and implementing coverage for all of the above KPIs and metrics, in order to ensure an acceptable level of IT Assurance, as Ant would put it in his Seven Waves of SharePoint Governance.