In this post I am going to cover some pitfalls of penetration testing. It is three rants stitched together, loosely themed around how we generally interact with customers and the reporting processes I have seen over the last 15 years.
A person whose job it is to respond to penetration testing findings was asked this question:
What are the pain points you have experienced when responding to Penetration test findings?
This is what they said:
It is also worth noting that this was not a customer of ours.
I yelled “preach!”. Whoever this was, I really love that they hit the nail on the head. I opened my most recent report where I had tackled that concern, I hope, adequately:
I hope that if the anonymous responder were to see my report, they would at least see that I considered their plight and that I have given them an easy out when responding to their manager: “Look, this guy even said it is possibly a false-positive”.
The target had a server banner which, if true, meant it was vulnerable to several things. Unfortunately the OS was not listed in the banner (and was not otherwise 100% confirmed), so I could not prove or disprove the versions without either exploiting the issue or being given more access. Had the banner said “RedHat” then I would most definitely have changed what I said: it would have noted a high potential that backporting was being used.
This set me off thinking again about how our industry often fails the customers we are paid to help.
If our industry has heroes they may or may not wear capes, but they almost definitely work on the blue side in my opinion: the brave souls tasked with the gargantuan job of interpreting penetration testing reports from multiple consultants and from different vendors. The variability of output is enormous. These warriors have to find some way to make it work regardless of what thing has arrived as the deliverable.
I have seen pentest companies who try to solve this in one of two ways:
Dictatorship – Based on one person’s vision you set a reporting standard.
You develop a rigid knowledge base of vulnerability write ups which tells everyone exactly how to report something. This includes fixed recommendations which must be provided.
You retrain every consultant in your team to meet that standard.
You yell at people during QA to remove any sense of individuality in reporting.
You fall out over CVSS risk ratings because “we need to risk this exactly the same way as the customer got an XSS which was 6.5 last week”.
Some customers LOVE this. They don’t want any variability because they maintain a master spreadsheet of every vulnerability ever found. They want the exact risk score for every instance of a vulnerability ever. They just like it neat.
The goal is to make every report as identical as possible across any customer and from any member of the team. Robotic reporting.
Cheerful Anarchy – You set a baseline standard for reporting by providing a structure for the reporting and a style guide. Then you let folks have at it!
You accept that pentesting is a consultancy profession, influenced by the experience of the consultant doing the work along with their understanding of the customer’s risk appetite.
You provide a basic knowledge base of vulnerability write ups which covers a consistent title, background, and baseline risk score. Then encourage the consultant to produce the remaining content just for that project.
You train your consultants to understand risk calculation and expect them to alter the baseline risk considering every instance they see.
The goal of this is to make every report tailored. Therefore inconsistencies will exist, such as two consultants finding the same vulnerability with the same impact but providing different risk ratings.
Of the two I have always preferred cheerful anarchy. I know that some customers absolutely want a penetration test to deliver consistent results over time. It helps them sleep at night. I argue that a little anarchy might be good since the consultant should be free to express their opinions SO LONG AS THEY EXPLAIN THEM WELL ENOUGH.
In truth you need to essentially support both in 2020. Big accounts who want the consistency need to get it. Other customers who are perhaps in earlier stages of their security maturity processes should be given tailored findings in my opinion. They haven’t necessarily encountered an SQLi before, so you need to contextualise it a lot more. So I recommend being so flexible that you can be rigid… I suppose?
One place where a penetration tester needs to be super clear is when dealing with potential false-positives. If the only evidence you have is from a vulnerability scanner then you have not done a good job. I implore you to always find some other means of confirmation.
In situations where the vulnerability is raised based only on banners, then your flow is to:
Find a working exploit. If you can, then try to exploit a docker container or VM with the same software first to verify the payload works well. Ask the customer if you can use the exploit. If you have done it in your lab first you can explain that it works well without a risk to stability. Otherwise you can warn them that it may trigger an outage. They can then make the decision themselves as it is their risk.
If no exploit is available, then execute OS commands (if you can) to verify the installed patch. In most cases you do not have this access. You can either document the finding with caveats (as my report did), or, and I appreciate this is a revolutionary idea, you can ASK the customer to confirm the installed version themselves and provide a screenshot. In my case the time was not available to do so and I was forced into the caveat approach.
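When a finding rests on a banner alone, the first mechanical step is pulling a product and version out of that banner so you have something concrete to check against the vendor's release history. A minimal sketch of that step, assuming the common "Product/x.y.z" banner shape (the function name and regex are my own, not from any tool mentioned here):

```python
import re

def parse_banner(banner):
    """Extract (product, version) from a typical 'Product/x.y.z ...' banner.

    Returns None when no version is advertised, which is exactly the
    situation where caveats in the report become mandatory.
    """
    match = re.match(r"([A-Za-z_-]+)/([\d.]+)", banner)
    if match:
        return match.group(1), match.group(2)
    return None

# A banner with a version gives us something to check against release notes.
print(parse_banner("Apache/2.4.29 (Ubuntu)"))  # ('Apache', '2.4.29')

# A stripped banner gives us nothing to go on.
print(parse_banner("Apache"))  # None
```

Remember that even a successfully parsed version only tells you what the service claims to be, not what is actually patched.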
I know, I know. I suggested you speak to the customer! Worse still I say you should ask them to support you improving the quality of how you serve them. You should not forget that a Penetration Test is a consultation, and that you are on the customer’s team for the duration of the engagement.
They say you should never meet your heroes. But it has been going really well for me when I speak to them so far.
When interviewing candidates, who have no previous penetration testing experience, there is often a gap in their knowledge. While they have all practised and honed their technical skills they will generally not have practised risk assessment or the impact that a vulnerability would have.
The probable reason for this is that the act of hacking a target is way sexier than trying to categorise or document the fault. There is no impetus to generate a report while you test Damn Vulnerable Web App or the myriad of other safe-to-play-with targets. So exactly why should you?
The difference between a hacker and a consultant is that as a professional you will have to document what you do. You will definitely have to tell the customer exactly what is impacted, who can do it, and for extra points equate that directly to their business if you can.
Failing to do so will generally result in a shrug from your customer and a look in their eye that asks “why should I care?”, while they spin a coke bottle absentmindedly.
In order to work out an appropriate impact rating you are going to need to answer at least these questions:
What is impacted?
Who can locate or exploit the vulnerability?
Are exploit tools and techniques freely available?
Does an attacker need any conditions to be true to exploit the flaw?
If an attacker were to exploit it, is there a direct impact to the business?
Simple eh? Let’s work through one example so you can see the reasoning and logic going into an impact rating.
Anyone with a clue will tell you that SQL Injection is a “high” risk vulnerability. But do you know exactly why? That is the difference I look out for.
A deeper understanding, rather than simply memorising the impact rating of everything, will help you risk the previously unknown flaws, or deal with the crafty bespoke ones that will never come around again in your career.
To enable me to set an impact I am going to need to spit out a bit on the location of the vulnerability as context is absolutely everything when you are creating an impact rating. Slavishly replaying the same ratings every time without reviewing the context makes a poor consultant. You are being paid to tailor your work to the actual environment and provide the right advice to them.
Overview of the target
The target web application is an e-commerce platform which sells items. It handles personal information for users including their contact details, home address, and their order history. The payment is handled by a 3rd party. The technology stack is Linux running MySQL and Apache. The location is on the product page through the “productId” parameter which is sent in the URL.
What is impacted?
The database for certain. With the application’s configured user you can read, modify, insert and delete data.
Potentially the operating system through “file read” and “file write”.
Potentially the operating system through command execution though more difficult in MySQL than in some alternative databases.
As a pro you will need to confirm these extra “potential” impacts to the operating system. For my simple scenario let’s say they have configured away your ability to access or write files, and that you cannot achieve OS command execution.
Who can locate or exploit the vulnerability?
The product page is accessed without authentication since people sign in at the point of sale only. There is no authentication barrier to limit knowledge of the flaw, any attacker can find this.
Are exploit tools and techniques freely available?
SQL Injection is a well known technique.
Training and practical tutorials are free and easy to find.
Tools exist such as sqlmap which can automatically find and exploit it WITHOUT needing to know the intricacies of the exploitation.
Bottom line: it is very easy to exploit.
Does an attacker need any conditions to be true to exploit the flaw?
Short answer: no
Longer answer: no user interaction is required to exploit. The attacker is over the Internet so does not require physical access or access to a particular local network. Worth repeating that it can be found and exploited without authentication.
If an attacker were to exploit it, is there a direct impact to the business?
There is a direct impact to the business.
The personal data that is being stored is identifiable and so would fall under the Data Protection Act in the UK. Should someone dump all the data and then leak that then a fine is likely for the business. Depending on the scale of the breach and the target customer this might be a sufficient fine to cripple their business or close it entirely.
There is also a potential reputation damage risk to the business. Consumer trust can be lost and sales will go down.
There we have it: all of the ingredients to consider when you think of the impact. There was a sleight of hand up there where I split the answer for “what is impacted?” into two different entities: the database, and the operating system. I will get back to that in a moment.
First let’s explain a simple impact model to you. There is a model called “CIA” which stands for Confidentiality, Integrity and Availability. Let’s expand a little on these three concepts:
Confidentiality – Access to information which an attacker should not have. Pretty simple if an attacker can read your account details or access files from the server then they will know more than they should. The impact of a loss of confidentiality is dependent on the value of the disclosed information.
Integrity – Ability to modify information or execute arbitrary commands on a system would affect the integrity. If you change the contents of a web page to suit your needs you have affected the integrity. If you execute an operating system command you cannot trust the server is operating as intended anymore.
Availability – If an attacker can simply delete the data or the website content then you will be making it unavailable for legitimate users. If there is another means by which to prevent legitimate users acting as they would like, then you will also have removed availability.
Let’s say you provide customers with impact ratings in the categories high, medium, or low. A very simplistic approach but a fairly effective one, and not uncommon in the industry. In order to get to your category of impact you will need to evaluate your vulnerability in terms of the CIA for each answer in “What is impacted?”.
As we provided two entities that are impacted (or potentially so), let’s ask and answer two more questions:
What is the impact rating for the database?
Reminder: our SQL injection has allowed full read, modify, insert and delete privileges.
Let’s fill out the CIA model for the database then:
Confidentiality – We can read everything. All user data is at risk including login credentials potentially but definitely including personally identifiable information. There is a “high” impact to confidentiality.
Integrity – We can modify everything. You cannot trust the data anymore since an attacker could alter every account’s password, invent new orders, etc. There is a “high” impact to integrity.
Availability – We can delete everything. Dropping all tables would remove everyone’s orders and their personal information. The site would be dead and nobody could access it or browse the product range. There is a “high” impact to availability.
Three “highs” under CIA? Seems like a “high” impact vulnerability to me, what about you?
What is the impact rating for the operating system?
Reminder: our SQL injection cannot read or write files to the operating system, and cannot execute operating system commands.
Let’s fill out the CIA model for the operating system then:
Confidentiality – We cannot read files. The impact to confidentiality of data held outside the database on the operating system is non-existent.
Integrity – We cannot modify files, and we cannot execute commands. The impact is non-existent.
Availability – We cannot execute commands. Even if we “drop all tables” at the database layer the OS would be functioning perfectly. Splitting hairs here because the net effect of dropping all tables is that the site would remain unavailable. But just to be clear the OS is sitting pretty and available.
Three “non-existent” impacts to the OS? Smells like a zero impact issue then.
You would provide customers with one impact rating only. The artistry in penetration testing is being able to calculate all of the potential impacts to arrive at a final snappy answer for the customer. They will often want to order vulnerabilities by the perceived “risk” or “impact” so that they can address the biggest points first.
You always lead with the biggest impact rating which in this case is to the database. You should also make mention of the OS impacts being explored in your report but proving fruitless on this occasion. However, we have arrived at “high” and that is what we go with.
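The reasoning above boils down to a tiny rule: rate each affected entity under C, I and A, take the worst rating per entity, then lead with the worst rating overall. A minimal sketch of that rule, using the entities and ratings from the worked example (the function names and the ordering of the scale are my own assumptions, not an industry standard):

```python
# The simple band scale, ordered lowest to highest.
LEVELS = ["none", "low", "medium", "high"]

def entity_rating(cia):
    """An entity's rating is the worst of its C, I and A impacts."""
    return max(cia.values(), key=LEVELS.index)

def overall_rating(entities):
    """Lead with the biggest impact found across all affected entities."""
    return max((entity_rating(cia) for cia in entities.values()), key=LEVELS.index)

# The SQL injection example: full database access, no OS access.
findings = {
    "database": {"confidentiality": "high", "integrity": "high", "availability": "high"},
    "operating system": {"confidentiality": "none", "integrity": "none", "availability": "none"},
}

print(overall_rating(findings))  # high
```

The point of writing it out like this is that the final answer stays defensible: anyone reading the report can trace the “high” back to the per-entity CIA reasoning.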
There are various other models for calculating risk or impact, and if you love a number from 0.0 to 10.0 then check out CVSS in particular. It is a fully fledged formula which embeds the concept of CIA reasonably well. As with my process above, you would need to calculate multiple risks based on what is being affected and then select the highest rating.
The problem is that CVSS does not always sit well with every type of potential risk you may need to capture in your report. For those fiddly bespoke ones you sometimes have to get your hands dirty and pick a “high”, “medium” or “low” out of the air.
Now that you have read this you will know exactly what to do on that day.
TL;DR version – I made a thing that makes the bulk of CVE vulnerability details easy to grep for use when reporting. Get it here: https://github.com/cornerpirate/cve-offline. I update it once a month from the NIST database.
Anyone sticking around to read the rest gets a process for checking for known weaknesses in a service. Rationales for why, as well as a bit about how to use CVE-Offline.
Rationale and Process for vulnerability identification
When writing penetration test reports you will need to interact with many online resources to do that well. The simplest thing that you should do is include a summary of any known weaknesses within an installed version of software.
The biggest database of shared vulnerability knowledge is the CVE database. An overview of that is available at the URL below:
Almost every vulnerability in common services will result in a vendor patch AND an entry with a unique CVE identifier. When you are targeting a customer you need to compile a list of known flaws. This is so you can advise them of the exact technical risk posed by using version X of Apache on the day the test was conducted (as an example, not to pick on Apache).
To do this, you can follow the process outlined below:
Identify a service version – “nmap -sV -p <portnumber> <ip>” will do this in most cases.
Look for known vulnerabilities in version – If you are using Nessus, or an alternative vulnerability scanner, they do a pretty decent job of maintaining their database of CVE issues. “IF $version == x.x.x THEN vulnerable to x,y,z” is the logic they use.
If the service is unknown to your vulnerability scanner you will have to find your own list of CVEs if possible. Look up the vendor and product on http://www.cvedetails.com/.
They have an export facility of sorts, so when you have an Internet connection you can use its output for your product to achieve what CVE-Offline does.
Alternatively, you are going to have to find the release notes, bug tracker, or change log for your target service. They usually track security defects using CVE references. This is not universal though.
Format your list for your report – now that you have a list, you are going to want to present it to your customer.
If your list of vulnerabilities is around 20 you can probably show a summary of each issue to the customer.
If the list is ENORMOUS, are you adding much value by creating 20 pages in your report? Probably not.
In both scenarios you can present the most significant risks with fuller write ups. Go out looking for exploit code to help quantify the exploitability of the service, and present a statistical summary.
For example: “The service had 200 known weaknesses within the CVE database. Of those, X were in the high risk range, Y were in the medium risk range, and Z were in the low risk range. The consultant would advise special attention is paid to CVE-….-…. and CVE-….-…., which allow remote code execution and denial of service respectively.”
Following that process should get you what you want in your reports.
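That kind of statistical summary can be generated mechanically once you have your CVE list. A minimal sketch, assuming the classic CVSS v2 severity bands (7.0 and above high, 4.0–6.9 medium, below 4.0 low); the function names and the sample data are my own:

```python
def band(score):
    """Map a CVSS base score onto the classic low/medium/high bands."""
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def summarise(cves):
    """Count (cve_id, score) pairs per band for the report narrative."""
    counts = {"high": 0, "medium": 0, "low": 0}
    for _, score in cves:
        counts[band(score)] += 1
    return counts

# Hypothetical sample list; in practice these pairs come from your CVE lookup.
sample = [("CVE-2016-0142", 9.3), ("CVE-2015-0001", 4.3), ("CVE-2014-0002", 2.1)]
print(summarise(sample))  # {'high': 1, 'medium': 1, 'low': 1}
```

Those counts drop straight into the “X were in the high risk range” sentence without any manual tallying.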
A word on false positives
The above process is pretty simple, but there is one massive caveat that you will need to be aware of. The process is, more often than not, based on the service banner returned by your target.
If the OS uses “backporting” to supply security updates, or the admin has mucked about, then the banner number will not necessarily reflect the target’s exploitability. In this case we can get into “false positives”. A quick Google gives us the following definition:
If the target is reporting a service banner which is inaccurate when checked against the vendor’s official release history, then your results will be inaccurate and you have a false positive.
If we are testing in a black-box scenario we have no access to the underlying operating system by default and cannot fully confirm the banner status in many cases.
What to do? Ensure that your report includes phrases like these:
Vulnerability based on service banner only.
Potential false positive result.
Impact and risk based on assumption that banner was accurate as this is the worst case scenario.
The first line of your recommendations section should definitely be “Conduct an investigation to ensure that the service is vulnerable”. Be sure that you convey these sentiments on all phone calls with your customer, and they should learn to place more trust in the findings that do not carry these caveats!
With that caveat dealt with by the language in your report, you can hope to prevent awkward phone calls from customers saying “we looked into it, and it was NOT vulnerable.”
Experience tells me that even if you include these caveats you will sometimes have to justify yourself. Be ready for it by preparing some polite reasons to give.
It is pretty simple to obtain CVE-Offline. Just clone the repository:

git clone https://github.com/cornerpirate/cve-offline.git
You can then use your platform’s “grep” of choice to find a CVE in the “cve-summary.csv” file. For example, to look for “CVE-2016-0142” do the following:
grep "CVE-2016-0142" cve-summary.csv
CVE-2016-0142,9.3,"Video Control in Microsoft Windows Vista SP2, Windows 7 SP1, Windows 8.1, Windows RT 8.1, and Windows 10 Gold, 1511, and 1607 allows remote attackers to execute arbitrary code via a crafted web page, aka ""Microsoft Video Control Remote Code Execution Vulnerability."""
This has given you a comma separated line of output that you can now work with. The format of this line is: CVE identifier, CVSS base score, and then a quoted summary of the vulnerability.
You can pipe that bad boy into a .csv file, open it in Excel and make pretty tables that you can paste into your report. You are welcome.
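One wrinkle with raw grep is that the summary field itself contains commas inside its quotes, so naive splitting on commas mangles the line. Python's csv module handles the quoting for you. A small sketch (the function name and abbreviated sample line are my own, only the three-field layout is taken from the file format above):

```python
import csv
import io

def rows_matching(csv_text, pattern):
    """Parse CVE-Offline style lines (id, score, quoted summary) and
    return rows where any field contains the pattern."""
    reader = csv.reader(io.StringIO(csv_text))
    return [row for row in reader if any(pattern in field for field in row)]

# Abbreviated sample line in the same three-field layout as cve-summary.csv.
data = 'CVE-2016-0142,9.3,"Video Control in Microsoft Windows, aka ""RCE"""\n'

for cve_id, score, summary in rows_matching(data, "CVE-2016-0142"):
    print(cve_id, score)  # CVE-2016-0142 9.3
```

In real use you would read the file with `open("cve-summary.csv")` instead of the inline string; the quoted commas and doubled quotes come out as clean fields either way.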
I update the repository once a month from the NIST export feeds. You can get the raw data from here if you want:
Report Compiler is a Java GUI allowing you to manipulate the output of various computer security scanner tools and save the results as XML. That is great if all you want to do is make an ordered list of vulnerabilities and set their ratings etc.
What it does not do is take that final step for you and make a Word document from the resulting tree. It does this on purpose, since that would become a commercial advantage that I should not give anyone, and also because you need to take ownership of your own company’s reporting template, not me! If I gave you a way to make Word docs it would simply become a bun fight as to what to include and where.
This post will cover techniques you can use to integrate with your own word template by presenting a complete end-to-end process.
So you WANT to import an XML file into a word document and have it do pretty things? You have three options:
Create a Word application level add-in
Pros:
1. Always available when Word is open.
2. Maximum level of control.
3. Allows you to use a “modern” approach via Visual Studio.
4. Shipped docx files do not have any macros (no security warnings to recipients).
Cons:
1. Visual Studio costs money to get the project template.
Create a Word template with embedded macros
Pros:
1. You can use the VB editor built into Word (ALT+F11).
2. Decent level of control over Word objects.
Cons:
1. Debugging a macro is a PAIN in the arse.
2. Can trigger security alerts when emailing resulting files.
3. The VBA coding interface is not modern compared to Visual Studio.
Create a script to convert XML to docx
Pros:
1. Doing it all in code means learning nothing about Word!
2. It is usually quicker to make a PoC.
Cons:
1. I have used various libraries to do this in various languages. You do not have full control of Word objects with any of them.
2. The learning curve is steep as the APIs are usually not well explained. For simple tasks this is usually sufficient though.
The keen ones amongst you will note that option 2 (embedded macro) is the only one that everyone has access to. So this post will focus on that.
This post will describe the techniques that can be used to integrate ReportCompiler’s XML with a word document. It covers the following:
Creating a report template – So we are on the same page, let’s make a blank document, and I will show everything I am doing step by step.
Creating a building block for your vulnerability – design a reusable section you want to rely on again and again to deliver the start point for your vulnerability write ups.
Add bookmarks – these are placeholders that mark, for instance, where the title of the vulnerability should be placed within your blank vulnerability.
Save that as a building block
How to manually re-use that building block again.
Modify Word GUI to add options – A tangent away from importing ReportCompiler but very useful.
Add a macro to import ReportCompiler XML file – After all of the above finally getting down to showing the Macro code capable of importing XML and dropping it within your blank vulnerability.
By the end of this you will be capable of leveraging ReportCompiler to aid your reporting in your day job. There should be no mysteries left and you can use it to bodge pretty much any reporting process which ultimately lands as a word document before delivery.
The final template that was produced by this tutorial is available on github at:
If you are familiar with all of the word concepts and just want to see the thing in operation then skip to the end my friend.
Create a report template
A template is NOT a docx; it is a dotx, or in this case a dotm file, since we are going to save a macro in it. If you are using a docx then you are not getting the full benefits of templating in Word.
When you double click on a template file (dotx or dotm) it will launch a new instance of the file called “Document 1” which is entirely separate. This means edits in your new report are not directly editing your source template. Ensure your template has no client data in it and profit from never worrying about cross contamination from old reports again!
Create a building block for your vulnerability
I covered the concept of building blocks in Word Tips. The high level idea is that if you want to re-use a section you should probably create a building block. In this case our Macro is going to want a predefined building block to drop and then populate with our data from ReportCompiler.
The following shows part of the template where I created a basic structure that all our vulnerabilities can go into:
The part highlighted in red demonstrates the area which will become our building block. The line starting 1.1 is a Heading 2 style paragraph which will be where the vulnerability title will appear. This is a good choice since your vulnerabilities will appear in the table of contents and you can cross-reference to this specific heading.
The previous screenshot shows the building block without any “bookmarks” set. Here I will show how to add one and then show the final product.
To insert a bookmark follow the steps below:
Place the mouse cursor at the desired location.
In this case click after 1.1 where the vulnerability title should go.
Click on the “insert” tab on the ribbon and then look for the “links” section to find the “Bookmark” option.
This will load a popup. Give the bookmark a useful name and then click on “add” as shown below:
After you have clicked on “add” you should see a new bookmark next to the 1.1 as shown below:
If you do not see that vertical grey bar then the bookmark has either not been added or your view in Word is set up to hide formatting.
If you click “insert” -> “Bookmark” your bookmark should be visible in the list also.
Now that you know how to add one bookmark, here I go skipping to the end to show where the additional ones go in our building block:
You probably want to take note of the exact bookmark names, as case is important when the Macro interacts with these.
Note: I took the screenshot at a point where the vuln title was not blank. I could not be bothered redoing the red lines. Sue me.
It is also possible to use placeholder text and then simply search and replace it to obtain the same results. However, when it comes to performance that worked out way slower and way more prone to errors. Hence I now use bookmarks, like I was probably always supposed to.
Save as building block
Now that we have a blank vulnerability section marked up with Bookmarks we should go ahead and save that as a building block. To do that select all of the content that you want to save as shown:
At this point you want to follow the steps below:
Click on the “insert” tab on the ribbon.
Click on the area labelled “Quick Parts” in the Text group (towards the right of the screen).
Click on “Save selection to quick part gallery”
Then configure the building block as shown below:
A couple of things are worth highlighting here:
Name – Select a useful name and note it down. Your macro will need to know the exact case sensitive version of this.
Gallery – Select a custom gallery; others are available, but if you use Custom 1 then you can easily add a control to Word that shows ONLY building blocks in that gallery. This is shown later under the heading “Modify Word GUI to add options” if you cannot wait.
Save In – If you want the building blocks to be shipped with your template then the name of your dotm should appear here. If you were to change the drop down to “normal.dotm” or the other options, then you will find that building blocks are stuck on your “developer” computer. By selecting the template file itself you are ensuring they are shipped to other users which is almost always what you will want.
Options – I have selected “Insert Content on its own page”. You can alter that if you want vulnerabilities to appear one immediately after the other. Sometimes people want one vulnerability per page etc. Choice is yours.
Click on “ok” and you will find that the building block has been saved to Custom Gallery 1.
Now that you have saved this as a building block you can go ahead and straight up delete the highlighted blank vulnerability. Your report template probably wants to start with zero vulnerabilities since you haven’t found any yet.
Save the template file at this point.
How to insert our blank vulnerability when making a report?
This section is optional if you know how to use building blocks. Skip straight down to the macro if you already know this.
In the previous section we finally saved our blank vulnerability as a building block. We can use the laborious process of “insert” -> “quick parts” -> “building blocks organiser” -> find the Custom 1 gallery entry, select the block we want and then “insert” to manually add our building block back in. But that is a lot of effort to do every time.
Word has a few options to make inserting your blank vulnerability easier. One is to simply start typing the name of the building block. In this case we called it “BlankVulnerability”.
The following shows what happens if you try and type “blank”:
You can press Enter to insert the building block. Amazeballs yea?
However, what if you grow your building block repository massively? Are you going to remember all of the names? If not, look in the next section to GUI yourself happy.
Modify Word GUI to add options
This section is optional if you know how to use building blocks. Skip straight down to the macro if you already know this.
It is possible to modify the Word GUI to add a custom tab to allow us to put pretty much any control we want where we want it. The process is a bit convoluted so most people get scared and never bother.
Follow the steps listed below to do this:
Access the word options by going to “File” -> and then “Options”.
Click on “Customize Ribbon” on the left hand side.
Where it says “Choose commands from” select the drop down and set it to “All Commands”
In the massive list of all commands find the “Custom Gallery 1” control (this has to exactly match the gallery you saved your building block into).
On the right hand side you need to create a new tab. Click on “New Tab”; you can right click on the newly created tab and give it a name. I called mine “ReportCompiler”.
You should then select the “ReportCompiler” tab and then click on “New Group”. Again, right click on the newly made group and rename it to something meaningful. I used “Custom Building Blocks” as the name.
On the right hand side you should have your custom group selected and on the left hand side “Custom Gallery 1” should be selected. Now click on “Add >>” in the middle and watch in awe as it adds that to your custom group.
The following screenshot shows how this should look at the end of the process:
If all is well you should have a brand new tab on your ribbon called “ReportCompiler”. Why on earth did I bother making you do that? Well looky at your new tab here:
It shows you a preview of the building block, it shows the name of the building block, it only shows your building blocks, and if you click on one it will drop it instantly into the word document.
This saves me a lot of time. However, this has nothing to do with the Macro importing of ReportCompiler which we are getting onto in the next section.
Add a macro to import ReportCompiler XML file
Now you have a blank vulnerability section marked up with bookmarks and saved in Custom Gallery 1. You are good to go and fire into the world of VBA within Word.
I have to admit that the development environment for VBA is pretty poor and you will have a real nightmare debugging things at first. It really feels legacy, but we are just about able to coax what we want out of it. If you want all of the modern conveniences, your best bet is to write an application-level Word add-in, which requires a paid-for version of Visual Studio to get the project template.
You should have a Word document roughly similar to mine at this point. Press ALT+F11 to launch the VBA Editor.
My file is called “Template.dotm”, and we want to make sure our VBA code is in the right project. The following screenshot shows how to locate the correct one:
Find the project called “Project (<filename>)” then expand and select “ThisDocument”.
This means that we are finally embedding a Macro into this template file.
When the user double clicks on “Template.dotm” they will get a new file called “Document 1.docm” which will also contain any VBA specified in “ThisDocument”. Got it? Good.
The following shows the full listing for the Macro code that I am giving you to get started with. If you copy it all and paste it into that massive empty text box on the right, you will have created the same Template.dotm that I have added to GitHub:
'Copyright 2016 Cornerpirate
'
'Licensed under the Apache License, Version 2.0 (the "License");
'you may not use this file except in compliance with the License.
'You may obtain a copy of the License at
'
'    http://www.apache.org/licenses/LICENSE-2.0
'
'Unless required by applicable law or agreed to in writing, software
'distributed under the License is distributed on an "AS IS" BASIS,
'WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
'See the License for the specific language governing permissions and
'limitations under the License.

' 1) Prompt user for location of report compiler file.
' 2) Parse file and get access to all "vuln" tags.
' 3) Loop through each vuln and add into word document at current selection location.
' There is no error handling, and this is NOT handling how you get the affected hosts,
' or the reference URLs. It is a demo of how to read the XML file and shows
' how to access XML attributes, and decode the base64 text fields. You can take it
' from here right?
Sub ImportVulns()
    Debug.Print "== ImportVulns"

    ' 1) Prompt user for file
    Dim filepath As String
    filepath = promptForInputFile()
    Debug.Print "User Selected:" + filepath

    ' 2) Parse XML
    Dim objXML As MSXML2.DOMDocument60
    Set objXML = New MSXML2.DOMDocument60
    If Not objXML.Load(filepath) Then
        Err.Raise objXML.parseError.ErrorCode, , objXML.parseError.reason
    End If

    Dim xmlNodes As MSXML2.IXMLDOMNodeList
    Dim xmlNode As MSXML2.IXMLDOMNode
    Set xmlNodes = objXML.getElementsByTagName("vuln")

    ' Before inserting anything park the undo record tracker.
    ' This should mean one undo will remove all vulns added.
    Dim objUndo As UndoRecord
    Set objUndo = Application.UndoRecord
    objUndo.StartCustomRecord ("Insert Vuln Action")

    ' 3) Loop through each vuln tag.
    For Each xmlNode In xmlNodes
        Dim title, description, riskcategory, riskscore, cvssvector, recommendation As String
        Dim customRisk As Boolean

        ' get properties from attributes
        riskcategory = xmlNode.Attributes.getNamedItem("category").text
        ' StrComp returns 0 on a match, so customRisk ends up True
        ' when the custom-risk attribute is NOT "true".
        customRisk = StrComp(xmlNode.Attributes.getNamedItem("custom-risk").text, "true")

        ' A "custom risk" is something using "high", "medium", or "low" style risk scoring.
        ' A "CVSS" vuln is one using CVSS.
        If (customRisk) Then
            cvssvector = xmlNode.Attributes.getNamedItem("cvss").text
            ' Here I am substringing to get only the base vector.
            ' This is a demo only, and you might want the full vector.
            cvssvector = Mid(cvssvector, 7, 26)
        Else
            ' This is a custom risk, we do not have a CVSS vector
            cvssvector = "n/a"
        End If

        ' get simple data from child nodes.
        title = DecodeBase64(GetDataFromNodesChildren(xmlNode, "title"))
        description = DecodeBase64(GetDataFromNodesChildren(xmlNode, "description"))
        description = Replace(description, vbNewLine, "BB")
        recommendation = DecodeBase64(GetDataFromNodesChildren(xmlNode, "recommendation"))

        ' At this point we need to get the building block
        ' called "BlankVulnerability" which will be in the template's building blocks.
        Dim oTemplate As Template
        Dim oBuildingBlock As BuildingBlock
        Dim theBuildingBlock As BuildingBlock
        Dim i As Integer
        Set oTemplate = ActiveDocument.AttachedTemplate
        For i = 1 To oTemplate.BuildingBlockEntries.Count
            Set oBuildingBlock = oTemplate.BuildingBlockEntries.Item(i)
            If oBuildingBlock.name = "BlankVulnerability" Then
                Set theBuildingBlock = oBuildingBlock
            End If
        Next

        ' Add this vulnerability at the current selection range.
        ' This means "where the cursor is"
        Set newRange = theBuildingBlock.Insert(Selection.Range)

        ' After insertion update all of the details using the bookmarks for the locations
        newRange.Bookmarks("vulnTitle").Range.text = title
        newRange.Bookmarks("vulnDescription").Range.text = description
        newRange.Bookmarks("vulnRecommendation").Range.text = recommendation
        newRange.Bookmarks("vulnRiskCategory").Range.text = riskcategory
        newRange.Bookmarks("vulnRiskScore").Range.text = riskscore
        newRange.Bookmarks("vulnCVSSVector").Range.text = cvssvector
    Next

    ' End the custom undo record
    objUndo.EndCustomRecord
End Sub

' Display popup looking for Report Compiler XML file.
' Returns file path.
Function promptForInputFile() As String
    Dim fd As Office.FileDialog
    Set fd = Application.FileDialog(msoFileDialogFilePicker)
    fd.AllowMultiSelect = False
    fd.title = "Please select the file."
    fd.Filters.Add "Extensible Markup Language Files", "*.xml"
    fd.Show
    promptForInputFile = fd.SelectedItems(1)
End Function

' Drill through the child nodes and return text of the named node.
Function GetDataFromNodesChildren(xmlNode As MSXML2.IXMLDOMNode, name As String) As String
    Dim answer As String
    Dim childNodes As MSXML2.IXMLDOMNodeList
    Set childNodes = xmlNode.childNodes
    Dim childNode As MSXML2.IXMLDOMNode
    For Each childNode In childNodes
        If StrComp(childNode.nodeName, name) = 0 Then
            ' found the target node
            answer = childNode.text
        End If
    Next
    GetDataFromNodesChildren = answer
End Function

' Use MSXML to decode a base64 encoded string.
Function DecodeBase64(strData As String) As String
    Dim objXML As MSXML2.DOMDocument60
    Dim objNode As MSXML2.IXMLDOMElement
    Set objXML = New MSXML2.DOMDocument60
    Set objNode = objXML.createElement("b64")
    objNode.dataType = "bin.base64"
    objNode.text = strData
    DecodeBase64 = StrConv(objNode.nodeTypedValue, vbUnicode)
    Set objNode = Nothing
    Set objXML = Nothing
End Function
This has comments to explain the big things and it will read better in the VBA editor than on this page for sure.
Use CTRL+S to save the macro into your document.
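If you want to sanity-check the XML handling before pointing the macro at a real export, the same parse-and-decode logic can be exercised outside Word. The sample below is a hand-made file whose tag and attribute names are taken from what the macro reads; the exact schema ReportCompiler emits may differ, and Python is used here purely for brevity:

```python
import base64
import xml.etree.ElementTree as ET

def b64(text):
    # Encode a string the way the "vuln" child nodes are stored.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

# Hand-made sample matching the names the macro reads; not an official schema.
sample = f"""<report>
  <vuln category="High" custom-risk="false"
        cvss="CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H">
    <title>{b64("Outdated Server Software")}</title>
    <description>{b64("The banner advertised an old version.")}</description>
    <recommendation>{b64("Upgrade to the latest stable release.")}</recommendation>
  </vuln>
</report>"""

def parse_vulns(xml_text):
    # Mirrors the macro: walk every "vuln" tag, read the attributes,
    # and base64-decode the text fields.
    vulns = []
    for node in ET.fromstring(xml_text).iter("vuln"):
        vulns.append({
            "category": node.get("category"),
            "cvss": node.get("cvss"),
            "title": base64.b64decode(node.findtext("title")).decode("utf-8"),
            "description": base64.b64decode(node.findtext("description")).decode("utf-8"),
            "recommendation": base64.b64decode(node.findtext("recommendation")).decode("utf-8"),
        })
    return vulns

print(parse_vulns(sample)[0]["title"])  # Outdated Server Software
```

If the fields come out readable here, the VBA side should have no trouble with the same file.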
At this point you can run the “ImportVulns” function by placing your cursor inside it and pressing F5. Or you can get fancy and bind the macro to a button on our ReportCompiler tab (if you followed the previous section you should roughly know how already).
The following screenshot shows how that should appear once done:
The difference this time is that you should select “Macros” for the “Choose Commands From” setting. I also created a new group and called it “import” and then placed the macro under that group.
If you have done the customisation correctly you should see the following options on your “ReportCompiler” tab:
If you click on this button it will do as the VBA comments say:
1) Prompt user for location of report compiler file.
2) Parse file and get access to all “vuln” tags.
3) Loop through each vuln and add into word document at current selection location.
Error: “User-defined type not defined”
If running the “ImportVulns” function produces this error, the chances are that you need to add a reference to the XML library. To fix this follow these steps:
Select the relevant “ThisDocument” again from the projects on the top left side (this was shown previously).
Click on “tools” menu along the top and select “References”.
Scroll down the list until you find “Microsoft XML, v6.0” and tick its checkbox.
The following screenshot demonstrates the above process:
I am an odd penetration tester. I actually *enjoy* making reports. I like explaining things to customers and making anything with my name on it as useful as possible. I shoot for the most detailed and tailored recommendations that I can muster. I am also big on making the smallest list of vulnerabilities possible. If two things are solved by one recommendation? Then it 100% must be one thing. I aim to get to the root cause to reduce the burden on customers when reacting to my reports.
To get toward my goal I have made a raft of tools to help me gather evidence, collate and interact with it, and ultimately kick into a report. The sooner it gets into Word the sooner I can go nuts with formatting everything appropriately.
With this I have solved a lot of the hassle for people wanting to interact with the output from Vulnerability scanners.
ReportCompiler is useful for a range of people: those who simply get handed the output from various scanners (say those engaged in VA cycles) and want to make a spreadsheet; those who want a better Nessus viewer (and be honest, the web interface pisses you off!); and those who want to actually automate their reporting and are willing to do some work for it.
The GUI is, I am told, reasonably intuitive. If in doubt right-click on shit and eventually you might find what you were looking for. You get a different context menu on the vulnerability tree and on the affected hosts list in particular, and it’s worth seeing those options.
Caveat: This is bleeding edge. It has no undo/redo for actions on the vulnerability tree or the affected hosts list, and it does not autosave. Trusting it to do these things will ruin your day.
Caveat 2: While you can edit text, there is no spell check and there is no support for advanced formatting. I would only use this to make a list of vulnerabilities with a risk rating, a generic description, and references. Word is a far superior editor, so be aware of the limitations.
Caveat 3: I know the features it is missing. I have a road map, but that map has no timescales!
What does it Import
Currently it includes the following importers:
Nessus (.nessus files only using v2.0 of their XML)
SureCheck (.xml file only)
Burp Scanner (you have probably never found the export option for this though; it is not “file -> save”. You right click on the issues, save the XML, and select Base64 encode request/response).
It has a reasonably flexible architecture for importing which means if I have a need to import something I basically implement it pretty quickly. In the past I have written parsers for many more security tools but this is what I have in the open source realm. A decent start.
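To give a feel for what such a pluggable importer architecture can look like, here is a minimal sketch. Every class, field, and plugin name below is hypothetical (this is not ReportCompiler's actual API, and the real tool is Java, not Python): each importer registers itself against a file extension and returns findings in one common shape.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Finding:
    # The common shape every importer normalises into.
    title: str
    risk: str
    hosts: set = field(default_factory=set)

class Importer(ABC):
    extension = ""  # file extension this importer claims

    @abstractmethod
    def parse(self, path):
        """Return a list of Finding objects for the given file."""

REGISTRY = {}

def register(cls):
    # Decorator: map the importer's extension to one shared instance.
    REGISTRY[cls.extension] = cls()
    return cls

@register
class NessusImporter(Importer):
    extension = ".nessus"
    def parse(self, path):
        # A real implementation would walk the .nessus v2 XML here;
        # this stub just returns a canned finding.
        return [Finding("SSL Certificate Cannot Be Trusted", "Medium", {"10.0.0.1"})]

def import_file(path):
    # Dispatch purely on extension; adding a new tool means
    # adding one decorated class, nothing else changes.
    ext = path[path.rfind("."):]
    return REGISTRY[ext].parse(path)

print(len(import_file("scan.nessus")))  # 1
```

The point of the registry is that the caller never names a specific scanner, which is roughly why a new parser can be bolted on quickly.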
What does it Output?
It outputs to either of the formats listed below:
ReportCompiler XML – the “file -> save as” option will drop a save file so you can keep your current details for later.
Note: you should use CTRL + S to save files as you go because this is not going to autosave a damn thing for you.
I periodically “save as” to create a new file, so as not to put all my eggs in one basket.
It is not made as a text editor for vulnerabilities, so I just tend to sort out risk scores and line things up for proper editing in Word.
Excel XLS – the “export -> xls” option will drop a spreadsheet that can be used to debrief clients pretty well. It is devoid of any additional content though, so you will look like a total amateur if you attempt to sell it as a report.
It purposefully doesn’t make Word documents. That would be a commercial advantage, and I literally will never make this open source version make your reports for you. Sorry bro, you will have to manipulate the XML file into whatever template you have.
I will make a tutorial on how to do that, but it will take a bit of time to assemble.
You are free to fork the source code repository and make it do whatever you want. All that I ask is that importing routines for standard tools are fed back to the main open source repo by a pull request.
Brief Overview of what it does
The killer (but not exhaustive) features are basically the following:
Merge – Select more than one vulnerability and right click to merge. You will understand what this does, and eventually it will solve most of your problems: from reducing multiple overlapping issues into one while keeping the affected hosts accurate, to conflating two issues, keeping the affected hosts from both but the text write-up from one. This is the go-to feature for lots of stuff.
Personal Vulnerabilities – Write your own versions of vulnerabilities and then save them for later. You can marry up one personal vuln to many issues from vulnerability scanners and then have them “auto merge”, or you could add yours to your tree and manually merge. The result would be you converting multiple cruft vulns into one well worded and ready to fire issue.
A classic example would be writing one “Insecure SSL Certificate” finding which is mapped to the 5 or so Nessus plugins. Great, I get it that the certificate is invalid, but since the solution is “buy a valid cert”, I really don’t need to tell my customer 5 times do I?
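The merge semantics described above can be sketched in a few lines. The data shapes below are illustrative only (ReportCompiler’s internals will differ), but the idea is that the personal write-up wins while the affected hosts are unioned:

```python
def merge(personal, scanner_findings):
    # Keep the personal wording, union the hosts from every scanner finding.
    merged = dict(personal)
    merged["hosts"] = set()
    for f in scanner_findings:
        merged["hosts"] |= set(f["hosts"])
    return merged

# Three overlapping certificate findings as a scanner might report them.
ssl_plugins = [
    {"title": "SSL Certificate Cannot Be Trusted", "hosts": ["10.0.0.1"]},
    {"title": "SSL Self-Signed Certificate", "hosts": ["10.0.0.1", "10.0.0.2"]},
    {"title": "SSL Certificate Expiry", "hosts": ["10.0.0.3"]},
]

one_finding = merge(
    {"title": "Insecure SSL Certificate", "recommendation": "Buy a valid cert."},
    ssl_plugins,
)
print(sorted(one_finding["hosts"]))  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```

Three cruft vulns in, one well-worded finding out, and no host dropped on the floor.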
Output to XML – A pretty simple XML format is used as the save file. Once you are done with manipulating your list of vulns you can save it for later. If you do a bit of legwork you can integrate this XML file into your reporting format for production.
Grep for vulns – There is a tree filter to which you can supply Java regular expressions to group vulns together. This is very good when it comes to finding issues you might want to delete entirely or merge together.
There are many more features in there. Some are bleeding edge, and the joy is not telling you where those mines are. If I haven’t listed it above? Chances are it’s fun but probably a use-at-your-own-risk kind of thing.
Over this summer I have delivered a speech entitled (in my head) “how to Word” a number of times. It covers some stuff which helps get you from someone who wrote a CV in Word once to someone who can spit out 100-page documents in 5 days.
Likely to be a thing I add to over time. The blog post is really to shamelessly link people to the pinned page:
When I am interviewing people for a job I use the line about how “Pentesting is 50% technical and 50% consulting”. Not much point in being able to do technical gymnastics if you cannot document it, explain things right, or deliver those notes on time to customers.