Category Archives: Pentesting

Firefox Add-Ons that you actually need

In this blog post I will introduce you to a few Firefox Add-Ons which are useful when assessing the security of web applications. There are many, many more Add-ons that people swear by but these ones help me out a lot.

To test a web application you are going to need a web browser, and that browser will need to be passed through a local proxy such as OWASP's ZAP or, if you are on someone's payroll, PortSwigger's Burp Suite Pro. I suggest that you pick Firefox for this purpose and that you use a completely separate web browser for keeping up to date with Twitter, idling in Slack channels and so on.

*STOP* In addition to the main point of this post, let me park up in this lay-by and drop an anecdote on you.

Many moons ago (~2006 I think) I was helping a newbie start their career. I told them to use one web browser for testing and another for their browsing. They didn't listen to that advice, so when they uploaded their test data for archiving it included their proxy logs. As I QAed their report I opened up the proxy logs to check some details and spotted a whole raft of personal browsing, and with it the password which they reused on everything at the time.

I didn't overly abuse that privileged information before the point was made: you need to keep things separate. Shout out to the newbie who still newbs, though they never write or visit anymore. I still love you. Not least because every newbie since has had this anecdote told to them and it has rounded out the point nicely.

Anecdote dropped. Let's discuss the four Add-Ons that help me out loads.

Multi Account Containers

URL: https://addons.mozilla.org/en-GB/firefox/addon/multi-account-containers/

This is amazing. You can set up containers, which are isolated browsing contexts (separate cookies, storage and logins) within the same Firefox instance. This means you can set up one tab to log in as an admin-level user and another tab to operate as a standard user:

Configuring multiple containers

These containers are marked by the colour you have assigned them and display the name on the far right:

Loading a site in two containers showing the different user levels

This is a game changer, honestly. I feel like the way I worked before was in a cave with no light. Now I can line up access control checks with ease and test complicated logic more efficiently. Absolutely brilliant.

A shout out to Chris who showed this one to me.

Web Developer Toolbar

URL: https://addons.mozilla.org/en-GB/firefox/addon/web-developer/

I have used this for a very, very long time. It is useful if you want to quickly view all JavaScript files loaded in the current page:

Viewing all JavaScript Files Quickly

You can achieve a lot of other useful things with it. My need for this has diminished slightly as the in-built console when you press F12 has improved over the years. But I still find it useful for collecting all the JavaScript.

Cookie Quick Manager

URL: https://addons.mozilla.org/en-US/firefox/addon/cookie-quick-manager/

Technically you can manipulate cookies using the Web Developer toolbar. I just find this Add-On's interface much easier to use:

Using Cookie Manager to add a new cookie

When you just want to clear a cookie, or maybe try swapping a value with another user's, this is quick and simple.

User-Agent Switcher and Manager

URL: https://addons.mozilla.org/en-GB/firefox/addon/user-agent-string-switcher/

Sometimes an application responds differently to different User-Agent strings. You can use a Burp match and replace rule, or you can use this add-on, which has the benefit of a massive list of built-in User-Agent strings.

You can also add a little bit to your User-Agent to differentiate your users like this:

Add String to User-Agent

By applying the setting to the container you can mark up which level of user made the request. Now that I do this I have found it absolutely invaluable for sorting out what I was doing.

When you view the requests in your local proxy you will instantly know which user level was making that particular request. This is vital, particularly where an app issues lots of teeny tiny annoying requests per minute and it is otherwise easy to lose track of which browser container was saying what.
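For example, a marked-up request in your proxy history might carry a header along these lines (the browser string and the "ADMIN" marker here are purely illustrative, use whatever text you fancy):

User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0 ADMIN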

I hope that has helped you. If you have any other Add-Ons you think are vital please sling me a comment or a Tweet. I'd like to look into more of them.

Regards

API testing with Swurg for Burp Suite

Swurg is a Burp Extender designed to make it easy to parse Swagger documentation and create baseline requests. This is a function that penetration testers need if they are being asked to test an API.

Our ideal pre-requisites would be:

A Postman collection with environments configured and ready to go valid baseline requests. Ideally setup with any necessary pre or post request scripts to ensure that authentication tokens are updated where necessary.

— Every penetration tester

Not everyone works that way so we often have to fall back to a Swagger JSON file. In the worst cases we get a PDF file with hundreds of pages of exposition, and from there we are punching uphill just to say hello to the target. That is a cost to the project and isn't a great experience for your customers either.

If you are reading this post and you are somehow in charge of how you distribute API details to your customers, then I implore you NOT to rely on that massive PDF approach. This is for your sanity as much as your customers'. Shorten your guides to explain how to authenticate and what API calls are required, in sequence, to achieve a specific workflow. Then, by providing living, breathing documentation which is generated from your code, you will rarely have to update the PDF. With the bonus that your documentation will be easier to interact with and accurate to the version of the code it was compiled against.

Anyway, you have come here to learn how to set up and start using Swurg.

A shout out and thank you to the creator Alexandre Teyar who saw a problem and fixed it. Not all heroes wear capes.

This extender is now in the Burp app store under the name “OpenAPI Parser” so you can install it the easy way.

But if you want to make any changes to this Extender, or to others in general, then the next few sections will be useful.

Check that you have Java in your Path

Open a command prompt and type:

java --version
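Note that the double-dash form only exists on Java 9 and newer. If you are on an older JDK the equivalent check is:

java -version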

If you get a warning that the command cannot be located then you need to:

  1. Ensure that you have a version of the JDK installed.
  2. Ensure that the path to the /bin folder for that JDK is in the environment's PATH variable.

Note: after you have added something to the PATH variable you need to load a new command prompt for the change to take effect. There is probably a neat way to bring altered environment variables into the current cmd.exe session but honestly? I have so rarely needed to set environment variables on Windows that I would not retain the command in memory anyway, so a restart suits me.

Installing Git Bash

I already had Git Bash installed but you might need it:

This has a binary installer which works fine and I have nothing more to add, your honour.

Installing Gradle on Windows 10

There is a guide (link below) but it misses a few beats for Windows 10:

Step 1: download the latest binary-only release from here:

There is no installer for the binary release so you have to do things manually. You will have a zip file. The guide tells you to extract it to "c:\gradle". Installing binaries in the root of c:\ has historically been exploitable in Windows, leading to local privilege escalations, so I get nervous when I see this in an installation guide!

Usually “C:\Program Files\gradle” would be the location for an application to be installed. In Windows 10 you are going to need admin privileges to write to either of these locations. It is generally assumed that basically all developers have this but that is often not the case.

Based on the installation steps you should be able to unzip anywhere you have write access such as “C:\Users\USERNAME\Desktop” or other location.

Having extracted the Zip you should add some environment variables:

  • GRADLE_HOME – set this to point to the folder you extracted. The location should be the parent folder of “/bin”.
  • JAVA_HOME – set this to point to the root folder of a JDK install. This is also going to be the parent folder of “/bin”.

Finally you need to add this to your PATH variable:

%GRADLE_HOME%\bin
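If you prefer the command line to the System Properties dialog, something like the following sets the first two for your user account (the install paths are made up, point them at your own folders). setx only affects new command prompts, and I would still add the PATH entry through the GUI because setx can truncate a long PATH:

setx GRADLE_HOME "C:\Users\USERNAME\Desktop\gradle-6.8.3"
setx JAVA_HOME "C:\Program Files\Java\jdk-11"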

If you ever upgrade to a newer version of Gradle (and since there is no installer I expect there is no automated update process) then you unzip the new version, change where GRADLE_HOME points, and your updated version will work.

Open yourself a new cmd prompt to ensure the env variables are applied. Type “gradle” and get your rewards:

Now let's get back to Swurg!

Building Swurg

The repository has excellent install instructions here:

But to tie it all together in my single post I’ll replicate what I needed to do.

I used git bash to clone the repository down and then gradle to build the jar:

git clone https://github.com/AresS31/swurg
cd swurg
gradle fatJar

That worked an absolute treat:

That process completes and leaves you a fresh new jar file in the “\build\libs” folder:

Installing Swurg in Burp

Use the “Extender” -> “Add” functionality to select the “swurg-all.jar”:

How to install a plugin manually

Using Swurg

You should now have a new tab and the opportunity to load a swagger file:

We have Swurg working away merrily here

If you load a valid swagger file this will create a full list of endpoints that you can explore.

Right click on an endpoint and you have an excellent place to start launching things from:

Sending things to Burp tabs

That is definitely enough to get going with. In my case I had replaced my target host with localhost to keep what I was testing anonymous.

This worked well for me and was worth the setup. I prefer this to Swagger-EZ, which I had been using in the past.

If we are honest, what we all want is a properly configured Postman collection: fully configurable environment variables and pre/post request scripts for things such as automatically carrying the current Bearer token into all subsequent requests.

In lieu of that this is a reasonable starting point which is embedded where you want it right into Burp suite. If I was to make any changes to the Extender I would probably want an option to globally set the host name and base folder locations. One of those “If I ever get the time” projects.

Hope this helps someone.

Preload or GTFO; Middling users over TCP 443.

Your website only has TCP 443 open and has a bulletproof TLS configuration. I hear you scream that I cannot middle your users to exploit them! On the surface of it you are correct. Let me lay out some basics, explain how we got here, and then show you that you are incorrect. We can middle your users (but it is unlikely).

Laying the basics about HTTP and HTTPS

The default port of the Internet is TCP 80, which is where requests prefixed with "http://" will go. This is a plain-text protocol and offers neither confidentiality nor integrity for data being sent between the client and server.

The default port for the “https://” protocol is TCP 443. This is an encrypted protocol with the “s” meaning “secure”.

As the Internet matured it became apparent that pretty much every request needed to be secured. An attacker using man-in-the-middle techniques can easily subvert plain-text communication channels. Any personal information being exchanged would be theirs to steal. They would also be able to alter server replies to serve either phishing or malware payloads straight into their victim’s browser.

This opened up a front in the cyber war to force encryption for every connection.

Question: What … all of them?

Answer:


Redirect to secure!

A common strategy has been to leave both TCP 80 and 443 open but to configure a redirect from 80 to 443. Any request over plain-text (http://) is immediately redirected to the secure site (https://).

The problem with this strategy is that the victim's web browser will still issue a plain-text request. If an attacker is in the middle when that happens, they can still compromise the victim. It only takes a single plain-text request and response to enable them to do so.
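You can check how a site behaves by asking for the headers of the plain-text URL with curl (target-site.example is a placeholder here). A redirect-to-secure setup will answer with a 301 or 302 and a Location header pointing at the "https://" address:

curl -I http://target-site.example/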

Only offer secure

To get around this, savvy administrators make no compromises and simply disable TCP port 80. If a web browser attempts an "http://" request the port simply is not open. It cannot establish a TCP session and so will not send the plain-text HTTP request.

The downside of this is that the user might assume the target application is not online. They would go and try and find another domain to buy whatever it was they wanted. This is why redirecting to secure has been such a pervasive strategy. Vendors simply do not want to lose out on important traffic which can drive this quarter’s sales chart.

What is this HTTP Strict Transport Security (HSTS) stuff?

You can learn more about HSTS here:

My understanding is that HSTS was created to reduce the number of plain-text HTTP requests being issued. There are two modes of operation:

  1. A URL is added to a preload list which is then available to modern web browsers.
  2. An HTTP header (Strict-Transport-Security) is added to server responses which tells the web browser to redirect all “http://” to “https://” before issuing the request.

When a user types a URL into the address bar and hits enter the browser will check to see if the redirection must happen. Where required the redirect happens in memory on the user’s computer BEFORE the TCP connection is established.

For strategy 1. the target site is in the preload list. A well behaved web browser will never issue a single “http://” request to the target site. The problem of middling the connection has been successfully resolved.

For strategy 2. we are arguably no better than the server redirecting from "http://" to "https://". A single plain-text request will be issued. If the attacker is middling at that point they can alter the response as desired to exploit users.

However, strategy 2. is likely to lead to fewer plain-text requests overall since the browser will not request via “http://” until after an expiry date. Relying on “redirect to secure” alone will result in a single plain-text request per visit the user makes to the site. This increases the number of opportunities to middle the victim’s connection.

Gap Analysis

The reason for writing this blog was because I had an interesting conversation with a customer. They enabled only TCP 443 (https://). They saw this as sufficient and did not want to enable HSTS as recommended in my report. I was challenged to show an exploit route that could work or they would not bother.

Fortunately the edge case I am about to explain has been public knowledge for a long time. So I didn’t have to think too hard to add it in. I am just adding my voice to bounce that beach ball up again for visibility.

Exploit Steps

The exploit route is like this:

  1. An attacker must be able to middle the victim’s traffic.
    • Chances are this is on the same network as the victim.
    • For this reason mass exploitation of users is unlikely and the risk is small as a result.
    • Let's proceed with the steps assuming that this attacker is ABSOLUTELY DETERMINED to exploit this one person.
  2. An attacker crafts a link and sends it to the victim to click on.
    • That link is: http://target:443.
  3. The victim clicks on the link and their browser dutifully establishes a TCP connection to port 443. Because the browser sees a service it can talk to, it fires off a plain-text "http://" request.
  4. The server then rejects the connection because it is expecting "https://". However, the damage has already been done. Our attacker got the single request that they needed for exploitation to occur.

The following screenshot shows the Wireshark capture when this example URL was requested:

URL: http://www.cornerpirate.com:443
DNS lookup and then HTTP request being captured

The only requirement for this to work is that the targeted TCP port is open. It is most likely that 443 is used but you can do the same thing with any open TCP port.

What is the solution?

The optimal solution is to enable HSTS via the preload method, even if your website only has HTTPS enabled.

Adding a site to the preload list can be done here:

All other solutions leave a victim’s web browser issuing at least a single HTTP request.

Unfortunately it takes time for a site to be added to the preload list. Therefore at the same time you should also enable the “Strict-Transport-Security” header as described:
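A typical value looks like the one below; max-age is in seconds (a year here), and the preload directive is what the preload list maintainers look for before accepting a submission:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload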

That is the famous belt and braces manoeuvre to reduce the chances of the world seeing your butt.

And you should definitely do as I say and not as I do:

Hope that helps

Basic code review tools for Ruby

This blog post is to document how to get started analysing a Ruby code base for trivial security vulnerabilities, particularly when, like me, you have absolutely no ability in Ruby. If you are being asked to do an actual code review then I feel sorry for you, dear reader. This will help you get started, but nothing replaces having developed something sizeable in the target language, plus elbow grease.

The sum total of my Ruby experience was my entirely unpopular module for Metasploit a few years ago called "git_enum". This is a post-exploitation module which will seek to rob any stored git passwords or authentication tokens from a user's home folder. I wanted to merge it into MSF but I am locked in anxiety about how awful that was to write, and assuming it would be laughed out of town if I dared try to contribute it!

I digress. My point was that I am not going to be getting scheduled on any Ruby source code reviews any time soon. The syntax is just alien enough to successfully spurn my interest.

This has been prompted by me having access to source code during an application test. This is a move from a black-box to a white-box methodology, to allow defence-in-depth recommendations to be made. There is no assumption that I am reading everything line by line. That said, when I have access to source code I like to leverage automation where possible to point toward weaknesses.

Overview of the process

  1. Obtain the source and save it locally
  2. Identify Static Code analysis tools for the target language
  3. Identify tools to check dependencies for known vulnerabilities

I don’t need to say much about 1. so I will move right on to discussing 2. and 3. below.

Static Code Analysis Tools for Ruby

There is almost always some very expensive commercial tool for doing automated static code analysis. They are probably very good at what they do. However, they always have eye-watering license fees and I have never actually had the privilege of using one to find out!

As this is not a full code review you will likely have no budget and so you need to find open source projects that support your target language. A great place to start is this URL from OWASP:

I picked two tools from that list which were open source and which seemed active within the last 2 years of development:

Both were easy to install and use within a Kali host. The other tools may be just as good, but two static analysers was enough for me.

Brakeman installation and usage

gem install brakeman
brakeman -o brakeman_report.html /path/to/rails/application

Dawnscanner installation and usage

gem install dawnscanner 
dawn --file dawn_report.html --html /path/to/rails/application

Dependency Scanning Tool for Ruby

A dependency is an extension to the core language which has been made by another project and then made available for others to use. Most applications are made using dependencies because they save development time and therefore cost.

The downside of using dependencies is that they are shared by hundreds, thousands, or millions of other applications. They therefore get scrutinised regularly, and a vulnerable dependency can mean a bad day for many sites at the same time. One thing you have to stay on top of is the version of the dependencies in use, and that is why this is an important check to make even if you are not doing a full code review.

The best dependency scanner out there is OWASP's own Dependency-Check. This tool is getting better every time I use it. It integrates with more dependency management formats all the time. As per the URL below:

This is capable of doing Ruby but to do so it uses “bundler-audit“. For this one I went straight to Bundler-Audit.

Bundler-Audit Installation and Usage

gem install bundler-audit
cd /path/to/rails/application # folder where the Gemfile.lock file is.
bundler-audit check

I would include one finding in my report for the outdated dependencies, summarising in a table the known vulnerabilities and the CVSS risk ratings taken from the CVE references that bundler-audit provides. If there are hundreds of known vulnerabilities you should prioritise and summarise further.

That is it for this blog post. You have to interpret the results yourselves.

Hope that helps.

Persistent SSH Sessions

If you win the lottery and start a job working as a penetration tester the chances are you will need to learn a couple of vital lessons sharpish. One that I like to drill into people is about SSH sessions that persist even if your client connection dies. A complete rookie mistake – that we all make – is to lose data when our SSH connection dies. Maybe the Wi-Fi disconnects or you close your laptop to go for lunch? Who knows.

Don’t blame yourself. The chances are you partly educated yourself and you were using either a Linux base machine or a VM. In that scenario your terminal lives as long as you want it to with no questions asked.

Now that you are on someone’s payroll the chances are you have a fancy “penetration testing lab” that you have to send connections through for legal reasons. While I like that I won’t lose my liberty it does introduce this complexity into our lives.

Tmux

I am a relative noob to Tmux but it really seems to be worth the investment of time.

If I had a time machine I would get future me who understands Tmux completely to come and teach me. Maybe in a fancy silver car with a… I am gonna say it… *pffft*. Ok ok, calm down. In a fancy silver car with a tmux-capacitor! I know some of you liked that pun and that means you are as bad as me.

The absolute basics are these three commands:

tmux new -s <session_name>    # used to establish a new session.
tmux new -s customerA         # I name a session after the project for ease.
tmux ls                       # list the sessions that exist
tmux attach -t <session_name> # used to attach to your previous session
tmux attach -t customerA      # attaching back to the session created last time.

If you create a new session you can then kill your client SSH connection by disconnecting from Wi-Fi or whatever. On reconnecting, you attach to that session and you have lost nothing (assuming the server has remained online and the issue was client side only).

For the purposes of this tutorial you have done all you need to do to prevent yourself losing work. Go you.

However, Tmux is capable of lots more, such as splitting an SSH session horizontally and/or vertically when you want to show two processes at once in a screenshot. Or what about having multiple "windows" in a single SSH session and a relatively easy way to move between those windows? Instead of having additional instances of PuTTY on Windows, or tabs in "MTPuTTY", you can do everything over a single SSH session inside of Tmux.
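The default key bindings for those features are worth committing to memory:

Ctrl-b %    # split into two panes side by side
Ctrl-b "    # split into two panes top and bottom
Ctrl-b o    # move between panes
Ctrl-b c    # create a new window
Ctrl-b n    # move to the next window
Ctrl-b p    # move to the previous window
Ctrl-b d    # detach, leaving the session running for later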

There is a full cheat sheet here.

https://tmuxcheatsheet.com/

Totally worth the learning curve.

Getting started with iOS testing

Jailbreak a device (At your own risk)

Disclaimer: I would never jailbreak a device that was going to carry my personal information. You should not either. It is absolutely at your own risk.

This blog post is about getting started with assessing iOS apps. I had not done this in a few years, so these are notes to bridge the past with the modern which may be of use to you.

There is currently a stable root exploit called "checkra1n". This works at the bootloader level, and so long as you prevent the handset from rebooting it will remain jailbroken. There are stable exploitation tools for Linux and now for Windows.

I use Windows as a host OS. I do this for many reasons but the simplest one is that Linux works better in a VM than Windows does, in my experience. I tried checkra1n in a Kali VM with the phone passed over USB directly to the VM. This was a dead end. The exploit process looked like it was working but it never completed; do not enter this cul-de-sac.

To get around that I could have tried the Windows exploit tools, but I opted to use "bootra1n". This is a bootable USB Linux distro which includes checkra1n and it worked exactly as advertised.

Install the target app via the App Store

  • Set up a test account without any of your real personal info.
  • Sign in to the App Store, and then install your target app on the device.

There are other ways to install apps, including "3uTools" (see the section later). For me this did not work as my target app was not available in the app store they maintain. If your target is available for install there then you will find an easier process where you don't need to dump the IPA file as described in the next section.

Dump IPA file from handset

  • On Jailbroken Handset
    • Open Cydia and install “frida-server” as per this guide.
  • Inside a Kali VM (I used a VM, you can go barebones. Process did not work on Windows).
    • Install frida
pip install frida-tools
  • Inside Kali install “frida-ios-dump”
apt-get install libusbmuxd-tools
iproxy 2222 22 &              # forward local port 2222 to the handset's SSH port over USB
ssh -p 2222 root@localhost    # leave yourself connected to this session
git clone https://github.com/AloneMonkey/frida-ios-dump.git
cd frida-ios-dump
pip install -r requirements.txt

Now all you need to do is run “dump.py” against your target as shown:

python3 dump.py <target_app_name>

To obtain the correct target app name use “frida-ps” as shown:

frida-ps -Uai

Getting MobSF The Quick Way

MobSF is an excellent tool for gathering some low-hanging fruit. As a minimum I would advise throwing every IPA (and Android APK) through it for static analysis. It does a good job of finding strings which may be of use, as well as analysing permissions and other basics. This post is about getting you started, and MobSF is an excellent place to end it.

Install Docker as per this guide. Then, after you have that up and running, you can get access to MobSF using this:

docker pull opensecurity/mobile-security-framework-mobsf
docker run opensecurity/mobile-security-framework-mobsf

This will start an HTTP listener bound to 0.0.0.0 which is great. But you need to know what IP address Docker just gave you. First list your running containers:

docker ps

Then use docker inspect with a grep to get that for you:

docker inspect <container_id> | grep IPAddress

Fire up your web browser at http://YOUR_IP:8000/ and you can upload the IPA file and it will give you that static analysis juice.
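If you would rather skip the docker inspect step, you can publish the port when starting the container and browse to http://localhost:8000/ instead:

docker run -it -p 8000:8000 opensecurity/mobile-security-framework-mobsf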

3uTools

This is a beast which gets around having to install iTunes, a bit of software I have a ~15-year history with and which I frequently refer to as a "virus". It is simply not possible for iTunes to be as shit as it is/was. Therefore, it must have been maliciously generated.

3uTools allowing you to dodge the virus that is iTunes

A lot (but not ALL) of apps from the App Store are available for install using this. You will still need to supply legit App Store creds to use that feature. If you can install using 3uTools then you get a super easy way to export the IPA file, but it only works on apps installed via 3uTools. In my case the app I needed to examine was in the App Store, but not in the 3uTools equivalent.

That's it from me. I am not going to rehash how to test an iOS app here as there are excellent resources explaining how to do that.

Your next steps would be to Google the heck out of these things:

Best of luck on your road to pwning iOS.


Pitfalls in Pentesting

In this post I am going to cover some pitfalls of Penetration Testing. It is kind of three rants stitched together, loosely around the theme of how we generally interact with customers, as well as the reporting processes that I have seen over the last 15 years.

A person whose job it is to respond to penetration testing findings was asked this question:

  • What are the pain points you have experienced when responding to Penetration test findings?

This is what they said:

“…For my part, as an engineer that gets the fallout from these things I can tell you that I really hate that these scans report stuff that’s been fixed by back-porting by the suppliers. I’ve lost count of the number of times I’ve had to explain to SecOps, Managers and developers that the hundreds of “alerts” they have can be ignored because RedHat have already backported fixes not reflected in the reported version numbers. Time to get off one of my soap boxes!..”

— Anonymous fighter in the trenches

It is also worth noting that this was not a customer of ours.

I yelled "preach!". Whoever this was, I really love that they hit the nail on the head. I opened my most recent report where I had tackled that concern, I hope, adequately:

An excerpt from a report

I hope that if the anonymous responder were to see my report, they would at least see that I considered their plight, and that I have given them an easy out when responding to their manager. "Look, this guy even said it is possibly a false-positive".

The target had a server banner which, if true, was vulnerable to several things. Unfortunately the OS was not listed in the banner (and was not otherwise 100% confirmed) so I could not prove or disprove the versions without either exploiting the issue or being given more access. Had the banner said "RedHat" then I would most definitely have changed what I said: it would have said there was a high potential that backporting was in use.

This set me off thinking again about how our industry often fails the customers we are paid to help.

If our industry has heroes they may or may not wear capes, but they almost definitely work on the blue side in my opinion: the brave souls tasked with the gargantuan job of interpreting penetration testing reports from multiple consultants and from different vendors. The variability of output is enormous. These warriors have to find some way to make it work regardless of what has arrived as the deliverable.

I have seen Pentest companies who try to solve it in two ways:

  • Dictatorship – Based on one person’s vision you set a reporting standard.
    • You develop a rigid knowledge base of vulnerability write ups which tells everyone exactly how to report something. This includes fixed recommendations which must be provided.
    • You retrain every consultant in your team to meet that standard.
    • You yell at people during QA to remove any sense of individuality in reporting.
    • You fall out over CVSS risk ratings because “we need to risk this exactly the same way as the customer got an XSS which was 6.5 last week”.
    • Some Customers LOVE This. They don’t want any variability because the master spreadsheet they have with all vulns exists. They want the exact risk score for every instance of a vulnerability ever. They just like it neat.
    • The goal is to make every report as identical as possible across any customer and from any member of the team. Robotic Reporting.
  • Cheerful Anarchy – You set a baseline standard for reporting by providing a structure for the reporting and a style guide. Then you let folks have at it!
    • You accept that Pentesting is a consultancy profession, which is influenced by the experience of the consultant doing the work along with their understanding of the customer's risk appetite.
    • You provide a basic knowledge base of vulnerability write ups which covers a consistent title, background, and baseline risk score. Then encourage the consultant to produce the remaining content just for that project.
    • You train your consultants to understand risk calculation and expect them to alter the baseline risk considering every instance they see.
    • The goal of this is to make every report tailored. Therefore inconsistencies will exist such as two consultants finding the same vulnerability with the same impact but providing different risk ratings.

Of the two I have always preferred cheerful anarchy. I know that some customers absolutely want a penetration test to deliver consistent results over time. It helps them sleep at night. I argue that a little anarchy might be good since the consultant should be free to express their opinions SO LONG AS THEY EXPLAIN THEM WELL ENOUGH.

In truth you need to essentially support both in 2020. Big accounts who want the consistency need to get it. Other customers who are perhaps in earlier stages of their security maturity processes should be given tailored findings in my opinion. They haven’t necessarily encountered an SQLi before, so you need to contextualise it a lot more. So I recommend being so flexible that you can be rigid… I suppose?

One place where a penetration tester needs to be super clear is when dealing with potential false positives. If the only evidence you have is from a vulnerability scanner then you have not done a good job. I implore you to always find some other means of confirmation.

In situations where the vulnerability is raised based only on banners, your flow is to:

  1. Find a working exploit. If you can, then try to exploit a docker container or VM with the same software first to verify the payload works well. Ask the customer if you can use the exploit. If you have done it in your lab first you can explain that it works well without a risk to stability. Otherwise you can warn them that it may trigger an outage. They can then make the decision themselves as it is their risk.
  2. If no exploit is available then, where you have the access, execute OS commands to verify the installed patch level. In most cases you do not have this access. You can either document the finding with caveats (as my report did), or... and I appreciate this is a revolutionary idea... you can ASK the customer to confirm the installed version themselves and provide a screenshot. In my case the time was not available to do so and I was forced into the caveat approach.

I know, I know. I suggested you speak to the customer! Worse still I say you should ask them to support you improving the quality of how you serve them. You should not forget that a Penetration Test is a consultation, and that you are on the customer’s team for the duration of the engagement.

They say you should never meet your heroes. But it has been going really well for me when I speak to them so far.

Hope that helps.

Encrypting files with openssl using a password

I needed to send an encrypted file to a user with a Mac. They were unable to install additional software on their machine, and I have no Mac to verify things on.

By default Macs ship with openssl installed (thanks Google), so the solution seemed to be to use that.

You can debate the encryption algorithm choice and substitute as appropriate. But the basic syntax for encryption and decryption using AES-256 is shown below:

Encrypt file with password

openssl enc -aes-256-cbc -iter 30 -salt -in report.pdf -out report.enc

Note: running this command will result in a prompt to enter the password, and confirmation.

Decrypt with password

openssl enc -aes-256-cbc -iter 30 -d -salt -in report.enc -out report-decrypted.pdf

Note: again this command will prompt for the password to be entered before extracting.

Warning; running with scissors

This is securing with a password. Go big or risk exposure here. Someone could always try brute force and you want to make sure that takes way way longer than the validity of the information you are protecting. I recommend 72,000 characters long as a minimum to be sure.
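If you cannot be bothered inventing one yourself, openssl will happily generate a long random password for you (48 bytes of randomness, base64 encoded):

openssl rand -base64 48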

Now you have a key distribution problem though. How do you get the password to the other person securely? You cannot email them the password, since email is the delivery mechanism for the encrypted file in my scenario.

  • Generally WhatsApp (or other end to end encrypted chat client to a mobile phone) is good.
  • Phoning and saying a long password can be awkward but works (so long as they promise to eat the paper they write the password on immediately).
  • SMS is less secure but still verifies that the recipient is in possession of that phone.

Hope that helps.

Retiring old vulns

There I was, finding a lovely Cross-Site Scripting (XSS) vulnerability in a customer site today. A complete beauty in the HTTP 404 response via the folder/script name. So I started to write that up.

I peered at the passive results from Burp Suite and noticed a distinct lack of a vulnerability I was expecting to see:

I looked at the HTTP headers and saw this peering back at me:

X-XSS-Protection: 1; mode=block

Burp was correct not to raise that issue because it detects where that very header is insecurely set or non-existent.

For the uninitiated, the "X-XSS-Protection" header is supposed to tell web browsers to inspect content from the HTTP request which is then present in the immediate response. It had a laudable goal: to make reflected XSS a thing of the past, or at least harder to exploit.

Chrome liked it so much it defaulted to having it enabled, even if the server didn't bother setting it. This caused much consternation.

Stawp making the world safer Google… Jeez!

I thought: ah, this is my testing browser (Firefox), I must have overridden the XSS filter.

  • So I try in Chrome.. *pop pop*.
  • So I try in Edge.. *pop pop*.

I think I google “Is X-XSS-Protection still a thing?” and stumble across this nugget:

Source: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection

No. It is not a thing. Has not been a thing for a little while.

The modern approach is to ensure that you use robust Content-Security-Policy settings. The radical approach is to prevent XSS by secure coding practices which will just never ever catch on.
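As a rough illustration, even a simple policy like the one below blocks inline script, which takes the sting out of most reflected XSS. A real policy needs tuning to the application in question:

Content-Security-Policy: default-src 'self'; script-src 'self'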

Security tools and scanners including Nikto, Burp Suite, and Nessus all still pull this header out as something to be reported on. Does it have any real relevance if user agents simply ignore it now?

It may impact older browsers. But generally, when you are talking about any web browser that old, there will be some way to completely control the victim's computer anyway. Logically you should concern yourself only with where the herd is running today.

My approach is to take this out the back and put it out of its misery with a few rounds through the head(er). Then I will stuff it and mount it on my wall next to "Password Field with autocomplete enabled", which is itself deprecated because browsers also choose to ignore it.

Time rolls on and standards change. Lets have a round of applause for good old “X-XSS-Protection”. It has been a good sport. A brilliant contender but sadly it never truly saw its potential realised because Arsenal kept buying replacement wingers. It never got any game time.

Uploading files when all else fails: rdpupload

The short version:

  • A tool which works in Linux and Windows which will “upload” a file to an RDP or other remote session where copy and paste or drag and drop are disabled.

Get the tool here:

Details

This is a very old technique. All I have done is have a stab at making my own tool for doing this. I meet aspiring hackers who say they want to jump into coding, but don’t have any “ideas”. They seem unimpressed when I say write a port scanner.

If that is you then I say to you: re-invent the damn wheel!

Sometimes the wheel needs upgrading you know? Many of the tools we have now as the "goto" for something are roughly the 17th take on that technique. Any tool can be toppled by a better successor.

But world domination is not the goal. Implementing your own versions of old ideas is really about getting your skills in for the day you invent an entirely new wheel. It also teaches you how a thing works, which is brilliant. At a job interview you will stand out if you actually know what the top tool does under the hood.

What I learned on this one

To make rdpupload I have learned:

  • argparse better (I have used this before)
  • how to simulate key presses in python
  • how to do a progress bar in a CLI
  • how to zip a file using python
  • how to play an mp3 in python (though it didn’t work on Windows, yolo).

But most importantly I learned how a file upload may work by typing it, along with how to decode that on the server side easily.

Technique Used

The following summarises the techniques used:

Attacker Side:

  1. Zip the file you want to upload (might save some characters depending on the file).
  2. Base64 encode that file (so every character we are going to use is available on a standard English keyboard).
  3. Split the encoded file into chunks of size 256 characters (arbitrary length choice here).
  4. Spoof a keyboard typing each block of 256 characters until it is completed.
  5. Display a progress bar and optionally play the sound of a typewriter hammering away while the “upload” happens.
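Steps 1 to 3 can be reproduced by hand with standard tools if you want to see what gets typed (the file names here are just examples); the wrap flag on base64 produces the 256 character lines:

zip nc.zip nc.exe                 # step 1: compress the file
base64 -w 256 nc.zip > nc.txt     # steps 2 and 3: encode and wrap at 256 characters per line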

Victim Side:

  1. Place the cursor into “Notepad” within an RDP session.
  2. When the “upload” is complete save that as a “.txt” file.
  3. Open a command prompt and use “certutil.exe” to decode the base64 encoded file. The syntax for that is shown below.
  4. Use the zip feature of Windows to unpack the zip file.
  5. Profit.

The decoder on the server side relies on "certutil.exe". Unless I am wrong this is available from Server 2003 upwards, so it is pretty useful for most use cases.


Syntax: certutil -decode <inputfile> <outputfile>

Example: certutil -decode nc.txt nc.zip

The decode command is also spat out on the Kali side for convenience once the upload is complete.