Category Archives: Pentesting

Solving a pentester’s pesky proxy problem

I usually test web applications using Firefox because it uses its own proxy settings and is easy to configure with Burp. Chrome is then used for googling answers, shitposting on Twitter and so on, so that traffic is never logged by Burp. This should sound familiar to most pentesters.

This process falls down when you need to test a thick client or OS binary which only uses Internet Explorer’s proxy settings. Because Chrome also uses IE’s settings, all of your googling will now pop up in Burp.

IE’s proxy settings can be configured with PAC files. I have known this for a very long time. But I had never actually taken the leap to think “oh, that means I can tell it to only apply a proxy for the specific backend server the thick client uses” before. Proof, if more be needed, that I can be a pretty dull axe at times. I couldn’t chop a cucumber.

Here is a valid proxy configuration file:

function FindProxyForURL(url, host) {
    // use proxy for specific domains
    if (shExpMatch(host, "*.targetdomain.com"))
        return "PROXY localhost:8080";

    // by default use no proxy
    return "DIRECT";
}

Change the host pattern to match your target domain. Save this as a “.js” file someplace you can type the path to, then import it into Internet Explorer’s proxy settings.

Revel in the freedom to live your best life on your terms.

Take care

Letsencrypt certificates for your python HTTP servers

Back in 2016 I blogged about how to spin up simple HTTP or HTTPS servers with Python. You need these when you want to temporarily host files, or to investigate SSRF issues properly.

There my skills sat until, recently, the user agent making the SSRF request was actually verifying the certificate. How rude! So I needed to up my game and generate a proper certificate.

Here are some caveats to the guide which you need to be aware of before you proceed:

  • My OS was Kali Linux (Good for standardisation for penetration testers but won’t be applicable to every one of you legends who are reading this).
  • The server that I am using has its own DNS entry already configured. This is the biggest gotcha. You need to have a valid DNS record. Now is the time to buy a cheap domain! Or maybe investigate a TLD which allows anyone to add an A record, such as “.tk”.

If you can point a DNS entry at your Kali server then you are going to be able to do this.

In one terminal:

mkdir /tmp/safespace
cd /tmp/safespace
python -m SimpleHTTPServer 80

NOTE: I created a new directory in “/tmp” to make sure that there is nothing sensitive in the folder I am about to share. If you share your “/home” folder you are going to have a bad time. While the HTTP listener is running anyone scanning your IP address will be able to list the contents of the folders and download anything you have saved.
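If your install has retired Python 2, the built-in Python 3 module does the same job. A minimal equivalent, assuming you want to serve the same folder on port 80:

python3 -m http.server 80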

From another terminal you need to use “certbot”:

apt-get install certbot
certbot certonly
....
pick option 2
set the domain to <your_domain>
set the directory to /tmp/safespace

The “certbot certonly” command runs a wizard-like interface that asks you how to proceed. I have given you the options above. What this does is create a file in your /tmp/safespace folder that letsencrypt can download on their end. This proves that you have write permissions to the web root of the server and allows them to trust that the request is legit.
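If you would rather skip the wizard, the same choices can be expressed as a single command. A sketch, assuming your domain and the webroot created above:

certbot certonly --webroot -w /tmp/safespace -d <your_domain>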

The output of the “certbot certonly” command will list the location of your new TLS certificates. They will be here:

/etc/letsencrypt/live/<your_domain>/fullchain.pem
/etc/letsencrypt/live/<your_domain>/privkey.pem

You can go back to your first terminal and kill that HTTP listener. We will no longer be needing it! We have a proper TLS setup, so let’s go in a Rolls Royce baby, yeaaaah!

You can use Python’s “twisted” HTTPS server as shown below:

python -m twisted web --https=443 --path=. -c /etc/letsencrypt/live/<your_domain>/fullchain.pem -k /etc/letsencrypt/live/<your_domain>/privkey.pem

That was it. I was able to browse to my new HTTPS listener and I had a shiny trusted certificate.
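If you would rather not pull in twisted, the Python 3 standard library can serve the same files over HTTPS. A minimal sketch, assuming the letsencrypt paths above and that you run it from the folder you want to share:

import http.server
import ssl

# serve the current directory over HTTPS on port 443
httpd = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("/etc/letsencrypt/live/<your_domain>/fullchain.pem",
                        "/etc/letsencrypt/live/<your_domain>/privkey.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()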

Hope that helps

Pentesting Electron Applications

I recently came across my first Electron application as a target. As usual I tried to take notes as I went, and here they are so that I am ready for the next time.

When you are targeting an “app” (a thick client or mobile application) I always want to:

  • Decompile it if possible – to enable source code review
  • Modify and recompile if possible – to enable me to add additional debugging to code or circumvent any client side controls.
  • Configure middling of communication channels – to enable testing of the server side API.

That is the general process for this kind of task, and this blog post covers how to do all of these for Electron applications.

Extracting .asar Files on Windows

When going to the application’s folder I found that it had a “resources” directory containing a couple of “.asar” files. A quick Google uncovered that this is a tar-like archive format as per the link below:

This is similar to APKs in the Android app testing realm. To get serious we need to extract these to have a proper look.

First get Node.js:

After installing this open a new command prompt and type “npm” to check that you have the Node Package Manager working right.

Then install “asar”

npm install --engine-strict asar

As a new user of npm I want to just marvel at the pretty colours for a second:

Oh NPM, you are gorgeous

Also an inbuilt check for vulnerabilities? NPM you are crushing it. Just crushing it. So how come so many apps have outdated dependencies?

Turns out the above command – while gorgeous – did not stick the “asar” command into the Windows PATH as intended. To do that I subsequently ran this command:

npm install -g asar

Then the command was found in the PATH:

Oh asar, I found you, I never knew I needed you 10 minutes ago

I was unclear as to how asar operates. Specifically, would this extract into the folder in a messy way? While in doubt I created a new folder “app” and copied my target “app.asar” file into that before extracting it:

mkdir app
copy app.asar app
cd app
asar e app.asar .
del app.asar

Note: that del is important because it stops you packing a massive file back into your modified version later.
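If you just want to peek inside an archive before committing to a full extract, asar can list the contents too:

asar list app.asar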

I was glad I did copy the target into a folder because extraction did this:

Files Extracted to the Current Directory

Had I done the extraction a directory higher, the target application folder would have been left in a mess.

With access to the source you can go looking for vulnerabilities in the code. I cannot really cover that for you since it will depend on your target.

Modifying the app

If you are halfway serious about finding vulnerabilities you will need to modify that source and incorporate your changes into the running target. You modify the code and then re-pack the “app.asar” file.

The best place to start is where the code begins its execution. That is in the “main.js” file within the “createWindow” function as shown:

Showing the createWindow function

I modified this to add a new debugging line as shown:

Added console.log debugging line
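The change amounts to a single line near the top of the function, roughly like this (the message text is arbitrary; pick something you will recognise later):

function createWindow() {
    console.log("[*] Injecting in here"); // proves the repacked app.asar is the build that is running
    // ...the original window creation code continues unchanged...
}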

The point of this was to tell me that my modified version was running instead of the original. I started a command prompt in the “resources” folder and then executed these commands:

move app.asar app-original.asar
asar pack app app.asar

Note: that move command was about ensuring I had the original “app.asar” on tap if I needed to revert back to it. The pack command would otherwise overwrite the original file.

It turns out that “console.log” writes to stdout, which means you can see the output if you run the target application from the command prompt. I prefer to create a .bat file so that I can view stdout and stderr easily in a temporary cmd window.

The contents of my bat file were:

target.exe
pause

The first line ran the target exe, and the second ensured that the cmd window would stay open in the event of the target being closed or crashing (until a key is pressed). I saved this in the folder with “target.exe” and called it “run.bat”. Instead of clicking on the exe I just double clicked on run.bat and got access to all of the debugging output.

Much success: the first line of output when I loaded the application was now:

[*] Injecting in here

You can now see if there are any client side controls you can disable.

Allow Burp Suite to proxy the application

You will want to be able to see the HTTP/HTTPS requests issued by your target application. To do that you need to add an Electron command-line argument. If we look again at the “createWindow” function you can see two examples already:

createWindow function showing commandLine.appendSwitch

At line number 58 I added this switch which configured localhost port 8080 as the HTTP and HTTPS proxy:

electron_1.app.commandLine.appendSwitch('proxy-server', '127.0.0.1:8080');

Again I had to pack the app folder into app.asar as shown in the previous section.

Electron loads Chromium and therefore you will need to install Burp’s CA certificate as per this URL:

Once you have done that you should see all the juicy HTTP/HTTPS traffic going into Burp.

Congratulations, you can now go after the server-side requests.

I have covered extracting the Electron application to let you see the code, how to modify the code, and how to middle the traffic. That is more than enough to get going with.

Happy hunting

Verifying Insecure SSL/TLS protocols are enabled

If a vulnerability scanner tells you that a website supports an insecure SSL/TLS protocol, it is still on you to verify that this is true. While it is becoming rarer, there are HTTPS services which will accept a connection over an insecure protocol but, when you issue an HTTP request over it, respond with a message like “Weak cryptography was in use!” instead of the real content.

I think this was an older IIS trait. But I am paranoid and in the habit of making damn sure that a service which offers, say, SSLv3 will actually handle HTTP requests over it. If there is a warning about poor cryptography then the chances are a user won’t be submitting passwords over it, which reduces the risk.

This blog post explains how to use openssl to connect using specific SSL/TLS protocols. It also provides a solution to a major gotcha of the process: the version of openssl that ships with your OS probably cannot negotiate insecure connections at all.

Using openssl to connect with an old protocol

The best way to do this, for me, is to use “openssl s_client” to connect to the service. The command supports the flags “-ssl2”, “-ssl3” and “-tls1” so that you can confirm each of those protocols respectively. Say you want to verify SSLv3 is enabled on a target, you would do this:

openssl s_client -connect <host>:<port> -ssl3 -quiet

If the server supports the protocol you will see this on the line above where your cursor lands:

verify return:1

Your cursor will be paused there because you have established an SSL/TLS connection and now it is waiting for you to enter data.

Issuing an HTTP request through that connection

Having established your connection you will want to issue an HTTP request. A simple GET request should be sufficient:

GET / HTTP/1.1
Host: <host>

This simple request will return the homepage of any HTTP server. The subtle part is that there are invisible newline characters. In truth the above can be expressed in a single line as:

GET / HTTP/1.1\r\nHost: <host>\r\n\r\n

Here each “\r\n” pair represents a newline. The request ends with “\r\n\r\n” because HTTP wants a blank line after all the headers are done to indicate that the server should start responding.

To make life easier we can use “printf” to pipe the GET request directly through our openssl connection:

printf "GET / HTTP/1.1\r\nHost: <host>\r\n\r\n" | openssl s_client -connect <host>:<port> -ssl3 -quiet

If the server responds without warning the user about insecure cryptography then you have just confirmed it as vulnerable.

Major Caveat

Most operating systems rightly ship with secured versions of “openssl”. This means that they have been compiled without support for these insecure protocols. The reason for doing this is to protect users from exploitation! This is good for everyone: if fewer clients can even talk SSLv3 then exploitation becomes far less likely.

To solve this problem I suggest you build your own copy of openssl with the insecure protocols enabled:

wget https://openssl.org/source/openssl-1.0.2k.tar.gz
tar -xvf openssl-1.0.2k.tar.gz
cd openssl-1.0.2k
./config --prefix=`pwd`/local --openssldir=/usr/lib/ssl enable-ssl2 enable-ssl3 enable-tls1 no-shared
make depend
make
make -i install

Doing it this way creates a local binary (in “openssl-1.0.2k/local/bin”) for you which is not then installed over the system’s maintained one. This keeps the everyday “openssl” command in your PATH secure while giving you the opportunity to confirm vulnerable services.

This solution is adapted from the Gist available here:

For ease I then chose to create an alias for the command as shown below:

alias "openssl-insecure"="/<path>/<to>/openssl-1.0.2k/local/bin/openssl"

I can now call the insecure version by typing “openssl-” and letting tab completion do the rest.

Happy insecure protocol verifications to you all.

Firefox Add-Ons that you actually need

In this blog post I will introduce you to a few Firefox Add-Ons which are useful when assessing the security of web applications. There are many, many more Add-ons that people swear by but these ones help me out a lot.

To test a web application you are going to need a web browser. That browser will need to be passed through a local proxy such as OWASP’s ZAP or, if you are on someone’s payroll, PortSwigger’s Burp Suite Pro. I suggest that you pick Firefox for this purpose and that you use a completely separate web browser for keeping up-to-date with Twitter, idling in Slack channels etc.

*STOP* In addition to the main point of this post let me park up in this lay by and drop an anecdote on you.

Many moons ago (~2006 I think) I was helping a newbie start their career. I told them to use one web browser for testing and another for their browsing. They didn’t listen to that advice. So when they uploaded their test data for archiving it included their proxy logs. As I QAed their report I opened up the proxy logs to check some details and spotted that they included a whole raft of personal browsing, and therefore their password, which they reused on everything at the time.

I didn’t overly abuse that privileged information before the point was made that you need to keep things separate. Shout out to newbie who still newbs, though they never write or visit anymore. I still love you. Not least because every newbie since has had this anecdote told to them and it has rounded out the point nicely.

Anecdote dropped. Let’s discuss the four Add-Ons that help me out loads.

Multi Account Containers

URL: https://addons.mozilla.org/en-GB/firefox/addon/multi-account-containers/

This is amazing. You can set up containers which are completely separate instances of Firefox. This means you can set up one tab to log in as an admin-level user and another tab to operate as a standard user:

Configuring multiple containers

These containers are marked by the colour you have assigned them and display the name on the far right:

Loading a site in two containers showing the different user levels

This is a game changer honestly. I feel like the way I worked before was in a cave with no light. Now I can line up access control checks with improved ease and more efficiently test complicated logic. Absolutely brilliant.

A shout out to Chris who showed this one to me.

Web Developer Toolbar

URL: https://addons.mozilla.org/en-GB/firefox/addon/web-developer/

I have used this for a very, very long time. It is useful if you want to quickly view all JavaScript files loaded in the current page:

Viewing all JavaScript Files Quickly

You can achieve a lot of other useful things with it. My need for this has diminished slightly as the built-in developer console (F12) has improved over the years. But I still find it useful for collecting all the JavaScript.

Cookie Quick Manager

URL – https://addons.mozilla.org/en-US/firefox/addon/cookie-quick-manager/

Technically you can manipulate cookies using the Web Developer toolbar. I just find this Add-On’s interface much easier to use for the job:

Using Cookie Manager to add a new cookie

When you just want to clear a cookie, or maybe try swapping a value with another user this is quick and simple.

User-Agent Switcher and Manager

URL – https://addons.mozilla.org/en-GB/firefox/addon/user-agent-string-switcher/

Sometimes an application responds differently to different User-Agent strings. You can use a Burp match and replace rule or you can use this add-on which has the benefit of a massive list of built in User-Agent strings.

You can also add a little bit to your User-Agent to differentiate your users like this:

Add String to User-Agent

By applying the setting to the container you can mark up which level of user made the request. Now that I do this I have found it absolutely invaluable in sorting out what I was doing.

When you view the requests in your local proxy you will instantly know which user level was making that particular request. This is vital particularly where apps issue lots of teeny tiny annoying requests per minute and it is otherwise easy to lose track of which browser container was saying what.

I hope that has helped you. If you have any other Add-ons you think are vital please sling me a comment or a Tweet. I’d like to look into more.

Regards

API testing with Swurg for Burp Suite

Swurg is a Burp Extender designed to make it easy to parse Swagger documentation and create baseline requests. This is a function that penetration testers need if they are being asked to test an API.

Our ideal pre-requisites would be:

A Postman collection with environments configured and ready-to-go valid baseline requests. Ideally set up with any necessary pre- or post-request scripts to ensure that authentication tokens are updated where necessary.

— Every penetration tester

Not everyone works that way so we often have to fall back to a Swagger JSON file. In the worst cases we get a PDF file with hundreds of pages of exposition, and from there we are punching uphill to even say hello to the target. That is a cost to the project and isn’t a great experience for your customers either.

If you are reading this post and you are somehow in charge of how you distribute API details to your customers, then I implore you NOT to rely on that massive PDF approach. This is for your sanity as much as for your customers’. Shorten your guides to explain how to authenticate and which API calls are required, in sequence, to achieve a specific workflow. Then, by providing living, breathing documentation which is generated from your code, you will rarely have to update the PDF. With the bonus that your documentation will be easier to interact with and accurate to the version of the code it was compiled against.

Anyway, you have come here to learn how to set up and start using Swurg.

A shout out and thank you to the creator Alexandre Teyar who saw a problem and fixed it. Not all heroes wear capes.

This extender is now in the Burp app store under the name “OpenAPI Parser” so you can install it the easy way.

But if you want to make any changes to the Extender or others in general then the next few sections will be useful.

Check that you have Java in your Path

Open a command prompt and type:

java --version

If you get a warning that the command cannot be located then you need to:

  1. Ensure that you have a version of the JDK installed.
  2. That the path to the /bin folder for that JDK is in the environment’s PATH variable.

Note: after you have added something to the PATH variable you need to load a new command prompt for the change to take effect. There is probably a neat way to bring altered environment variables into the current cmd.exe session but honestly? I have so rarely needed to set environment variables on Windows that I would not retain the command in memory anyway, so a restart suits me.

Installing Git Bash

I already had Git Bash installed but you might need it:

This has a binary installer which works fine and I have nothing more to add your honour.

Installing Gradle on Windows 10

There is a guide (link below) but it missed a few beats for Windows 10:

Step 1: download the latest binary-only release from here:

There is no installer for the binary release so you have to do things manually. You will have a zip file. The guide tells you to extract it to “c:\gradle”. Installing binaries in the root of c:\ has historically been exploitable in Windows, leading to local privilege escalation, so I get nervous when I see this in an installation guide!

Usually “C:\Program Files\gradle” would be the location for an application to be installed. In Windows 10 you are going to need admin privileges to write to either of these locations. It is generally assumed that basically all developers have this but that is often not the case.

Based on the installation steps you should be able to unzip anywhere you have write access such as “C:\Users\USERNAME\Desktop” or other location.

Having extracted the Zip you should add some environment variables:

  • GRADLE_HOME – set this to point to the folder you extracted. The location should be the parent folder of “/bin”.
  • JAVA_HOME – set this to point to the root folder of a JDK install. This is also going to be the parent folder of “/bin”.

Finally you need to add this to your PATH variable:

%GRADLE_HOME%\bin
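If you prefer the command line to clicking through the Environment Variables dialog, setx will persist the first two variables for you. The paths below are examples only; point them at wherever you unzipped Gradle and installed the JDK, and open a fresh cmd prompt afterwards:

setx GRADLE_HOME "C:\Users\USERNAME\Desktop\gradle-6.8"
setx JAVA_HOME "C:\Program Files\Java\jdk-11"

Editing PATH itself is still safest through the dialog, as setx is prone to truncating long values.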

If you ever upgrade to a newer version of Gradle (and since there is no installer I expect there is no automated update process) then you unzip the new version, change where GRADLE_HOME points, and your updated version will work.

Open yourself a new cmd prompt to ensure the env variables are applied. Type “gradle” and get your rewards:

Now let’s get back to Swurg!

Building Swurg

The repository has excellent install instructions here:

But to tie it all together in my single post I’ll replicate what I needed to do.

I used git bash to clone the repository down and then gradle to build the jar:

git clone https://github.com/AresS31/swurg
cd swurg
gradle fatJar

That worked an absolute treat:

That process completes and leaves you a fresh new jar file in the “\build\libs” folder:

Installing Swurg in Burp

Use the “Extender” -> “Add” functionality to select the “swurg-all.jar”:

How to install a plugin manually

Using Swurg

You should now have a new tab and the opportunity to load a swagger file:

We have Swurg working away merrily here

If you load a valid swagger file this will create a full list of endpoints that you can explore.

Right click on an endpoint and you have an excellent place to start launching things from:

Sending things to Burp tabs

That is definitely enough to get going with. In my case I had replaced my target host with localhost to keep things anonymous as to what I was testing.

This worked well for me and was probably worth the setup. I prefer this to Swagger-EZ, which I had been using in the past.

If we are honest what we all want is a properly configured Postman collection which allows you to have fully configurable environment variables and run pre/post scripts for things such as taking the current Bearer token automatically into all subsequent requests.

In lieu of that, this is a reasonable starting point which is embedded where you want it, right inside Burp Suite. If I were to make any changes to the Extender I would probably want an option to globally set the host name and base folder locations. One of those “if I ever get the time” projects.

Hope this helps someone.

Preload or GTFO; Middling users over TCP 443.

Your website only has TCP 443 open and has a bulletproof TLS configuration. I hear you scream that I cannot middle your users to exploit them! On the surface of it you are correct. Let me lay out some basics, explain how we got here, and then show you that you are incorrect. We can middle your users (but it is unlikely).

Laying the basics about HTTP and HTTPS

The default port of the Internet is TCP 80, which is where requests prefixed with “http://” will go. This is a plain-text protocol and offers neither confidentiality nor integrity of data being sent between the client and server.

The default port for the “https://” protocol is TCP 443. This is an encrypted protocol with the “s” meaning “secure”.

As the Internet matured it became apparent that pretty much every request needed to be secured. An attacker using man-in-the-middle techniques can easily subvert plain-text communication channels. Any personal information being exchanged would be theirs to steal. They would also be able to alter server replies to serve either phishing or malware payloads straight into their victim’s browser.

This opened up a front in the cyber war to force encryption for every connection.

Question: What … all of them?

Answer: yes… all of them.


Redirect to secure!

A common strategy has been to leave both TCP 80 and 443 open but to configure a redirect from 80 to 443. Any request over plain-text (http://) is immediately redirected to the secure site (https://).
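For reference, that usually looks something like the following in an nginx config (nginx is only the example here; IIS and Apache have equivalent rewrite rules):

server {
    listen 80;
    server_name www.example.com;
    # bounce every plain-text request over to the HTTPS site
    return 301 https://$host$request_uri;
}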

The problem with this strategy is that the victim’s web browser will issue a plain-text request. If that attacker was there when they did this, then they could still compromise the victim. It only takes a single plain-text request and response to enable them to do so.

Only offer secure

To get around this savvy administrators make no compromises and simply disable TCP port 80. If a web browser attempts an “http://” request the port simply is not open. It cannot establish a TCP session and so will not send the plain-text HTTP request.

The downside of this is that the user might assume the target application is not online. They would go and try and find another domain to buy whatever it was they wanted. This is why redirecting to secure has been such a pervasive strategy. Vendors simply do not want to lose out on important traffic which can drive this quarter’s sales chart.

What is this HTTP Strict Transport Security (HSTS) stuff?

You can learn more about HSTS here:

My understanding is that HSTS was created to reduce the number of plain-text HTTP requests being issued. There are two modes of operation:

  1. A URL is added to a preload list which is then available to modern web browsers.
  2. An HTTP header (Strict-Transport-Security) is added to server responses which tells the web browser to redirect all “http://” to “https://” before issuing the request.

When a user types a URL into the address bar and hits enter the browser will check to see if the redirection must happen. Where required the redirect happens in memory on the user’s computer BEFORE the TCP connection is established.

For strategy 1. the target site is in the preload list. A well behaved web browser will never issue a single “http://” request to the target site. The problem of middling the connection has been successfully resolved.

For strategy 2. we are arguably no better than the server redirecting from “http://” to “https://“. A single plain-text request will be issued. If the attacker is middling at that point they can alter the response as desired to exploit users.

However, strategy 2. is likely to lead to fewer plain-text requests overall since the browser will not request via “http://” until after an expiry date. Relying on “redirect to secure” alone will result in a single plain-text request per visit the user makes to the site. This increases the number of opportunities to middle the victim’s connection.

Gap Analysis

The reason for writing this blog was because I had an interesting conversation with a customer. They enabled only TCP 443 (https://). They saw this as sufficient and did not want to enable HSTS as recommended in my report. I was challenged to show an exploit route that could work or they would not bother.

Fortunately the edge case I am about to explain has been public knowledge for a long time. So I didn’t have to think too hard to add it in. I am just adding my voice to bounce that beach ball up again for visibility.

Exploit Steps

The exploit route is like this:

  1. An attacker must be able to middle the victim’s traffic.
    • Chances are this is on the same network as the victim.
    • For this reason mass exploitation of users is unlikely and the risk is small as a result.
    • Let’s proceed with the steps assuming that this attacker is ABSOLUTELY DETERMINED to exploit this one person.
  2. An attacker crafts a link and sends it to the victim to click on.
    • That link is: http://target:443.
  3. The victim clicks on the link and their browser dutifully establishes a TCP connection to port 443. Because the browser sees a port it can talk to, it fires off a plain-text “http://” request.
  4. The server then rejects the request because it is expecting “https://”. However, the damage has already been done. Our attacker had the single plain-text request that they needed for exploitation to occur.

The following screenshot shows the Wireshark capture when this example URL was requested:

URL: http://www.cornerpirate.com:443
DNS lookup and then HTTP request being captured

The only requirement for this to work is that the targeted TCP port is open. It is most likely that 443 is used but you can do the same thing with any open TCP port.

What is the solution?

The optimal solution is to enable HSTS via the preload method, even if your website only has HTTPS enabled.

Adding a site to the preload list can be done here:

All other solutions leave a victim’s web browser issuing at least a single HTTP request.

Unfortunately it takes time for a site to be added to the preload list. Therefore at the same time you should also enable the “Strict-Transport-Security” header as described:
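For reference, the header itself looks like this. The max-age value below (one year, in seconds) is only an example, and the preload directive should only be added once you are ready to submit the site to the list:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload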

That is the famous belt and braces manoeuvre to reduce the chances of the world seeing your butt.

And you should definitely do as I say and not as I do:

Hope that helps

Basic code review tools for Ruby

This blog post documents how to get started analysing a Ruby code base for trivial security vulnerabilities. Particularly when, like me, you have absolutely no ability in Ruby. If you are being asked to do an actual code review then I feel sorry for you, dear reader. This will help you get started, but it cannot replace having developed something sizeable within the target language, and elbow grease.

The sum total of my Ruby experience was my entirely unpopular module for Metasploit a few years ago called “git_enum“. This is a post-exploitation module which seeks to rob any stored Git passwords or authentication tokens from a user’s home folder. I wanted to merge it into MSF but I am locked in anxiety about how awful it was to write, and assume it would be laughed out of town if I dared try to contribute it!

I digress. My point was that I am not going to be getting scheduled on any Ruby source code reviews any time soon. The syntax is just alien enough to successfully spurn my interest.

This post has been prompted by me having access to source code during an application test. This is a move from a black-box to a white-box methodology so that defence-in-depth recommendations can be made. There is no assumption that I am reading everything line by line. In saying that, when I have access to source code I like to leverage automation where possible to point toward weaknesses.

Overview of the process

  1. Obtain the source and save it locally
  2. Identify Static Code analysis tools for the target language
  3. Identify tools to check dependencies for known vulnerabilities

I don’t need to say much about 1. so I will move right on to discussing 2. and 3. below.

Static Code Analysis Tools for Ruby

There is almost always some very expensive commercial tool for doing automated static code analysis. They are probably very good at what they do. However, they always have eye-watering licence fees and I have never actually had the privilege of using one to find out!

As this is not a full code review you will likely have no budget and so you need to find open source projects that support your target language. A great place to start is this URL from OWASP:

I picked two tools from that list which were open source and which seemed active within the last 2 years of development:

Both were easy to install and use within a Kali host. The other tools may be just as good, but two static analysers were enough for me.

Brakeman installation and usage

gem install brakeman
brakeman -o brakeman_report.html /path/to/rails/application

Dawnscanner installation and usage

gem install dawnscanner 
dawn --file dawn_report.html --html /path/to/rails/application

Dependency Scanning Tool for Ruby

A dependency is an extension from the core language which has been made by a project and then made available for others to use. Most applications are made using dependencies because they save development time and therefore cost.

The downside of using dependencies is that they are shared by hundreds, thousands, or millions of other applications. They therefore get scrutinised regularly, and a vulnerable dependency can mean a bad day for many sites at the same time. One thing you have to stay on top of is the version of dependencies in use, and that is why it is an important check to make even if you are not doing a full code review.

The best dependency scanner out there is OWASP’s own Dependency-Check. This tool is getting better every time I use it. It integrates with more dependency management formats all the time. As per the URL below:

This is capable of doing Ruby but to do so it uses “bundler-audit“. For this one I went straight to Bundler-Audit.

Bundler-Audit Installation and Usage

gem install bundler-audit
cd /path/to/rails/application # folder where the Gemfile.lock file is.
bundler-audit check

I would include one vulnerability in my report for the outdated dependencies, summarising in a table the known vulnerabilities and the CVSS risk ratings taken from the CVE references that bundler-audit provides. If there are hundreds of known vulnerabilities you should prioritise and summarise further.

That is it for this blog post. You have to interpret the results yourselves.

Hope that helps.

Persistent SSH Sessions

If you win the lottery and start a job working as a penetration tester the chances are you will need to learn a couple of vital lessons sharpish. One that I like to drill into people is about SSH sessions that persist even if your client connection dies. A complete rookie mistake – that we all make – is to lose data when our SSH connection dies. Maybe the Wi-Fi disconnects or you close your laptop to go for lunch? Who knows.

Don’t blame yourself. The chances are you partly educated yourself and you were using either a Linux base machine or a VM. In that scenario your terminal lives as long as you want it to with no questions asked.

Now that you are on someone’s payroll the chances are you have a fancy “penetration testing lab” that you have to send connections through for legal reasons. While I like that I won’t lose my liberty it does introduce this complexity into our lives.

Tmux

I am a relative noob to Tmux but it really seems to be worth the investment of time.

If I had a time machine I would get future me who understands Tmux completely to come and teach me. Maybe in a fancy silver car with a… I am gonna say it… *pffft*. Ok ok, calm down. In a fancy silver car with a tmux-capacitor! I know some of you liked that pun and that means you are as bad as me.

The absolute basics are these three commands:

tmux new -s <session_name>    # used to establish a new session.
tmux new -s customerA         # I name a session after the project for ease.
tmux ls                       # list the sessions that exist
tmux attach -t <session_name> # used to attach to your previous session
tmux attach -t customerA      # attaching back to the session created last time.

If you create a new session you can then kill your client SSH connection by disconnecting from Wi-Fi or whatever. On reconnecting, you attach to the session again and you have lost nothing (assuming the server has remained online and the issue was client side only).
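You do not actually need to drop the Wi-Fi to test this. The clean way out of a session is to detach, then attach again later:

# from inside the session press Ctrl+b, release, then press d to detach
tmux detach              # or detach explicitly from the command line inside the session
tmux attach -t customerA # pick the session back up whenever you reconnect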

For the purposes of this tutorial you have done all you need to do to prevent yourself losing work. Go you.

However, Tmux is capable of lots more, such as splitting an SSH session horizontally and/or vertically when you want to show two processes at once in a screenshot. Or what about having multiple “windows” in a single SSH session and a relatively easy way to move between those windows? Instead of having additional instances of PuTTY on Windows, or tabs in “MTPuTTY”, you can do everything over a single SSH session inside of Tmux.
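The default key bindings for those are worth memorising (everything starts with the Ctrl+b prefix):

Ctrl+b %    # split the current pane side by side
Ctrl+b "    # split the current pane top and bottom
Ctrl+b o    # hop between panes
Ctrl+b c    # create a new window
Ctrl+b n    # next window
Ctrl+b p    # previous window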

There is a full cheat sheet here.

https://tmuxcheatsheet.com/

Totally worth the learning curve.

Getting started with iOS testing

Jailbreak a device (At your own risk)

Disclaimer: I would never jailbreak a device that was going to carry my personal information. You should not either. It is absolutely at your own risk.

This blog post is about getting started with assessing iOS apps. I had not done this in a few years, so these are notes to bridge the past with the modern, which may be of use to you.

There is currently a stable root exploit called “checkra1n“. This works at the bootloader level and, so long as you prevent the handset from rebooting, you will keep your jailbreak. There are stable exploitation tools for Linux and now for Windows.

I use Windows as a host OS. I do this for many reasons but the simplest one is that Linux works better in a VM than Windows does, in my experience. I tried checkra1n in a Kali VM with the phone passed over USB directly to the VM. This was a dead end. The exploit process looked like it was working but it never completed; do not enter this cul-de-sac.

To get around that I could have tried the Windows exploit tools. But I opted to use “bootra1n“. This is a bootable USB Linux distro which includes checkra1n, and it worked exactly as advertised.

Install the app via the App Store

  • Set up a test account without any of your real personal info.
  • Sign in to the App Store, and then install your target app on the device.

There are other ways to install apps including “3uTools” (see section later). For me this did not work as my target app was not available in the app store they maintain. If your target is available for install then you will find an easier process where you don’t need to dump the IPA file as described in the next section.

Dump IPA file from handset

  • On the jailbroken handset:
    • Open Cydia and install “frida-server” as per this guide.
  • Inside a Kali VM (I used a VM, you can go barebones; the process did not work on Windows), install frida:

pip install frida-tools

  • Inside Kali, install “frida-ios-dump” and connect to the handset over USB:

apt-get install libusbmuxd-tools
iproxy 2222 22 &  # forwards the handset's SSH over USB; iproxy ships with libusbmuxd-tools
ssh -p 2222 root@localhost # leave yourself connected to this session
git clone https://github.com/AloneMonkey/frida-ios-dump.git
cd frida-ios-dump
pip install -r requirements.txt

Now all you need to do is run “dump.py” against your target as shown:

python3 dump.py <target_app_name>

To obtain the correct target app name use “frida-ps” as shown:

frida-ps -Uai

Getting MobSF The Quick Way

MobSF is an excellent tool for gathering some low hanging fruit. As a minimum I would advise throwing every IPA (and Android APK) through this for static analysis. It does a good job of finding strings which may be of use, as well as analysing permissions and other basics. This post is about getting you started and MobSF will be an excellent place to end this post.

Install docker as per this guide. Then after you have that up and running you can get access to MobSF using this:

docker pull opensecurity/mobile-security-framework-mobsf
docker run opensecurity/mobile-security-framework-mobsf

This will start an HTTP listener bound to 0.0.0.0 which is great. But you need to know what IP address Docker just gave you. First list your running containers:

docker ps

Then use docker inspect with a grep to get that for you:

docker inspect <container_id> | grep IPAddress

Fire up your web browser at http://YOUR_IP:8000/ and you can now upload the IPA file; it will give you that static analysis juice.
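Alternatively you can publish the port when you start the container, skip the IP hunt entirely, and browse to http://localhost:8000/ instead. A sketch:

docker run -it -p 8000:8000 opensecurity/mobile-security-framework-mobsf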

3uTools

This is a beast which gets around having to install iTunes; a bit of software I have a ~15-year history with and which I frequently refer to as a “virus”. It is simply not possible for iTunes to be as shit as it is/was. Therefore, it must have been maliciously generated.

3uTools allowing you to dodge the virus that is iTunes

A lot (but not ALL) of apps from the app store are available for install using this. You will still need to supply legit app store creds to use that feature. If you can install using 3uTools then you get a super easy way to export the IPA file. But it only works on apps installed via 3uTools. In my case the app I needed to examine was in the app store, but not in the 3uTools equivalent.

That’s it from me. I am not going to rehash how to test an iOS app here as there are excellent resources explaining how to do that.

Your next steps would be to Google the heck out of these things:

Best of luck on your road to pwning iOS.
