Posts Tagged ‘security’


Anonymization is the process of removing all sensitive data from production data before it is used in testing. The result should be test data that is anonymous and can be used without security or privacy risks. It has to be done very carefully, and the structure of the production data must be understood well before it can succeed. In this blog entry I show how difficult it actually is and how it can fail.

The software under test is a Twitter-like website where users can send public and private messages to each other. Our simplified forum has the following three database tables:

  • User profile – a numerical user identifier (UID), username, password hash, and e-mail address
  • Message – a message ID, the submitter's UID, the content as text, and a timestamp
  • Private message – a private message ID, the submitter's UID, the receiver's UID, the content, and a timestamp

The site is already in production and open, so anyone can check what others have written in the public messages. The profiles are public as well, and the UID is used to identify them.

Let's start anonymizing this data. If we begin with the usernames and message contents, is that really enough to make the data anonymous? Definitely not. If the original forum is public, anyone can still check private information such as who has messaged whom. The numerical UID is still the same, so we have to change that as well. And if we want to keep the statistics correct, we can't just assign random numbers to messages and user profiles independently. E.g. if the account "Teme" had UID 1, then to maintain proper statistics we have to convert every occurrence of UID 1 to the same new value, e.g. 234.

It is still very easy to discover that UID 1 was changed to 234. The features that reveal this are the timestamps and the number of messages. So we also have to change all the timestamps, and the new timestamps must change the order of the messages to keep things anonymous. We have to change the number of messages per profile as well.
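
The consistent remapping described above can be sketched like this; the table layout and helper names are my own, purely for illustration:

```python
import random

def build_uid_mapping(uids):
    """Assign every real UID a fresh pseudonymous UID, one-to-one."""
    new_ids = random.sample(range(100, 1000), len(uids))
    return dict(zip(uids, new_ids))

def pseudonymize(rows, uid_fields, mapping):
    """Replace every UID field in each row using the same mapping."""
    return [{k: (mapping[v] if k in uid_fields else v) for k, v in row.items()}
            for row in rows]

mapping = build_uid_mapping([1, 2, 3])
messages = [{"id": 10, "uid": 1, "content": "hello"},
            {"id": 11, "uid": 1, "content": "world"}]
anon = pseudonymize(messages, {"uid"}, mapping)
# Both messages of UID 1 now carry the same new UID, so per-user message
# counts stay correct -- which is exactly what also keeps the profile
# re-identifiable from those counts.
```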

Even after these changes we can, in some cases, still find the real profile in the test data. A small piece of external information, such as "I know that this person has messaged that person", can help an attacker find the real profile.

Instead of anonymizing the production data, the test data should be based on a model of the production data. For example, the test data should have the same number of users and messages as production. On the other hand, it usually doesn't matter whether the users have exactly the same distribution of messages.

So instead of using production data like that, generate your own data and work out which properties matter most for your testing. Good test data increases test coverage and the chance of finding bugs.
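
As a minimal sketch of generating data from such a model, the code below preserves only the volumes; all field formats and names are invented:

```python
import random
import string
import time

def rand_text(n):
    """Random lowercase string, standing in for any generated field."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

def generate_test_data(num_users, num_messages):
    """Create users and messages that match production only in counts."""
    users = [{"uid": uid,
              "username": rand_text(8),
              "email": rand_text(8) + "@example.com"}
             for uid in range(1, num_users + 1)]
    now = int(time.time())
    messages = [{"id": i,
                 "uid": random.randint(1, num_users),
                 "content": rand_text(40),
                 "timestamp": now - random.randint(0, 30 * 86400)}
                for i in range(1, num_messages + 1)]
    return users, messages

users, messages = generate_test_data(100, 1000)
```

Since nothing here is derived from real accounts, there is no production record to re-identify.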

See also Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization by Paul Ohm


Security is an important part of modern systems, but as testers we usually think only about application security. Unfortunately, the reality is that the easiest way to intrude into a system is to cheat people. This is a story about me and Santa Claus.

This year I won't be without Christmas presents! In fact, I managed to find out how to change my present allocation to 10% of Santa's budget. It all started with an e-mail I got. I investigated its headers and found the domain santa-claus-intelligence-agency.com. (I'll call the agency SCIA.) This got me really excited, so I started to dig around.

The first thing I found was their beta site at public-beta.santa-claus-intelligence-agency.com. Some debugging options were still on, so the web application revealed the structure of the file system. The most interesting path was ../snippets/handlefiles.php, so I started to investigate what it contained. It seemed the SCIA developers had poor knowledge of version control: there was a backup file handlefiles.php~. I read it, and it mentioned a browser-accessible directory called ../uploads/. That directory was misconfigured and gave me a directory listing. I decided to check every file in it, and the most interesting one was birthdays.xls.

I opened birthdays.xls, and it was a real treasure! It listed all the SCIA elves, their birthdays, and their e-mail addresses, which made me much wiser about their internal structure. I tried to google more information about them but found only a couple of blogs – one about fishing, another about snow sports – and a couple of LinkedIn accounts. The birthday of the "fisher elf" was on the 8th of June, and I had found the file in May.

As the 8th of June approached, I bought the domain fishing-stuff.com and set up a shopping site. Then on the 7th of June I sent mail to the fisher elf:

Congratulations! Your birthday is near, and we at Fishing-Stuff.com want to give you a present: a 40% discount on all items you order during this week.

The hook was set! On that very same day it happened: someone from secret-nat.santa-claus-intelligence-agency.com logged in. Now I had some additional information about their network.

First I decided to check the names of the IPs they had, and found webmail and proxy. I pointed my browser at the webmail first and realised that their mail application was vulnerable to an open redirect! This means that if I can get a user to click a specific link, he can end up on ANY website I want him to end up on. I continued investigating and read the LinkedIn profiles carefully. From them I managed to find the code names of their projects.

The Lady of Chaos has given me imagination, so I started to think about how to get someone to click my malicious link and submit his e-mail credentials to me. The first step was to create a "login page" that looked exactly like the one used by their webmail. The only difference was that it stored the username and password to a file I could read, and then sent the user back to the original e-mail application.

The mail was:

Hi, higher official elf Jack Elvish. Our new project XyzEdfg requires your input. Could you please check our site https://webmail.santa-claus-intelligence-agency.com/index.php?redirect=http://10.128.34.53/index.php.

I used the e-mail address of Jone Lesselvish, which was Jone.Lesselvish@santa-claus-intelligence-agency.com. His LinkedIn profile mentioned XyzEdfg, so I thought it would be safe to use. E-mail is an insecure protocol in which you can use any sender name you want: it does not ask for passwords, it just trusts what it is given.
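
As a sketch of how little the protocol checks, the message below simply declares whatever From address the sender likes (the addresses come from the story; nothing here is verified by SMTP itself):

```python
from email.message import EmailMessage

# SMTP never authenticates the From header; the sender just declares it.
msg = EmailMessage()
msg["From"] = "Jone.Lesselvish@santa-claus-intelligence-agency.com"  # forged
msg["To"] = "jack.elvish@santa-claus-intelligence-agency.com"
msg["Subject"] = "Project XyzEdfg needs your input"
msg.set_content("Could you please check our site?")

# A real sender would hand this to smtplib.SMTP(...).send_message(msg);
# absent SPF/DKIM/DMARC checks on the receiving side, the forged From
# address is shown to the victim as-is.
```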

I only had to wait a few hours to get Jack Elvish's username and password. The next step was easy: I logged into his email and read about the kinds of systems they had. The most interesting was the "accounting" system, which was part of the internal network. To intrude there, I had to find some way into their internal networks – an error in the firewalls or another insecure application. I decided to take the risk and investigate the computer named proxy. Port 8090 had an open HTTP proxy, and it was also able to access the internal network.

At this point I decided it was too risky to attack right away, so I waited until the end of September: if they had noticed me earlier, they couldn't connect my new attempts to my older attacks. At the end of September I finally logged in to Jack Elvish's e-mails. The password was still working! Then I tried to log in to the accounting system. It asked for an e-mail address, so I used Jack's address and tried the same password as for the e-mail. Oh well, it didn't work. So I used the "I forgot the password" functionality: just the answer to the question "What's the month of your birthday?" And Jack had a new password, which I also knew right away.

I had full access to the accounting system, and it held all kinds of secrets! SCIA had done a wonderful job of tracking people; even the NSA couldn't do the same. Now I know all YOUR secrets! I was able to tune everything. After my modifications I had cured cancer and AIDS and brokered peace between Palestine and Israel. And in the end, my present allocation was 10%!

Insecurity is not just about application security; it is more or less a human aspect. Misconfigured servers, small security issues, fooling people with different techniques, and insecure protocols – together they made this task simple. No matter what kind of testing we do and where we work, we should always ask: "Can someone misuse this? Can he break the confidentiality, integrity or availability of the system?" If the answer is "Yes" at any point, there is a problem and the risks should be estimated. If the service has a public interface, rather overestimate the risks than underestimate them.

This was originally written for Qentinel's internal newsletter.


Yesterday something went wrong – below is a screenshot pointing out the problem. Read the rest of the article for more details.

The screenshot showed a very common mistake – an open redirect. It's a very good tool for phishing attacks: if the attacker has any way to send e-mail to a registered user, he can attack that user. For example, how many would have noticed the redirection bug after getting this e-mail message:

From: root@trusted-mail.net

Subject: E-mail validation

Please validate that your e-mail address is still valid by logging in to Trusted Mail. We are not requesting any passwords by e-mail, and as you can see from the link below, it points to Trusted-mail.net. Always make sure that the address bar shows the correct address.

So please, log in with this validation link:

https://trusted-mail.net/horde/imp/login.php?url=%68%74%74%70%73%3A%2F%2F%65%76%69%6C%68%61%78%6F%72%2E%63%6F%6D%2F%68%61%78%2F%74%72%75%73%74%65%64%2D%6D%61%69%6C%2F

Sincerely,

Mail admin of Trusted-Mail.com

That mail looks valid – doesn't it? When redirection goes wrong, attacking the service's users is simple. It takes just a few minutes to copy the original login page, change the form processing so that submitted data is stored to a local file or database, and then redirect back to the original site. It's very difficult to notice that the target site is actually a new site and not the original one. At least I mistype my passwords every now and then, and when the system says the password is wrong, I type it a bit more carefully – without checking where my browser actually is.
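
Decoding the percent-encoded `url` parameter from the link above shows where the victim really ends up:

```python
from urllib.parse import unquote

# The value of the ?url= parameter from the phishing link.
encoded = ("%68%74%74%70%73%3A%2F%2F%65%76%69%6C%68%61%78%6F%72%2E%63%6F%6D"
           "%2F%68%61%78%2F%74%72%75%73%74%65%64%2D%6D%61%69%6C%2F")
print(unquote(encoded))
# → https://evilhaxor.com/hax/trusted-mail/
```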

So the failures are actually at many levels. The badly written web application does not validate its input correctly; I have written about that before in the post "Phun with redirects". But the second failure is more fundamental: it's in the e-mail protocol. The From field is not validated, so an attacker can give any From address and the receiver won't notice.
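
A common fix for the application-side failure is to accept only relative paths or an explicit allowlist of hosts as redirect targets. A sketch (the helper name and allowlist are mine; the check runs after the server has URL-decoded the parameter):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"trusted-mail.net"}

def safe_redirect_target(url):
    """Allow only local paths or https links to known hosts; else go home."""
    parsed = urlparse(url)
    if not parsed.scheme and not parsed.netloc:
        return url                      # relative path such as /inbox
    if parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS:
        return url
    return "/"                          # safe fallback, never off-site

print(safe_redirect_target("/inbox"))                      # → /inbox
print(safe_redirect_target("https://evilhaxor.com/hax/"))  # → /
```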

The original SMTP protocol was defined in RFC 821 in 1982. Back then there was no need for security, and now it would simply be too expensive to add a new security layer, because everything has been implemented on top of the old, insecure protocol. Maybe in my next bug report I should state: "The root cause of the bug is the Internet and its protocols."

Session handling gone wild

Posted: 11.7.2011 in Security

In web application (security) testing I like to investigate how sessions and cookies are handled. I've noticed some common error patterns that wreck the security of the session. Here's a brief list of the most common issues I've seen. There are more issues than these five, but these are the ones I run into most often.

The randomness and length of the session id are very important. If the randomness is poor or the length is too short, an attacker can guess a session id that is already in use, so I usually check this just in case. Luckily most frameworks generate the session id internally, and a developer cannot weaken it too much. But I've seen some coders create their own session handling with ids that are too short; in those cases different users end up with the same session id and sessions get hijacked.
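
For instance, in Python a sufficiently long and random session id is one line with the standard `secrets` module:

```python
import secrets

# 32 random bytes ≈ 256 bits of entropy, far beyond practical guessing;
# compare that to a homemade 4-character id with only ~20 bits.
session_id = secrets.token_urlsafe(32)
another_id = secrets.token_urlsafe(32)
print(session_id)  # 43 url-safe characters, different on every call
```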

When the authentication state of the user changes, it is very important to change the session id too. Usually a session id is generated when the user first enters the pages without authenticating, and the same session id stays even after logging in. This behavior is not good. In a correctly developed application, the first session id is given when the user enters the web pages; when the user logs in, that session id is invalidated and removed, and during the login process the user receives a new session id that differs from the original. When the user logs out, the old session id is invalidated and a new one is created.
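
That life cycle can be sketched with a toy in-memory store (all the names here are mine):

```python
import secrets

class SessionStore:
    """Toy store showing session-id rotation on login and logout."""
    def __init__(self):
        self.sessions = {}                # session id -> user (None = anonymous)

    def create(self, user=None):
        sid = secrets.token_urlsafe(32)
        self.sessions[sid] = user
        return sid

    def login(self, old_sid, user):
        self.sessions.pop(old_sid, None)  # invalidate the pre-auth id
        return self.create(user)          # issue a fresh authenticated id

    def logout(self, old_sid):
        self.sessions.pop(old_sid, None)  # invalidate the authenticated id
        return self.create(None)          # back to a fresh anonymous id

store = SessionStore()
anon_sid = store.create()                 # id given on first page view
auth_sid = store.login(anon_sid, "hilma") # id changes at login
```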

Too often I've seen a single cookie used to authenticate the user. That single cookie contains some kind of reference to the user id and password, and it never changes. The application may have a session id that is properly secured – it changes as it should, it's random enough, and so on – but there is also a cookie like XYZ_USERDETAILS containing some random-looking string. When the user logs out, the session id is invalidated and XYZ_USERDETAILS is removed; but when the user logs in again, XYZ_USERDETAILS becomes the same as it was after the previous login, while the session id differs. With that kind of vulnerability, an attacker who gets hold of XYZ_USERDETAILS can use it to access the target's information at any time. There shouldn't be any static information that identifies the user.

A web application should always make sure it accepts only session ids that it generated itself. I've seen many PHP applications that accept any session id the browser sends, as long as the syntax is correct: instead of generating a new session id, the application accepts whatever the browser claims its session is and uses that to store session information. In those cases, if the attacker gains Javascript access (through XSS) or local access, he can set a permanent session id for the browser that never changes. (This assumes the session id is not changed when the user's privileges change and the web application never intentionally sets the session id to empty.) So if the browser claims its session id is "aaaaaaaaaaa", the web application must check its internal systems to verify that "aaaaaaaaaaa" was generated by it and is a valid session id. If it is not, the system should replace it with a valid one.
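
The check itself is tiny: look the id up in the server-side store and, on a miss, replace it rather than adopt it. A sketch with names of my own choosing:

```python
import secrets

known_sessions = {}                      # ids this server has generated

def resolve_session(client_sid):
    """Use the browser's id only if this server created it; else issue a new one."""
    if client_sid in known_sessions:
        return client_sid
    fresh = secrets.token_urlsafe(32)    # never adopt an id the browser invented
    known_sessions[fresh] = {}
    return fresh

sid = resolve_session("aaaaaaaaaaa")     # unknown id is rejected and replaced
```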

Session ids (and all other cookies) must be protected as well as possible, and there are two flags for that. If the web application uses only the https protocol, the session id must be protected with the secure flag; the browser won't send a cookie with that flag over an http connection. If Javascript is not supposed to read the cookie, the httponly flag should be enabled. From a cookie security point of view, the same domain should not serve both http and https: mixing the two confuses the end user and prevents good cookie security. See e.g. Facebook for a confusing system – if you don't look carefully, you might end up on http even when you expect https.
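
For example, with Python's standard `http.cookies` the two flags end up in the Set-Cookie header like this:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["SESSIONID"] = "abc123"
cookie["SESSIONID"]["secure"] = True    # never sent over plain http
cookie["SESSIONID"]["httponly"] = True  # not readable from Javascript
header = cookie["SESSIONID"].OutputString()
print(header)  # both Secure and HttpOnly appear in the header value
```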

As testers we should be familiar with the basics of security and investigate what the application does below the user interface.


Usually we know which bugs should be fixed, but that's not always clear to other people on the project. This problem usually appears with security issues: project and product managers often don't see how a security issue or risk can be so destructive that it should be a high-priority issue. Most proofs of concept for security issues are not real exploits but "Hello world" types, because creating a good proof of concept simply costs too much to be justified. Instead of writing a POC, one should know how to tell a story about the problem – and that's what I nowadays usually do when I submit a security bug.

Every story has some common components, and so must every bug report that tells a story. They are:

  • Technical details at the beginning. The developer must be able to find the problem easily.
  • Actors. I always give some kind of description of who they are. There is almost always one villain and one victim, but the villain can also be an innocent person who just tries to get her work done more quickly.
  • Motivation. The villain has to have a good motivation to exploit the bug.
  • Result. It should hurt someone, kill the business, or reveal sensitive data.

For example:

Many PHP applications have weak session handling: if the attacker can supply his own value for PHPSESSID, it is accepted and used by the application. This can be exploited through a cross-site scripting bug or through local access to the computer, tampering directly with Firefox's cookie storage. The tested application – WSSAHV.COM, a women's social site about domestic violence – has weak session handling, but the tester didn't find an XSS. So the report, written as a story, would be:

(Place the technical details before the story, and also describe how the session should work.)

There is a family whose members are Hilma (wife) and Onni (husband). Onni is jealous. At some point he finds out that Hilma is using the site WSSAHV.COM. Of course he suspects that she's cheating or telling lies about him there. He doesn't want to confront her directly and force her to give up the password, so he starts hacking and notices the weak session handling.

He has administrator rights to their shared Windows desktop, so he also has access to Hilma's personal files. He goes to the Firefox profile files and opens cookies.sqlite with sqlite. There he inserts a new cookie for WSSAHV.COM: domain 'WSSAHV.COM', path '/', name 'PHPSESSID' and value 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', with the expiration set to July 15th, 2020.

The next time Hilma logs in to WSSAHV.COM, Onni has access to her profile from a different computer: he just sets PHPSESSID on his machine to 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'. Onni can alter the profile, read the private messages, see the hidden areas Hilma has access to, and so on. This violates the privacy and the purpose of the site, and it also puts its users in danger.
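
The cookie insertion can be sketched with Python's sqlite3. The column set below is a simplified subset of Firefox's real moz_cookies schema, and the in-memory database stands in for the cookies.sqlite file:

```python
import sqlite3

# Simplified subset of Firefox's moz_cookies table, for illustration only.
db = sqlite3.connect(":memory:")        # the real target is cookies.sqlite
db.execute("CREATE TABLE moz_cookies "
           "(host TEXT, path TEXT, name TEXT, value TEXT, expiry INTEGER)")
db.execute("INSERT INTO moz_cookies VALUES (?, ?, ?, ?, ?)",
           ("WSSAHV.COM", "/", "PHPSESSID", "a" * 32, 1594771200))  # 2020-07-15
db.commit()

row = db.execute("SELECT name, value FROM moz_cookies").fetchone()
```

Because the application accepts any well-formed PHPSESSID, the fixed value planted here becomes a permanent key to the victim's session.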


Test automation tools are much like a hunting gun: there are legitimate uses for them, but they can also be used to do nasty things. This post is about Skype, but the same kind of issues can exist in any other application.

Let's imagine that I'm an evil h4x0r and I want to find new ways to extend my botnet. Everyone has become more and more careful with e-mail attachments, so spamming is no longer an option. Let's take a quick look at Skype: could we use it to automate our malware distribution? If I tried to register 1 million new users from the web page, I'd have to solve a correct Captcha for each one of them. But if I do the registration through the Skype client, the registration screen is the one shown below – no Captcha. When I registered, I also noticed there isn't any e-mail confirmation.

If I already have a small botnet, I can use those weaknesses to register plenty of Skype users. The easiest way to do that is automation. My proof of concept was done with AutoIt, which is a free test automation library. If the desktop application doesn't have the same bot-prevention systems as the web application, a small automation script can create new users – and if you are able to create users, you are also able to do any other task. So my bot could start to call people, add contacts, send files, and so on. The chat could start with: "Hello. I am Jack Nicholson from Skype security contact. We have noticed that you have a major security risk which can be fixed by installing the patch I'm going to send you."

Skype has two major security-related failures in the registration through the application:

  1. No Captcha, which would have prevented automated registration.
  2. No e-mail confirmation, which would have forced the attacker to also exploit the weaknesses of some free e-mail service.

Those two steps would increase the cost a lot, and a simple few hours of bot coding wouldn't be enough. I reported these as a security issue to Skype in mid-July.

And how short is the script that creates a new user? Well… here is my full proof of concept. It works only on my laptop and on machines with the same resolution and other visual settings, but an attacker could make the script more generic with some work.

; Launch the Skype client and wait for it to start up.
Run("C:\Program Files\Skype\Phone\skype.exe")
Sleep(5000)

; Open the account creation dialog ("Luo tili" is Finnish for "Create account").
MouseClick("left", 628, 437)
WinWaitActive("Skype™ - Luo tili")

; Fill in the full name.
Send("Evil Robot")

; Password and its confirmation field.
MouseClick("left", 500, 501)
Sleep(500)
Send("q1w2e3")
MouseClick("left", 794, 496)
Sleep(500)
Send("q1w2e3")

; E-mail address and its confirmation field.
MouseClick("left", 485, 549)
Sleep(500)
Send("something@kiva-mesta.net")
MouseClick("left", 778, 549)
Sleep(500)
Send("something@kiva-mesta.net")
Sleep(500)

; Submit the registration form.
MouseClick("left", 1051, 678)

I ended up in bed with IEC 61066, a standard for safety-related dosimetry systems, so I thought it could be a nice standard. Oh well – it is a nice example of a standard that should definitely not exist. If we have a safety-related item with a security-related quality requirement, the standard should not conflict with itself. Let's quote (p. 109, 14.2.3.5):

In the case where data are not transmitted via a simple electrical data connection from one device to another, for example transmission via a network, it shall not be  possible to modify, delete or add something to these data. In addition, the receiving part of the dosimetry system, for example the computer, shall make sure that the received data are authentic. That means it shall be recognised if the data come from another device than the reader assigned to the dosimetry system.

NOTE One possible technical solution is: All transmitted data are combined in well-defined data sets including date and time of the generation of the data set, a running number, an identification of the transmitting part, for example serial number of the reader, and the relevant data. The whole data set is protected by an electronic signature (CRC, at least CRC-16 with a secret start value). The receiving part, for example computer, checks the contained data by making sure that no running number is missing (or double) and that the identification of the transmitting part is the correct one.

The "possible technical solution" is in conflict with the requirement: a CRC is an insecure algorithm for signing data; it is usable only for error detection. The first problem is the size. Finding a correct CRC-16 value by brute force requires only 2^16 = 65536 attempts, and it doesn't take much time to walk through all of those. Actually the easier approach is to brute-force the secret start value itself, which in the worst case also takes those 65536 steps – and on average the real start value is found after about half of them.
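
Here is a sketch of that brute force. The CRC-16 variant (CCITT polynomial 0x1021) and the message are my own choices for illustration, but any 16-bit start value falls the same way given a single captured data set:

```python
def crc16(data, init):
    """CRC-16 with the CCITT polynomial and a configurable 'secret' start value."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def recover_start_value(message, checksum):
    """Try every 16-bit start value: at most 2**16 = 65536 CRC computations."""
    for init in range(0x10000):
        if crc16(message, init) == checksum:
            return init
    raise ValueError("no start value matched")

secret = 0xBEEF                      # the 'secret' the standard relies on
observed = crc16(b"dose42", secret)  # attacker sees the data and its checksum
print(hex(recover_start_value(b"dose42", observed)))
# → 0xbeef
```

Since the CRC is linear, for a fixed message each start value maps to a distinct checksum, so one observed data set pins the secret down exactly.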

The example solution also does not honor the spirit of the requirement: with it, there is no guarantee that the data originates from the original device. Much better signature algorithms are presented in [1].

I really wonder how that kind of mistake was made – an error detection algorithm mixed up with a signature. My guess is that the creators of the standard are professionals in embedded systems and communication, not security experts. The standard contains cross-domain issues that would have required security knowledge as well. Standard creation is too often too closed, so input from other domains is not received when it is needed.

[1] Ferguson, N., Schneier, B.: Practical Cryptography, Wiley Publishing, 2003.