Stephen Sclafani


Stealing Login Nonces

March 21st, 2017

The Messenger website provides a nonce-based login flow that allows a user who is already logged into their Facebook account to login to the site without having to re-enter their password. It was possible to create a URL that, when loaded by a user who was logged into their Facebook account, would redirect a nonce for their account to another site. The nonce could then be used to create a session for the user. Since session cookies are interchangeable between the two sites, this gave full access to the user’s Facebook account.

Overview of the Login Flow

When a user visits the Messenger website, a Facebook endpoint is loaded in an iframe:

The identifier and initial_request_id parameters are generated by Facebook and are tied to the user’s datr cookie.

If the user is logged into a Facebook account, the endpoint redirects to the redirect URL with a secret nonce:

The user is then asked whether they want to continue as this Facebook user.

If they choose to continue a POST request is made to the endpoint with the nonce:

The same datr cookie that was used to generate the identifier and initial_request_id parameters is required in this POST request. The endpoint uses the nonce to create a session and sets session cookies.

Stealing Nonces

When looking for flaws in a nonce-based login flow where the redirect URL is controlled by a parameter, the first thing I like to test is how strict the endpoint is about modifications to the redirect URL. In the case of the Messenger website, the Facebook endpoint contained a redirect_uri parameter which controlled where the nonce was redirected to. The endpoint did not allow the path of the redirect URL (/login/fb_iframe_target/) to be changed or query string parameters to be added; however, a # could be appended to the path. Additionally, any subdomain could be used.

When a nonce is delivered via a query string parameter in the redirect URL rather than as a hash fragment, it’s not actually required to redirect the nonce offsite in order to steal it. If you can get a URL containing the nonce set as the referrer before redirecting, you can extract it from the request.
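As a sketch of why a query-string nonce is dangerous: if a URL containing it becomes the Referer header for a later request, the nonce can be recovered by simply parsing that header. The parameter name below is a placeholder, not the one Facebook actually used.

```python
from urllib.parse import urlsplit, parse_qs

def nonce_from_referer(referer, param="nonce"):
    """Recover a login nonce from a leaked Referer URL.

    `param` is a hypothetical parameter name; the real flow used its own.
    """
    values = parse_qs(urlsplit(referer).query).get(param)
    return values[0] if values else None
```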

The Messenger website supported #!/path javascript redirects: when a #!/path was appended to a Messenger URL, javascript on the page would redirect to that path after the page was loaded. Since the endpoint allowed a hash to be appended to the redirect URL, it was possible to append a #!/path which would be redirected to after the redirect URL was loaded.
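The hashbang behavior can be modeled as a small function: anything after a #! is treated as an in-site path that page javascript navigates to. This is a simplified reconstruction, not Messenger’s actual script.

```python
from urllib.parse import urlsplit

def hashbang_redirect_path(url):
    """Return the path a #!/path URL would client-side redirect to, or None."""
    fragment = urlsplit(url).fragment
    if fragment.startswith("!/"):
        return fragment[1:]  # drop the '!', keep the leading '/'
    return None
```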

In order to steal a nonce, a way to redirect offsite was needed. I knew that the Messenger website used the /l.php endpoint for redirecting to links. Normally this endpoint uses javascript to remove the referrer before redirecting; however, when redirecting to certain links a plain 302 redirect is used.

Through some Google searches I was able to find a Facebook endpoint that, given a Facebook app ID and the redirect URL set for the app, would automatically redirect to that URL. By creating a Facebook app and setting a redirect URL it was possible to use the endpoint to redirect to any URL.

Combining these issues resulted in the following URL:

Unfortunately, this didn’t work how I expected: the referrer my PoC was receiving was only the origin. This was because every page includes a meta referrer tag:

This meta tag prevents the referrer from leaking data such as the nonce by setting the referrer to the origin in cross-origin requests. This would have been game over if not for the endpoint allowing any subdomain to be used in the redirect URL. Through a search I was able to find a subdomain that also included the meta referrer tag in its pages; however, it did not use the origin-when-crossorigin policy.
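The difference between the two behaviors can be illustrated with a toy model (a simplification of the real Referrer Policy spec): origin-when-crossorigin trims a cross-origin referrer down to the origin, while a page without that policy leaks the full URL.

```python
from urllib.parse import urlsplit

def referer_header(policy, request_url, destination_url):
    """Toy model of the resulting Referer header under a given policy."""
    src, dst = urlsplit(request_url), urlsplit(destination_url)
    cross_origin = (src.scheme, src.netloc) != (dst.scheme, dst.netloc)
    if policy == "origin-when-crossorigin" and cross_origin:
        return f"{src.scheme}://{src.netloc}/"
    return request_url.split("#")[0]  # full URL, minus any fragment
```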

With this subdomain the final PoC URL looked like this:

When loaded by a user who was logged into their Facebook account:

1. The endpoint redirected to the redirect URL with a nonce for the user’s account:

Because the subdomain did not use the origin-when-crossorigin referrer policy in its pages, this URL was set as the referrer for the subsequent requests.

2. The #!/l.php appended to the redirect URL caused javascript on the page to redirect the user’s browser to the endpoint:

Because the link was to such a URL, a plain 302 redirect was used, preserving the referrer containing the nonce.

3. The endpoint automatically redirected the user to the URL in the redirect_uri parameter:

4. The PoC script I created extracted the nonce from the referrer, used it in a POST request to the endpoint to create a session, and displayed the cookies:

The cookies could be used to access the user’s account on both sites.

Note: The identifier and initial_request_id parameters included in the PoC URL were generated from a datr cookie that was also used by the PoC script in the POST request to create the session with the stolen nonce. The parameters didn’t expire and could be reused to create sessions from multiple nonces.

Note: This attack worked even if a user was already logged into the Messenger website.

The Fix

Facebook’s initial fix was to block the ability to add a hash to the redirect URL; however, this fix could be bypassed with the following URL:

In this URL the #!/l.php is appended to the endpoint URL instead. This worked because modern browsers preserve an appended hash through a 302 redirect, even across sites: a hash appended to the first URL gets appended to the redirect target.

To prevent this a site can append its own hash to its redirects:


This hash will replace any hash that’s been appended to the parent URL.
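Browsers re-attach the current fragment to a redirect target that has no fragment of its own; a Location header that carries its own fragment wins. A minimal model of that rule:

```python
from urllib.parse import urlsplit, urlunsplit

def follow_302(current_url, location):
    """Model of fragment handling across a 302 redirect: the Location's
    own fragment takes precedence; otherwise the current fragment carries over."""
    cur, loc = urlsplit(current_url), urlsplit(location)
    if loc.fragment or not cur.fragment:
        return location
    return urlunsplit(loc._replace(fragment=cur.fragment))
```

Appending its own (even empty-of-interest) fragment to every redirect is therefore enough to neutralize an attacker-supplied #!/path.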


I reported this issue to Facebook on Sunday, February 26. After the issue was confirmed by Facebook on Monday morning, an initial fix was put in place in less than two hours. Facebook awarded me a bounty of $15,000 as part of their Bug Bounty Program.

Sun, Feb 26, 2017 at 5:12 AM   – Report sent
Mon, Feb 27, 2017 at 8:01 AM   – Confirmation of issue by Facebook
Mon, Feb 27, 2017 at 9:35 AM   – Temporary fix pushed by Facebook
Mon, Feb 27, 2017 at 10:09 AM  – Notification by me that fix could be bypassed
Mon, Feb 27, 2017 at 11:56 AM   – Confirmation by Facebook that they are working on the fix
Mon, Feb 27, 2017 at 4:29 PM   – Permanent fix pushed by Facebook
Mon, Feb 27, 2017 at 9:39 PM   – Verification of fix by me
Fri, Mar 3, 2017 at 4:36 PM   – $15,000 bounty awarded by Facebook


Hacking Facebook’s Legacy API, Part 2: Stealing User Sessions

July 29th, 2014

This is part two of my research on Facebook’s legacy REST API. If you’re not familiar with the REST API an overview is contained in part one.


To make REST API calls for a user a Facebook application must first obtain a session key for the user. The REST API provided two login flows for applications to obtain a session key, one for Web applications (websites) and one for Desktop applications (JavaScript, mobile, and desktop applications).

Both flows contained vulnerabilities that allowed an attacker to steal user sessions. Once a user’s session had been stolen, it was possible for the attacker to elevate their access from the limited REST API, to the Graph API, and ultimately to resetting the user’s password and taking full control of their account.

The Web Login Flow

To obtain a session key for a user a Web application would direct the user to the Facebook login URL with its API key:

If the user had not already authorized the application they would be prompted to do so. Once the user had authorized the application they would be redirected to the callback URL that had been set for the application along with an auth token:

The application could then exchange the auth token for a session key for the user by calling the method auth.getSession.

Unlike with the Graph API’s login flow, the callback URL could not be overridden via the login URL. At first glance this made the flow unexploitable; however, for many of Facebook’s own internal applications no callback URL was set. When their API keys were used in the login URL, it would redirect the auth token to the Facebook domain:

While it was not possible to override the callback URL, an optional next parameter could be passed a relative path which would get appended to it:

When used with an internal application that had no callback URL set the path would get appended to the Facebook domain:

To steal a user’s auth token an attacker needed to redirect it to their own website. When a Facebook application is loaded from the Facebook mobile website it automatically redirects to its set website URL without any security prompt. It was possible for an attacker to exploit this to steal auth tokens:

When loaded by a user who was logged in to their Facebook account, this URL would redirect the user’s auth token to the attacker’s application, which was passed as a relative path in the next parameter. When used with an internal application, the login URL would redirect to the Facebook subdomain that it was requested from rather than always redirecting to the main Facebook domain.

The attacker’s application would then redirect to its website. It is a feature of Facebook to include any query parameters in this redirect, which included the auth token (even if Facebook had not included query parameters in the redirect, the token would still have been present in the referer):

Once an attacker had stolen a user’s auth token they could call auth.getSession themselves to get a session key for the user. In the above example the Facebook for Android application’s API key (882a8490361da98702bf97a021ddc14d) is used in the login URL. This is an internal application used by Facebook’s Android app. Like many of Facebook’s internal applications, the Facebook for Android application has been authorized and granted full permissions for every user. This was important because a user must have already authorized the application being used before their auth token could be stolen. There were other important reasons for using the Facebook for Android application which I discuss later in this post.

The Desktop Login Flow

In the Web login flow an auth token was generated by the login URL and passed to the application via a callback URL. This was not possible for Desktop applications. In place of a callback a Desktop application would generate an auth token by calling the method auth.createToken. A user would then be directed to the login URL in their browser with the token:

Like with the Web login flow, if the user had not already authorized the application they would be prompted to do so. Once the application had been authorized the auth token would be bound to the user’s account. The user would then be prompted to return to the application:


The application could then exchange the auth token (which it already had) for a session key for the user by calling auth.getSession.

There were two problems with this flow: the auth token returned by auth.createToken was just a random 32-character hexadecimal string, so an attacker could generate their own token without calling the method; and if a user had already authorized the application, the auth token would be bound to their account automatically.
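Since the server never verified that a token originated from auth.createToken, an attacker could mint one locally; any random 32-character hex string would do:

```python
import secrets

def make_auth_token():
    """Attacker-side stand-in for auth.createToken: 16 random bytes,
    hex-encoded, give the expected 32-character token."""
    return secrets.token_hex(16)
```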

An attacker could get a user to load the login URL with the API key of an internal Facebook application that had been authorized for every user and an auth token that they had generated. The token would be bound to the user’s account. Since the attacker had generated the auth token they could then get a session key for the user by calling auth.getSession themselves. Because the call to auth.getSession did not have to be made from the user’s browser, the login URL could be loaded from anywhere where images can be embedded (a webpage, an email, a blog, a message board thread, in comments, etc.):

<img src="http://attackerswebsite/exploit">

When a user’s browser attempted to render this image it would load the attacker’s URL. Upon loading, the URL would generate an auth token and pass it to an asynchronous task. It would then redirect to the Facebook login URL with the token.

When loading an img tag’s URL, browsers will automatically follow a certain number of redirects. When the user’s browser followed the redirect to the login URL it would include the user’s Facebook cookies in the request. If the user was logged in to Facebook, the auth token would be bound to their account. This worked because the login URL only had to be loaded for the auth token to be bound to a user’s account; it did not have to be rendered.

Back on the attacker’s server, the task would wait long enough for the login URL to have been loaded by the user’s browser and would then attempt to get a session key for the user by making calls to auth.getSession with the auth token that it was passed. Once a session key had been obtained the task would log it. The end result was that any user who loaded a page that contained an img tag with the attacker’s URL while logged in to their Facebook account would have their session stolen.
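The whole Desktop-flow attack can be condensed into a toy in-memory model (all class and method names here are illustrative, not Facebook’s):

```python
import secrets

class ToyLoginServer:
    """Toy model of the flawed Desktop flow: merely loading the login URL
    binds an unbound auth token to whichever user is logged in."""

    def __init__(self):
        self.bound = {}  # auth_token -> user_id

    def load_login_url(self, auth_token, logged_in_user):
        # The flaw: no check that the token came from auth.createToken.
        self.bound.setdefault(auth_token, logged_in_user)

    def auth_get_session(self, auth_token):
        # Returns a session for whoever the token was bound to, if anyone.
        return self.bound.get(auth_token)

server = ToyLoginServer()
token = secrets.token_hex(16)           # attacker mints a token
server.load_login_url(token, "victim")  # victim's browser loads the <img> URL
session = server.auth_get_session(token)  # attacker redeems the token
```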

From Auth Token to Account Takeover

Once an attacker had stolen a user’s auth token, they needed to call auth.getSession to exchange it for a session key for the user. According to the REST API documentation this should have been impossible, as the call to auth.getSession must be signed with an application’s secret, which the attacker didn’t have. Web applications can safely embed their application secret in their code but Desktop applications cannot (because client-side code is easily reverse engineered). The REST API’s solution to this problem was to require Desktop applications to have a server-side component which would make the call to auth.getSession and return the session key to the application. With the introduction of the Graph API, Facebook introduced Client Tokens, which replaced the need for the server-side component:

The client token is an identifier that you can embed into native mobile binaries or desktop apps to identify your app. The client token isn’t meant to be a secret identifier because it’s embedded in applications. The client token is used to access app-level APIs, but only a very limited subset. The client token is found in your app’s dashboard. Since the client token is used rarely, we won’t talk about it in this document. Instead it’s covered in any API documentation that uses the client token.

One of those app-level APIs is the auth.getSession method. A client token for the Facebook for Android application is embedded in the Facebook Android app’s APK. An attacker could decompile the APK and extract the token. The attacker could then use the token to sign calls to auth.getSession:

The call returns a session key, a session secret, the user’s ID, and an expiration time:

Sessions for the Facebook for Android application have an expiration of 0 which means that they never expire (even if a user logs out). A session is only invalidated if the user changes their password. Sessions are also granted full permissions. Using their session key an attacker could call any of the REST API methods on behalf of the user. Being a deprecated API, the REST API is limited in its access as compared to the current Graph API. As part of the migration from the REST API to the Graph API Facebook provided an endpoint for application developers to convert their session keys into Graph API access tokens. This endpoint, however, requires having the actual application secret, not just a client token.

Facebook’s mobile apps use a number of private REST API methods for authentication and user functionality. These methods could be called by an attacker using a user’s stolen session key. One of these methods is auth.getSessionForApp. This method is used by the mobile apps to get new session information for a user from a cached access token. While the mobile apps call this method with an access token, it could also be called with a session key:

The call returns an access token and session cookies for the user:

The access token is granted full Graph API permissions. An attacker could use the access token to call any of the Graph API’s endpoints. The session cookies could be used by the attacker to login to the user’s account directly.

Even with the ability for an attacker to login to a user’s account, there are still some features that require knowing the user’s password. Facebook’s Android app allows a user to add a new phone number to their account. It does this by calling the method user.confirmPhone. This method does not require the user’s current password. An attacker could call this method to add their own number to a user’s account:

The method is passed the phone confirmation code that is returned from texting F to 32665 (in the US). Once a phone number had been added to a user’s account, the attacker could initiate a password reset request for the user and use the “Text me a code to reset my password” option to have a code sent to the newly added number:



I reported the vulnerability in the Desktop login flow to Facebook on May 3rd and the vulnerability in the Web login flow on May 9th. A temporary fix for the Desktop login flow was put in place by Facebook on May 4th. Both issues were fixed permanently on May 21st. The issues took longer to fix than the API endpoint issue that I documented in part one, as the flows were still being used by many older Facebook applications and could not simply be disabled. For the two issues a combined bounty of $20,000 (2x $10,000) was awarded by Facebook as part of its Bug Bounty Program.


May  3, 2014  7:33am – Desktop login flow report sent
May  3, 2014  9:31pm –  Confirmation of issue from Facebook
May  4, 2014  3:53pm –  Temporary fix for Desktop login flow pushed by Facebook
May  6, 2014 11:22pm –  Notification by Facebook that it would take a couple of days for a permanent fix to be put in place
May  9, 2014  4:42pm –  Web login flow report sent
May  9, 2014  5:06pm –  Confirmation of issue by Facebook
May  9, 2014 10:59pm –  Notification by Facebook that it would take a couple of more days for a permanent fix to be put in place for both issues
May 13, 2014  1:22pm –  Notification by Facebook that they were still working on a permanent fix for both issues
May 17, 2014  9:28am –  Notification by Facebook that a permanent fix for both issues would be pushed on the 20th
May 21, 2014  1:02am –  Permanent fix for both issues pushed
May 22, 2014  1:29am –  Confirmation of fix sent
May 30, 2014  5:17pm –  $20,000 (2x $10,000) combined bounty awarded by Facebook


Hacking Facebook’s Legacy API, Part 1: Making Calls on Behalf of Any User

July 8th, 2014


A misconfigured endpoint allowed legacy REST API calls to be made on behalf of any Facebook user using only their user ID, which could be obtained from their profile or through the Graph API. Through REST API calls it was possible to view a user’s private messages, view their private notes and drafts, view their primary email address, update their status, post links to their timeline, post as them to their friends’ or public timelines, comment as them, delete their comments, publish a note as them, edit or delete any of their notes, create a photo album for them, upload a photo for them, tag them in a photo, and like and unlike content for them. All of this could be done without any interaction on the part of the user.

An Interesting Request

When starting a pentest I like to browse the target site with Burp open to get a feel for how the site is structured and to see the requests that the site is making. While browsing Facebook’s mobile site the following request caught my attention:


The request was used to get your bookmarks. It was interesting for three reasons: it was making an API call rather than a request to a dedicated endpoint for bookmarks; it was being made to a nonstandard API endpoint; and the call was neither Graph API nor FQL. Doing a Google search for bookmarks.get turned up nothing. After some guessing I found that the method notes.get could also be called, which returned your notes. Through some more searching I found that the endpoint was using Facebook’s deprecated REST API.

The Facebook REST API

The REST API was the predecessor of Facebook’s current Graph API. All of the documentation for the REST API has been removed from Facebook’s website but I was able to piece together some of it from the Wayback Machine. The REST API consists of methods that can be called by both Web applications (websites) and Desktop applications (JavaScript, mobile, and desktop applications). To make a call an application makes a GET or POST request to the REST API endpoint:



The request consists of the method being called, the application’s API key, a session key for a user, any parameters specific to the method, and a signature. The signature is an MD5 hash of all of the parameters and either the application’s secret, which is generated along with the API key when the application is registered with Facebook, or a session secret which is returned with a session key for a user. Web applications sign requests with their application secret. Requests signed with the application secret can make calls on behalf of users and to administrative methods. Desktop applications sign requests with a user’s session secret. Requests signed with a session secret are limited to making calls only for that user. This allows Desktop applications to make calls without exposing their application secret (which would have to be embedded in the application). An application obtains a session key for a user through an OAuth-like authentication flow.
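The signing scheme described above (sort the parameters, concatenate them as key=value pairs, append the secret, MD5 the result) can be sketched as follows; the parameter values are made up for illustration:

```python
import hashlib

def sign_rest_call(params, secret):
    """Compute the REST API request signature: MD5 over the sorted
    key=value pairs concatenated with the signing secret."""
    payload = "".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.md5((payload + secret).encode()).hexdigest()

call = {"method": "users.setStatus", "api_key": "APP_KEY",
        "status": "hello", "v": "1.0"}
call["sig"] = sign_rest_call(call, "APP_SECRET")
```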

Making Calls on Behalf of Any User

From reading the documentation I knew the address of the actual REST API endpoint, which meant that the endpoint I had found had to be acting as a proxy. This raised the question: what Facebook application was it proxying requests as, and what permissions did that application have?

So far I had only called read methods. I attempted to call the publishing method users.setStatus:


Calling this method updated the status on the account that I was logged in to. The update was displayed as being made via the Facebook Mobile application:


This is an internal application used by the Facebook mobile website. Many internal Facebook applications are authorized and granted full permissions for every user. I was able to confirm that this was the case for the Facebook Mobile application by calling the methods friends.getAppUsers and fql.query. Calling friends.getAppUsers showed that the application was authorized for every friend on the account that I was logged in to. Calling fql.query allowed me to make a FQL query on the permissions table to lookup the permissions that the application had been granted.

That I was being authenticated with the REST server as the account that I was logged in to meant that the proxy had to be generating a session key from my session and passing it with each request. This should have limited my ability to make calls only for that account, however, I noticed that in the documentation for many of the methods a session key is optional if the method is being called by a Web application (i.e. the request is being signed with the application’s secret). For these methods a uid parameter can be passed in place of a session key and set to the user ID of any user who has authorized the application and granted it the required permission for the method being called.

Through calling users.setStatus I had been able to find out what Facebook application the proxy was using, but more importantly I had been able to confirm that the proxy would pass any parameters to the REST server that I included in a request. The question now was: Was the proxy signing requests with the Facebook Mobile application secret or my session secret? And if the proxy was using the application secret, would the REST server accept the uid parameter? Including the uid parameter in a request would not stop the proxy from also passing a session key and there was the possibility that the REST server would reject the request if both were passed.

To test it, I tried updating the status on a different account than the one I was logged in to by calling users.setStatus with the uid parameter set to the user ID of that account. It worked. The status on the account whose user ID I passed was updated. Not only was the proxy signing requests with the application secret but, equally important, when passed both a session key and the uid parameter the REST server would prioritize the uid.

The documentation for the REST API states that the use of the uid parameter is limited to only those users who have authorized the application and granted it the required permission for the method being called. Since the Facebook Mobile application had been authorized and granted full permissions for every user, it was possible to use the uid parameter to make calls on behalf of any user using any of the methods that supported it.

The following methods can be called with the uid parameter:

message.getThreadsInFolder – Returns all of a user’s messages.
users.setStatus – Updates a user’s status. Posts a link to a user’s timeline.
stream.publish – Publishes a post to a user’s timeline, friend’s timeline, page, group, or event.
stream.addComment – Adds a comment to a post as a user.
stream.removeComment – Removes a user’s comment from a post.
notes.create – Creates a new note for a user.
notes.edit – Edits a user’s note.
notes.delete – Deletes a user’s note.

This method is only supposed to delete notes that were created by the user through the application. However, in my tests, when called through the proxy it would delete any note.

photos.createAlbum – Creates a new photo album for a user.
photos.upload – Uploads a photo for a user.
photos.addTag – Tags a user in a photo.
stream.addLike – Likes content for a user.
stream.removeLike – Unlikes content for a user.

Some methods that required a session key would return additional information when called through the proxy:

users.getInfo – Returns information on a user.

This method is only supposed to return the information on the user that is viewable to the calling user. However, when called through the proxy it would return the user’s primary email address regardless of the relationship between the user and the calling user.

notes.get – Returns the notes for a user.

This method is only supposed to return the notes for the user that are viewable to the calling user. However, when called through the proxy it would return all of the user’s notes, including their drafts.

In addition to the above user methods, the following administrative methods could be called through the proxy on behalf of the Facebook Mobile application:

admin.getAppProperties – Gets the property values set for the application.
admin.setAppProperties – Sets the property values for the application.
admin.getRestrictionInfo – Returns the demographic restrictions for the application.
admin.setRestrictionInfo – Sets the demographic restrictions for the application.
admin.getBannedUsers – Returns a list of the users who have been banned from the application.
admin.banUsers – Bans users from the application.
admin.unbanUsers – Unbans users from the application.
auth.revokeAuthorization – Revokes a user’s authorization of the application.
auth.revokeExtendedPermission – Revokes an extended permission for a user of the application.
notifications.sendEmail – Sends an email to a user as the application.


I reported this issue to Facebook on April 23rd. A temporary fix was in place less than three hours after my report. A bounty of $20,000 was awarded by Facebook as part of their Bug Bounty Program.


April 23,  4:42pm – Initial report sent
April 23,  5:50pm – Request for clarification from Facebook
April 23,  6:08pm – Clarification sent
April 23,  6:49pm – Acknowledgment of issue by Facebook
April 23,  7:38pm – Notification of temporary fix by Facebook
April 23,  8:39pm – Confirmation of temporary fix sent
April 29, 11:03pm – Notification of permanent fix by Facebook
April 30, 12:58am – Confirmation of permanent fix sent
April 30,  8:35pm – Bounty awarded

Update: Part 2 has been posted.


Obtaining The Primary Email Address Of Any Facebook User

July 9th, 2013

Given only their ID, it was possible to obtain the primary email address of any Facebook user regardless of their privacy settings.

Anyone who has subscribed to a public mailing list knows the problem of members inviting their entire contacts list, including the mailing list, to every new social site and app. This has turned mailing list archives into a Wayback Machine for email notifications. Searching through some old mailing lists I came across a Facebook invitation reminder circa 2010:


Clicking on the link in the email displayed a sign up page pre-filled with the list’s address and the name of a person who had used the link to sign up for an account:


The link contained two parameters: “re” and “mid”:

Changing the re parameter did nothing; however, changing parts of the mid parameter resulted in other addresses being displayed. Taking a closer look at the parameter, its value was actually a string of values with “G” acting as a delimiter:

59b63a G 5af3107aba69 G 0 G 46

Only the second value was important: it was an ID associated with the address that the invitation was sent to, in hex. A Facebook user’s numerical ID could be put as this value and their primary email address would be displayed. A user’s numerical ID is considered public information and can be obtained from the source of their profile or through the Graph API.
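Rebuilding the mid value for an arbitrary user reduces to hex-encoding their numerical ID and splicing it into the second field; the other fields are copied from the sample value above:

```python
def build_mid(user_id):
    """Assemble a 'G'-delimited mid value with the target's numerical ID,
    in hex, as the second field (other fields taken from the sample)."""
    return "G".join(["59b63a", format(user_id, "x"), "0", "46"])
```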


This issue was reported to Facebook on March 22nd and was fixed within 24 hours. A bounty of $3,500 was awarded as part of their Bug Bounty Program.


Vulnerabilities in Heroku

January 9th, 2013

Recently, while contemplating hosting options for my startup, I decided to take a look at Heroku. Upon signing up, I noticed that Heroku used a two-step sign up process. Multi-step sign up processes are notorious for containing security vulnerabilities, and after taking a closer look at Heroku’s I found that it was possible, given only a user’s ID, to obtain their email address and to change their password.

Sign Up Vulnerability

In the first step of Heroku’s sign up process a user enters their email address:


Upon submitting the form, the user is sent a confirmation email containing a link to activate their account. The activation link consists of the user’s ID and a token:

Upon loading the activation link, the user is prompted to set a password for their account. The email address that the user entered in the first step is displayed:


When the form is submitted a POST is made containing the user’s ID and the token from the activation link:



If the POST was made with the token parameter removed and the password fields left blank, the resulting error page would display the email address of any user whose ID was put as the value of the “id” parameter:



If the POST was made with the token parameter removed and the password fields filled in, the user’s password would be changed:
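A hypothetical reconstruction of the broken handler logic (all names invented): the token is validated only when it is present, so simply omitting it bypasses the check, and the error page leaks the looked-up email address.

```python
def set_password(db, user_id, token=None, password=None):
    """Flawed activation handler: validation is skipped when no token is sent."""
    user = db[user_id]
    if token is not None and token != user["token"]:
        return "invalid token"
    if not password:
        # the re-rendered error page discloses the account's email address
        return f"error: password required (email: {user['email']})"
    user["password"] = password
    return "password changed"
```

The fix is to treat an absent token exactly like a wrong one.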



Reset Password Vulnerability

A second vulnerability was found in Heroku’s reset password functionality. By modifying the POST request it was possible to reset the password of a random (nondeterministic) user each time that the vulnerability was used.

If a user has forgotten their Heroku password they can use the Reset Password form to reset it:


Upon submitting the form, the user is sent an email containing a link with which they can reset their password. The link consists of an ID:

Upon loading the link, the user is prompted to set a new password for their account:


When the form is submitted a POST is made containing the ID in both the URL and body of the request:



If the POST was made with ID removed from both the URL and body, the password of a random account would be reset and the account automatically logged in to:




I reported these issues to Heroku on December 19. Initial fixes were in place within 24 hours. Heroku asked me to hold off on publishing a public disclosure so that they could review their code, which I agreed to.

Update: Heroku’s official response.

Despite finding these vulnerabilities I plan to host my startup at Heroku. Security vulnerabilities happen and Heroku handled the reports well.

Note: All of Heroku’s forms are protected against CSRF with an “authenticity_token” parameter. I removed the parameter from the above examples for clarity.


CSRF Vulnerability in OAuth 2.0 Client Implementations

April 6th, 2011

OAuth 2.0 is the next generation of the OAuth protocol. OAuth 2.0 has been designed to make implementation simpler for both service providers and clients. Unfortunately, this simplification has led the majority of client website implementations to be vulnerable to cross-site request forgery.


Facebook is currently the largest service provider using OAuth 2.0. Facebook offers OAuth 2.0 as an authentication option for its API. When a client website that has implemented the Facebook API authenticates a user using OAuth 2.0 the website redirects the user to an authorization URL on Facebook:

The user is prompted by Facebook to login and authorize the website. If authorization is granted the user is redirected to the callback URL in the redirect_uri parameter along with a code:

The website can then exchange the code for an OAuth access token and use the token to make API requests on behalf of the user.

Unlike with OAuth 1.x, where a request token is passed through the entire flow, there is nothing that ties the authorization request to the returned code. An attacker can generate a code for their own Facebook account for a target website and can then get a victim to load the code in the website’s callback URL. If the victim is logged in, the website will automatically use the code to link the victim’s account to the attacker’s Facebook account. If the website has implemented Facebook as a secondary login option the attacker can then login to the victim’s account using Facebook.

I tested this attack against The New York Times, Photobucket, TripAdvisor, StackOverflow, Digg and Formspring all of which were vulnerable.


The OAuth 2.0 spec defines an optional “state” parameter which clients can use to maintain state between the authorization request and the callback. Clients can use this parameter to protect themselves against this attack by passing a unique-to-user nonce as the value of the parameter when redirecting a user:

The nonce is included with the code in callback if authorization is granted:

A client can then check if the nonce is valid and that it belongs to the logged in user before taking action.
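A minimal sketch of the mitigation in Ruby — the session store, helper names, and URL parameters here are illustrative:

```ruby
require "securerandom"

# On the authorization request: generate a nonce, store it in the user's
# session, and pass it as the "state" parameter. (URL and parameter
# values are illustrative.)
def authorization_url(session)
  session[:oauth_state] = SecureRandom.hex(16)
  "" \
    "&state=#{session[:oauth_state]}"
end

# On the callback: only exchange the code if the returned state matches
# the nonce stored for this session. The nonce is single use.
def callback_valid?(session, params)
  expected = session.delete(:oauth_state)
  !expected.nil? && params["state"] == expected
end
```

Because the attacker cannot know the nonce stored in the victim’s session, a forged callback URL fails the check.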


Facebook was notified of this issue in January. Since being notified they have taken the following actions:

  • Notified major developers of the issue.
  • Added a section detailing the issue and the use of the “state” parameter for mitigation to their authentication documentation.
  • Began updating their SDKs to use the “state” parameter.

Beyond Facebook

While Facebook is the largest service provider using OAuth 2.0 this is not a Facebook specific issue. Any website that implements a third party service that uses OAuth 2.0 for authentication can be vulnerable to this attack. In addition to Facebook, members of the OAuth community were notified of this issue and discussion to add details of the issue to the OAuth 2.0 spec is ongoing.


A Parsing Quirk and a #NewTwitter XSS

October 4th, 2010

I generally don’t blog about individual XSS issues, however, this particular one was made more interesting as it took advantage of a parsing quirk of browsers that I’ve seen increasingly be an issue as web applications become more “application” like with the help of javascript.

In 2009 the Mikeyy worm was able to inject javascript into user profiles by exploiting the lack of a validity check in the custom colors functionality and the lack of sanitation when displaying the custom colors on the profile. In response Twitter added sanitation and has since implemented a check on the validity of custom colors. An attempt to submit invalidly formatted colors results in an error being returned. In addition to setting custom colors a user can upload a custom background image. When the image is uploaded the user’s current colors are also sent. Twitter failed to extend their validity check to this functionality allowing the colors to be set to any arbitrary value. On both the old and new Twitter custom colors are sanitized when displayed on the profile, however, the profile is not the only place on the new Twitter where a user’s colors are used.

When the new Twitter loads, a call to the javascript function twttr.API._requestCache.inject is made to display the user’s timeline. Passed to this function is the metadata of the most recent tweets made by the users that the user is following. A tweet’s metadata includes a lot of data that is not displayed, including the profile colors of the user who made it. The colors were included unsanitized.

twttr.API._requestCache.inject("statuses/home_timeline",[{'contributor_details': true, 'include_entities': 1}],[{"retweeted":false,"truncated":false,"geo":null,"entities":{"hashtags":[],"user_mentions":[],"urls":[]},"place":null,"retweet_count":null,"source":"web","favorited":false,"contributors":null,"user":{"contributors_enabled":false,"profile_sidebar_fill_color":"DDEEF6","description":null,"geo_enabled":false,"time_zone":null,"following":true,"notifications":false,"profile_sidebar_border_color":"C0DEED","verified":false,"profile_image_url":"","follow_request_sent":false,"profile_use_background_image":true,"profile_background_color":"\"</script><script>alert('XSS')</script>","screen_name":"stephensclafani","profile_background_image_url":"","followers_count":0,"profile_text_color":"333333","protected":false,"show_all_inline_media":false,"profile_background_tile":false,"friends_count":0,"url":null,"name":"Stephen Sclafani","listed_count":0,"statuses_count":1,"profile_link_color":"0084B4","id":1598801,"lang":"en","utc_offset":null,"favourites_count":0,"created_at":"Tue Mar 20 07:11:24 +0000 2007","location":null},"id":25168131043,"coordinates":null,"in_reply_to_screen_name":null,"in_reply_to_user_id":null,"in_reply_to_status_id":null,"text":"xss","created_at":"Tue Sep 2120:38:26 +0000 2010"}], 1);

One might expect the injected javascript not to be executed, since it’s within a quoted javascript string and the injected double quote is escaped with a backslash. However, due to a quirk in browser parsing this is not the case. When a browser parses a document and finds a <script> tag, it looks for the next </script> tag and ends the block of javascript there, even if that </script> tag is within a quoted string. By including a </script> tag before the injected block of javascript, the block of javascript being injected into is ended prematurely. This breaks the quoted string, allowing the injected javascript to be executed.

#NewTwitter XSS

By combining these two issues it was possible for a user to inject javascript into the timelines of all of their followers. Twitter was notified of the two issues and has since deployed fixes for both.

For a second example of the quirk see Mike Bailey’s post on an XSS.


Ruby on Rails: Secure Mass Assignment

January 4th, 2010

The security implications of mass assignment have been documented since Rails’s inception and yet many applications are still vulnerable. In a survey of the top Rails websites I found that the majority were vulnerable to the issue. At three I was able to gain full admin privileges, at others I was able to gain access to other users’ data, in one case to partial credit card information.

Mass Assignment

@user =[:user])

In the above line of code mass assignment is used to populate a newly created User from the params hash (user submitted data). If no precautions are taken an attacker can pass in their own parameters and set any User attributes.

t.column :admin, :boolean, :default => false, :null => false

Consider an application that has a users table containing an admin column. When creating a new account an attacker can pass in the parameter user[admin] set to 1 and make themselves an admin.

has_many :blog_posts

It’s not only database columns that can be attacked in this fashion. Consider an application that has the above line of code in its User model. Because has_many allows the setting of ids via mass assignment, an attacker can pass in the parameter user[blog_post_ids][] and take control of other users’ blog posts.


Rails provides the class method attr_accessible which takes a whitelist approach to protection. Using attr_accessible you can specify the attributes that can be set and all others will be protected.

Testing for Vulnerability

From a black box perspective it’s easy to determine whether an application is vulnerable by attempting to set an attribute that you know could not exist. In the majority of cases a 500 error will be returned. In a few cases an error page with an error such as “Unable to create account” or “Unable to save settings” was returned.


Exploiting Unexploitable XSS

May 26th, 2009

XSS that are protected by CSRF protection or where other mitigating factors are present are usually considered to be unexploitable or of limited exploitability. This post details real world examples of exploiting “unexploitable” XSS in Google and Twitter. While the XSS detailed in this post are site specific the methods that were used to exploit them could be applied to other websites with similar implementations. Alex’s (kuza55) Exploiting CSRF Protected XSS served as inspiration for this post.


Google has services deployed across many different domains and subdomains and as a result requires a way to seamlessly authenticate members who are logged in to their Google Account. Google’s solution to this problem is the ServiceLogin URL.

When called by a member who is logged in to their Google Account the URL generates an auth URL and redirects to the particular service.

When the auth URL is loaded the service uses the auth token to log the member in. No verification was done between the service and Google to ensure that the account that the member was being logged in to was actually theirs. It was possible, then, for an attacker to generate an auth URL for their account at a service and to use it to log a member in without affecting the member’s Google Account session. Because the member’s Google Account session was untouched it was also possible for the attacker to use the ServiceLogin URL to log the member back into their own account at the service.

Google Sites XSS

On the Google Sites User Settings page a user’s settings were used in a javascript function unsanitized. As a result, it was possible for an attacker to submit a setting with a value that would break out of the function and inject javascript into the page. Since the User Settings form is protected against CSRF, this was a self-only XSS. However, with the ability to log a member into an account and back into their own account the attacker could exploit this issue as if it was a full blown reflected XSS.

Blogger XSS

On the Advanced Settings page for publishing a Blogger blog on a custom domain a javascript function takes the value of the “Your Domain” field and displays it in the “Use a missing files host?” section. This function would display the value unsanitized. If the Advanced Settings form is submitted with javascript as the domain an invalid domain error is returned. On the error page the function is executed on page load, which would result in the javascript being reflected on the page. Blogger’s forms are protected against CSRF, however, like with the bad domain error message, the error message for using a bad CSRF token is displayed along with the XSS on the error page. This XSS was limited, however, in that to make a successful POST a blog ID belonging to the currently logged in member was required. An attacker would have to hard code their exploit for a specific target blog, otherwise the POST would be redirected to a login page. The attacker could get around this limitation, however, by logging a member into an account that they had created with a known blog ID, which could then be used in the POST to trigger the XSS.

The XSS in Blogger was made easier to exploit due to the error message for using a bad CSRF token being displayed on the same page as the XSS. When the error message is properly displayed on a separate page, reflected XSS that require a POST and are protected by CSRF protection are considered to be unexploitable, since it should be impossible for an attacker to know the CSRF token. However, this is not always the case. Many sites’ CSRF protection implementations, including those of the majority of Google services, tie the CSRF token to a member’s account but not to a specific session of that account, making the token compatible across sessions of the same account. With the ability to log a member into an account and to predict the CSRF token for the account, it becomes possible for an attacker to exploit these XSS as if they were unprotected.
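The difference between the two token schemes can be sketched in a few lines of Ruby — the key and derivation are hypothetical, but the property is the one that matters:

```ruby
require "openssl"

SECRET = "server-side secret"   # hypothetical signing key

# Token tied to the account: every session of the same account gets the
# same token, so an attacker who can log the victim into an account they
# control can predict it.
def account_csrf_token(account_id)
  OpenSSL::HMAC.hexdigest("SHA256", SECRET, "account:#{account_id}")
end

# Token tied to the session: each new session gets a fresh token,
# unpredictable even when the attacker controls the account.
def session_csrf_token(session_id)
  OpenSSL::HMAC.hexdigest("SHA256", SECRET, "session:#{session_id}")
end
```

Two sessions of the same account share an account-bound token but never a session-bound one, which is exactly what the attacks above rely on.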

YouTube XSS

A YouTube member can create paid promotions for their videos. A promotion consists of a frame from the video and three lines of text. On the second step of the promotion creation process, the “Write your Promotion” page, a member is given three frames from their video to choose from and three text fields to enter their three lines of text. When the form is submitted, if the text fields contain invalid characters such as html/javascript an error is returned. On the error page the value of the first text field was used unsanitized in the title and alt attributes of the promotion’s image. YouTube’s forms are protected against CSRF and the error message for using a bad CSRF token is displayed on its own page. However, because YouTube’s CSRF tokens are compatible across sessions of the same account, it was possible for an attacker to exploit this XSS by logging a member into an account that they control.

Since being notified of these XSS Google has fixed the issues. Google has also started deploying protection to prevent the exploitation of auth URLs. The protection has already been deployed at Gmail and Google is looking to extend it to other services.


Twitter XSS

On every Twitter page a member’s language preference is used as a variable in the Google Analytics code. For members who had not yet set a language preference it was possible for an attacker to set it temporarily by using the URL:

The value would be used in the Google Analytics code unsanitized.

Since setting any of the profile settings also sets a language preference, and since setting their profile settings is the first thing most Twitter members do after registering, very few members would have been vulnerable to this XSS.

Twitter, like many sites that have implemented CSRF protection, did not extend the protection to its login page, allowing login CSRF attacks. With a login CSRF attack it would have been possible for an attacker to exploit the XSS by first logging a member into an account that had not yet had its language preference set. However, since using login CSRF destroys a member’s session, this attack would have had limited exploitability.

Twitter has a “Remember me” feature on its login page that when used will remember a member’s session after they have shut down their browser. Different sites implement this feature in different ways. Some sites set the same session cookies but make them persistent if the feature is used, other sites such as Twitter set a unique persistent cookie in addition to the session cookie. If an attacker used a login CSRF attack against a member who had logged in to Twitter using the “Remember me” feature, and in the attack the feature was unused, the member’s session would be overwritten but their “Remember me” cookie would not be. The attacker could then exploit the XSS and either steal the cookie or use it to log the member back into their own account and continue with the attack.

Since being notified of the XSS Twitter has fixed the issue and has extended its CSRF protection to its login page.


Clickjacking & OAuth

May 4th, 2009

This post details clickjacking and how it poses a serious security threat to OAuth service providers.


Clickjacking is when a visitor to a web page is tricked into clicking on an element that they believe to be harmless when in reality they are clicking on an element on a different website that exposes protected data or grants an attacker access. There are a number of ways to implement a clickjacking attack, but the most common way is to load the target website in a transparent iframe. The iframe is then positioned so that the target element that the attacker wishes a visitor to click on is positioned over a dummy element on the page that the iframe is contained on. Because the iframe is given a higher stack order than the dummy element, when a visitor clicks on the dummy element they are actually clicking on the hidden transparent element.

You can read more on clickjacking from Robert Hansen and Jeremiah Grossman here.


In 3-legged OAuth, as the result of an action taken by a User, a Consumer requests a Request Token from the Service Provider and then passes that Request Token to the Service Provider’s Authorization URL through redirection. The Service Provider then displays a page prompting the User to approve or deny the Consumer access.


In this example Faji is the Service Provider and Beppa is the Consumer. If Beppa’s developers were malicious they could use a clickjacking attack against Faji’s approval page to trick users into granting their application access.



From the user’s perspective the link appears to be harmless, but in reality clicking it will grant Beppa access.

This is a basic example, however with a little social engineering it becomes trivial to get a user to click on the dummy element and have the attack go undetected.


There are two solutions to protect against clickjacking each with its own issues.

Service providers can use frame busting scripts to prevent their approval page from being framed. However, due to Internet Explorer’s support of a security=”restricted” attribute on frames, frame busting scripts can be disabled in IE. For IE8 Microsoft has announced support for an X-Frame-Options HTTP response header which service providers can use to prevent their approval page from rendering in a frame. However, IE8 is not yet widely used. One workaround is to require that Internet Explorer users have javascript enabled, however this comes with its own set of issues.

Service providers can require that users authenticate themselves before being shown the approval page, even if they are already signed in to the service. By doing so it becomes impossible for their approval page to be framed, since a user’s credentials are not known to Consumers. This can be an inconvenience for some users, but more importantly, by conditioning users to enter their credentials each time they are redirected from a Consumer it can increase the potential of phishing attacks. Service providers that choose this solution should educate their users about phishing attacks and should provide mechanisms that make it easier for users to confirm the authenticity of their site.


At the time of this post all service providers had been notified.
