Stephen Sclafani

Ruby on Rails: Secure Mass Assignment

January 4th, 2010

The security implications of mass assignment have been documented since Rails’s inception, and yet many applications are still vulnerable. In a survey of the top Rails websites I found that the majority were vulnerable to the issue. At three I was able to gain full admin privileges; at others I was able to gain access to other users’ data, in one case including partial credit card information.

Mass Assignment

@user =[:user])

In the above line of code mass assignment is used to populate a newly created User from the params hash (user-submitted data). If no precautions are taken, an attacker can pass in their own parameters and set any User attribute.

t.column :admin, :boolean, :default => false, :null => false

Consider an application that has a users table containing an admin column. When creating a new account, an attacker can pass in the parameter user[admin] set to 1 and make themselves an admin.

has_many :blog_posts

It’s not only database columns that can be attacked in this fashion. Consider an application that has the above line of code in its User model. Because has_many allows the setting of ids via mass assignment, an attacker can pass in the parameter user[blog_post_ids][] and take control of other users’ blog posts.


Rails provides the class method attr_accessible, which takes a whitelist approach to protection. Using attr_accessible you can specify the attributes that can be set; all others will be protected.

Testing for Vulnerability

From a black box perspective it’s easy to determine if an application is vulnerable by attempting to set an attribute that you know could not exist. In the majority of cases a 500 error was returned. In a few cases an error page with a message such as “Unable to create account” or “Unable to save settings” was returned.


Exploiting Unexploitable XSS

May 26th, 2009

XSS that are behind CSRF protection, or where other mitigating factors are present, are usually considered unexploitable or of limited exploitability. This post details real-world examples of exploiting “unexploitable” XSS in Google and Twitter. While the XSS detailed in this post are site-specific, the methods that were used to exploit them could be applied to other websites with similar implementations. Alex’s (kuza55) Exploiting CSRF Protected XSS served as inspiration for this post.


Google has services deployed across many different domains and subdomains and as a result requires a way to seamlessly authenticate members who are logged in to their Google Account. Google’s solution to this problem is the ServiceLogin URL.

When called by a member who is logged in to their Google Account the URL generates an auth URL and redirects to the particular service.

When the auth URL is loaded the service uses the auth token to log the member in. No verification was done between the service and Google to ensure that the account that the member was being logged in to was actually theirs. It was possible, then, for an attacker to generate an auth URL for their own account at a service and to use it to log a member in without affecting the member’s Google Account session. Because the member’s Google Account session was untouched, it was also possible for the attacker to use the ServiceLogin URL to log the member back into their own account at the service.

Google Sites XSS

On the Google Sites User Settings page a user’s settings were used in a javascript function unsanitized. As a result, it was possible for an attacker to submit a setting with a value that would break out of the function and inject javascript into the page. Since the User Settings form is protected against CSRF, this was a self-only XSS. However, with the ability to log a member into an account and back into their own account, the attacker could exploit this issue as if it were a full-blown reflected XSS.

Blogger XSS

On the Advanced Settings page for publishing a Blogger blog on a custom domain, a javascript function takes the value of the “Your Domain” field and displays it in the “Use a missing files host?” section. This function displayed the value unsanitized. If the Advanced Settings form was submitted with javascript as the domain, an invalid domain error was returned. On the error page the function is executed on page load, resulting in the javascript being reflected on the page. Blogger’s forms are protected against CSRF; however, like the bad domain error message, the error message for using a bad CSRF token is displayed along with the XSS on the error page. This XSS was limited in that making a successful POST required a blog ID belonging to the currently logged-in member. An attacker would have had to hard code their exploit for a specific target blog; otherwise the POST would be redirected to a login page. The attacker could get around this limitation, however, by logging a member into an account that they had created with a known blog ID, which could then be used in the POST to trigger the XSS.

The XSS in Blogger was made easier to exploit by the error message for using a bad CSRF token being displayed on the same page as the XSS. When the error message is properly displayed on a separate page, reflected XSS that require a POST and are protected by CSRF protection are considered unexploitable, since it should be impossible for an attacker to know the CSRF token. However, this is not always the case. Many sites’ implementations of CSRF protection, including the majority of Google services, tie the CSRF token to a member’s account but not to a specific session of the account, making the token valid across sessions of the same account. With the ability to log a member into an account and to predict the CSRF token for the account, it becomes possible for an attacker to exploit these XSS as if they were unprotected.

YouTube XSS

On YouTube a member can create paid promotions for their videos. A promotion consists of a frame from the video and three lines of text. On the second step of the promotion creation process, the “Write your Promotion” page, a member is given three frames from their video to choose from and three text fields to enter their three lines of text. When the form is submitted, if the text fields contain invalid characters such as html/javascript, an error is returned. On the error page the value of the first text field was used unsanitized in the title and alt attributes of the promotion’s image. YouTube’s forms are protected against CSRF and the error message for using a bad CSRF token is displayed on its own page. However, because YouTube’s CSRF tokens are valid across sessions of the same account, it was possible for an attacker to exploit this XSS by logging a member into an account that they control.

Since being notified of these XSS, Google has fixed the issues. Google has also started deploying protection to prevent the exploitation of auth URLs. The protection has already been deployed at Gmail and Google is looking to extend it to other services.


Twitter XSS

On every Twitter page a member’s language preference is used as a variable in the Google Analytics code. For members who had not yet set a language preference it was possible for an attacker to set it temporarily by using the URL:

The value would be used in the Google Analytics code unsanitized.

Since setting any of the profile settings also sets a language preference, and since setting their profile settings is the first thing most Twitter members do after registering, very few members would have been vulnerable to this XSS.

Twitter, like many sites that have implemented CSRF protection, did not extend the protection to its login page, allowing login CSRF attacks. With a login CSRF attack it would have been possible for an attacker to exploit the XSS by first logging a member into an account that had not yet had its language preference set. However, since using login CSRF destroys a member’s session, this attack would have had limited exploitability.

Twitter has a “Remember me” feature on its login page that, when used, remembers a member’s session after they have shut down their browser. Different sites implement this feature in different ways. Some sites set the same session cookies but make them persistent if the feature is used; other sites, such as Twitter, set a unique persistent cookie in addition to the session cookie. If an attacker used a login CSRF attack against a member who had logged in to Twitter using the “Remember me” feature, and the attack’s forged login did not use the feature, the member’s session would be overwritten but their “Remember me” cookie would not be. The attacker could then exploit the XSS and either steal the cookie or use it to log the member back into their own account and continue with the attack.

Since being notified of the XSS, Twitter has fixed the issue and has extended its CSRF protection to its login page.


Clickjacking & OAuth

May 4th, 2009

This post details clickjacking and how it poses a serious security threat to OAuth service providers.


Clickjacking is when a visitor to a web page is tricked into clicking on an element that they believe to be harmless when in reality they are clicking on an element on a different website that exposes protected data or grants an attacker access. There are a number of ways to implement a clickjacking attack, but the most common way is to load the target website in a transparent iframe. The iframe is then positioned so that the target element that the attacker wishes a visitor to click on is positioned over a dummy element on the page that the iframe is contained on. Because the iframe is given a higher stack order than the dummy element, when a visitor clicks on the dummy element they are actually clicking on the hidden transparent element.

You can read more on clickjacking from Robert Hansen and Jeremiah Grossman here.


In 3-legged OAuth, as the result of an action taken by a User, a Consumer requests a Request Token from the Service Provider and then passes that Request Token to the Service Provider’s Authorization URL through redirection. The Service Provider then displays a page prompting the User to approve or deny the Consumer access.


In this example Faji is the Service Provider and Beppa is the Consumer. If Beppa’s developers were malicious they could use a clickjacking attack against Faji’s approval page to trick users into granting their application access.



From the user’s perspective the link appears to be harmless, but in reality clicking it will grant Beppa access.

This is a basic example, however with a little social engineering it becomes trivial to get a user to click on the dummy element and have the attack go undetected.


There are two solutions to protect against clickjacking, each with its own issues.

Service providers can use frame busting scripts to prevent their approval page from being framed. However, due to Internet Explorer’s support of a security="restricted" attribute on frames, frame busting scripts can be disabled in IE. For IE8 Microsoft has announced support for an X-Frame-Options HTTP response header which can be used by service providers to prevent their approval page from rendering in a frame. However, IE8 is not yet widely used. One workaround is to require that Internet Explorer users have javascript enabled; however, this comes with its own set of issues.

Service providers can require that users authenticate themselves before being shown the approval page, even if they are already signed in to the service. By doing so, framing the approval page becomes useless, since a user’s credentials are not known to Consumers. This can be an inconvenience for some users; more importantly, by conditioning users to enter their credentials each time they are redirected from a Consumer, it can increase the potential for phishing attacks. Service providers that choose this solution should educate their users about phishing attacks and should provide mechanisms that make it easier for users to confirm the authenticity of their site.


At the time of this post all service providers had been notified.