Someone Created a Tor Hidden Service to Phish my Tor Hidden Service

SMS Privacy is available as a Tor hidden service, and it turns out ~10% of users actually use it that way. This post details what I found when somebody created a phishing clone of my Tor hidden service.


Charlie was poking around the other day and found that a Google search for “site:* smsprivacy” turned up an unexpected result.

smspriv6fynj23u6.onion is the legitimate hidden service name, but there is another result with a website that looks identical: smsprivyevs6xn6z.onion.

A brief investigation showed that the site was a simple proxy: page requests sent to the phishing site were forwarded on to the real hidden service, and responses forwarded back, apart from a few idiosyncrasies:

The Content-Length header is missing

The Content-Length header tells the HTTP client how many bytes of content to expect. A dumb proxy server that intends to pass the content through unchanged could simply pass the Content-Length header through unchanged as well: the length can’t change if the content doesn’t.

That the proxy server thinks the length of the content might change implies that the server is prepared to modify the content in some circumstances.

But why doesn’t it just write a Content-Length corresponding to the modified version of the content?

Possibly to reduce page load times: if the proxy doesn’t need to know the length ahead of time, it can stream content directly to the client as it receives it, modifying it as it goes. If it had to read all the content, then perform its modifications, and then send everything on afterwards, it might increase the page load time by enough to arouse suspicion.

Possibly the author considered storing all of the content to be an unacceptably-high memory load. If the same server is proxying dozens to hundreds of other hidden services, this might be a reasonable concern.

The Connection header is wrong

Here is a comparison of the response headers:

$ torsocks curl -I https://smspriv6fynj23u6.onion/
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Fri, 13 Oct 2017 05:37:49 GMT
Content-Type: text/html;charset=UTF-8
Content-Length: 7387
Connection: keep-alive
Set-Cookie: [...]
X-Frame-Options: DENY

legit site

$ torsocks curl -I https://smsprivyevs6xn6z.onion/
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Fri, 13 Oct 2017 05:37:57 GMT
Content-Type: text/html;charset=UTF-8
Connection: [object Object]
Set-Cookie: [...]
X-Frame-Options: DENY

phishing site

The Connection header is rewritten from keep-alive to [object Object]. This is what you get in JavaScript when an object that doesn’t override the default toString() is coerced to a string. This could be a big clue towards working out what software the proxy server is running.

Most likely it uses Node.js. I couldn’t spot anything that would cause this bug in either node-http-proxy or Harmon (middleware for node-http-proxy to modify the response). It could be something completely custom of course. If you know any software that has a bug that sets the Connection header to [object Object], please let me know what it is.
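The coercion itself is easy to reproduce. A plausible (entirely hypothetical) reconstruction of the bug: storing a parsed header *structure* where a plain string was expected, then serialising headers with string concatenation.

```javascript
// How "[object Object]" happens: a plain object coerced to a string
// falls back to the default Object.prototype.toString.
const upstream = { Connection: 'keep-alive' };

// Hypothetical bug: a header value ends up wrapped in an object...
const parsed = { value: upstream.Connection };

// ...and is later serialised by concatenation, coercing it to a string.
const headerLine = 'Connection: ' + parsed;

console.log(headerLine); // "Connection: [object Object]"
```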

There is some unexpected caching of javascript files (and possibly others)

I added some JavaScript to detect whether the page was running on a rogue domain and, if so, to POST the document.referrer to me to peruse later. I found that changes to my script were picked up in the browser when using the legit site, but an out-of-date version was served when using the phishing site, so I believe the phishing site is doing some extra caching. This could again be with a view towards reducing page load times.
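A minimal sketch of that kind of check (the report endpoint and the exact reporting mechanism here are placeholders, not my actual script):

```javascript
// Hostnames the page is legitimately served from. The clearnet
// hostname is omitted here; only the onion address is shown.
const LEGIT_HOSTS = new Set([
  'smspriv6fynj23u6.onion',
  // ...plus the site's clearnet hostname(s)
]);

// True if the page is being served from an unrecognised domain.
function isRogueHost(hostname) {
  return !LEGIT_HOSTS.has(hostname.toLowerCase());
}

// In the browser this would run something like:
//   if (isRogueHost(location.hostname)) {
//     navigator.sendBeacon('/report', JSON.stringify({   // hypothetical endpoint
//       referrer: document.referrer,
//       host: location.hostname,
//     }));
//   }
```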

I tried to investigate this caching while writing this post, and found something even more interesting! The proxy is now dropping all of the content from my tracking script so that I can’t get any more information about them. This is easily fixed by renaming the script and modifying the content slightly, but that’s a game of cat-and-mouse that I don’t intend to play. At minimum, it implies that somebody is actively watching this proxy and taking steps to keep it working.

The hidden service address is changed

The proxy appears to rewrite all instances of smspriv6fynj23u6.onion to smsprivyevs6xn6z.onion. Interestingly, though, it does not do the same for uppercase instances of the address.
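That behaviour is consistent with a naive global string replace, which in JavaScript is case-sensitive by default (a hypothetical reconstruction, but it matches what’s observed):

```javascript
// A case-sensitive global replace: lowercase occurrences of the
// address are rewritten, uppercase copies survive untouched.
const from = 'smspriv6fynj23u6.onion';
const to = 'smsprivyevs6xn6z.onion';

const page = 'Visit smspriv6fynj23u6.onion or SMSPRIV6FYNJ23U6.ONION';
const rewritten = page.split(from).join(to);

console.log(rewritten);
// "Visit smsprivyevs6xn6z.onion or SMSPRIV6FYNJ23U6.ONION"
```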

The Bitcoin addresses are changed

This is the true purpose of the phishing site. Normally a phishing site would exist to harvest user account credentials which can then be used or sold later, but this site takes the much more direct approach of simply rewriting the Bitcoin addresses to addresses controlled by the fraudster.

When navigating to the payment page for the first time, there is a modest delay before the page loads, presumably while the backend generates a new Bitcoin address. (That this takes a noticeable amount of time implies the address is being inserted into a huge database that lacks indexes, or generated on a slow machine, or generated by code written in a slow language. In the latter case, it’s also not unlikely that the RNG is insecure.) All Bitcoin addresses displayed in text are rewritten to addresses controlled by the attacker, and there appears to be a 1-to-1 mapping between legitimate addresses and fraudster addresses. Notably, the QR code remains unchanged and still decodes to the legitimate address.

I sent a payment to one of the fraudster’s addresses just to see what would happen: 1GM6Awv28kSfzak2Y7Pj1NRdWiXshMwdGW. It just didn’t show up on the site, which adds more credence to the theory that the site is mostly a dumb proxy. The money hasn’t been spent yet, but it might be interesting to see where it goes if it does get spent.
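The observed 1-to-1 mapping suggests the proxy keeps a persistent table from each legitimate address to a fraudster-controlled one. A sketch of that logic (all names here are assumed; generateFraudAddress() stands in for whatever real, possibly slow, key generation is used):

```javascript
// 1-to-1 substitution table: each legitimate address maps to exactly
// one fraudster address, generated on first sighting.
const addressMap = new Map();

let counter = 0;
function generateFraudAddress() {
  // Placeholder: a real implementation would derive a Bitcoin keypair,
  // which would account for the one-off delay on the payment page.
  counter += 1;
  return 'FRAUD_ADDRESS_' + counter;
}

function substitute(legitAddress) {
  // First sighting is slow (address generation); repeat sightings of
  // the same address return the cached substitute instantly.
  if (!addressMap.has(legitAddress)) {
    addressMap.set(legitAddress, generateFraudAddress());
  }
  return addressMap.get(legitAddress);
}
```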

How is the site distributed to users?

I saw a few different results come back from the JavaScript that POSTs the referrer off to the server when viewed on an unrecognised domain. Mostly it was people viewing the hidden service via a web-to-onion proxy, but I did spot 2 hidden services:

  • 7cbqhjnpcgixggts.onion: “The onion crate”: this is a list of Tor hidden services. Like an olden-days “web directory”, but for Tor. The phishing version is marked quite prominently as “Phishing link”.
  • hss3uro2hsxfogfq.onion: “not Evil”: this is a search engine for Tor hidden services. A search for “sms privacy” brings up the legitimate site at the top and the phishing site second. I clicked the “report abuse” button next to the phishing site but it hasn’t been removed yet.

This doesn’t explain as much as I hoped it might. I was hoping to find a tweet, blog post, or similar where somebody gives out the phishing link instead of the real one. It’s unlikely that the people behind “The onion crate” are responsible for the phish: if I were trying to get people to use my phishing site, I wouldn’t mark it as “Phishing link”. It’s possible that the people operating the “not Evil” search engine are the perpetrators, although still unlikely: if I were operating a search engine with the intention of sending people to phishing links, I wouldn’t include the non-phishing links at all, certainly not as the first result.

It’s possible that the actual phishing campaign is yet to begin, although note that “The onion crate” dated its “Phishing link” annotation 2017-05-17, which implies that the site has existed for a fair while.

Who is responsible?

Most likely it’s a run-of-the-mill cyber-criminal who has written a proxy that replaces Bitcoin addresses with his own addresses, generated plausible-looking hidden service addresses for various hidden services, and is sitting back waiting for the money to roll in.

At first I thought it might be an intelligence service hoping to spy on SMS Privacy users. But if you were hoping to do some covert surveillance, you wouldn’t alter the Bitcoin addresses to the extent that the site no longer works. I suppose it could be designed to spy on a specific subset of users, and act like a phishing site to everybody else, but I think the “run-of-the-mill cyber-criminal” explanation is more likely.

Phishing hidden services is a much easier job than phishing traditional websites because there is (by design) no easy way to locate the hidden service server, and there is no centralised naming system, which means even the legitimate sites have random characters in the address. Getting plausible-looking addresses is comparatively easy. And even after your phishing site gets detected, nobody has the power to revoke your domain name or shut down your hosting. It’s the perfect crime. The only downside is that the userbase tends to be a lot more tech-savvy, and is not so easily tricked, relative to the general population.

How can customers protect themselves?

SMS Privacy customers should make sure they’re browsing either over HTTPS or, if using Tor, via smspriv6fynj23u6.onion, which is the only legitimate hidden service. Anything else is almost certainly harmful in one way or another.

Has anyone been tricked?

I’ve never received any emails from people complaining that their payments have gone missing. (Well, that’s not strictly true. But in each case it turned out to be my fault, not a user inadvertently browsing a phishing site). So if I had to guess, I’d say that no users have ever been tricked. Certainly not a large number of users.

Further investigation

I guess the software running this proxy also proxies many other hidden services. Once you’ve written some code to proxy a hidden service, rewrite the domain name to your own, and rewrite Bitcoin addresses to your own, you’re basically done. You can slap it in front of as many upstream hidden services as you like for almost no extra cost, so I would be very surprised if the fraudster is not doing that.

In fact, it might be interesting to find other Tor hidden service phishing sites and see if they share the same idiosyncrasies: a Connection: [object Object] header, a missing Content-Length, rewriting the hidden service address in lowercase only, a small-ish delay when a Bitcoin address is first displayed, and a 500 response for unknown hostnames.

It might also be interesting to try to probe for vulnerabilities and see if we can find out the full list of hidden services that are being proxied. There’s a fair chance that the hostname selection is being done in the proxy code, which would mean that asking for the hostname of a different phishing site might return the content of the other phishing site! Which would be a very strong indicator that they’re running on the same machine.


It was quite interesting to find somebody actively doing this, and a little thought shows just how easily it can be done on a large scale: a basic working version could comfortably be implemented in a weekend. I wouldn’t be surprised if a lot more hidden service phishing sites crop up in the future.