I’ve reported this as a radar, which you can dupe: rdar://34885659 👍
Sometimes iOS shows the following notification on the lock screen, which opens the iCloud settings screen when tapped. This is a much better approach than asking for the password directly:
Showing a dialog that looks just like a system popup is super easy: there is no magic or secret code involved, it’s literally the example code from the Apple docs, with custom text.
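To illustrate just how little is needed, here is a minimal sketch of such a dialog using nothing but a stock `UIAlertController`. This is my own illustration, not the withheld popup code; the title and message strings mimic the real system prompt, and the Apple ID shown is a placeholder.

```swift
import UIKit

// A fake "Sign In to iTunes Store" prompt built entirely from public UIKit
// API. No private API, no magic: just an alert with a secure text field.
func showFakeSignInPrompt(from viewController: UIViewController) {
    let alert = UIAlertController(
        title: "Sign In to iTunes Store",
        message: "Enter the password for your Apple ID \"user@example.com\".",
        preferredStyle: .alert
    )
    alert.addTextField { field in
        field.placeholder = "Password"
        field.isSecureTextEntry = true
    }
    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
    alert.addAction(UIAlertAction(title: "OK", style: .default) { [weak alert] _ in
        // This is the dangerous part: a phishing app would forward this
        // string to its own server instead of printing it locally.
        print("Captured:", alert?.textFields?.first?.text ?? "")
    })
    viewController.present(alert, animated: true)
}
```

Because the alert is rendered by the exact same API the system uses for legitimate prompts, there is nothing visual for the user to spot.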
I decided not to open source the actual popup code; however, note that it’s less than 30 lines of code, and every iOS engineer would be able to quickly build their own phishing code.
Imagine if everybody read this before posting a comment on HackerNews/Reddit #oneCanDream 🙂
But I have 2-factor auth enabled, I’m safe, right?
Good for you! Everybody should obviously use 2-step verification, yet many people don’t. And even if your Apple account is 2FA protected, many users reuse the same username/password combination across most web services, meaning that if attackers know your Apple ID password, chances are high they’ll try the same combination on other common services.
Also, even with 2FA enabled, what if the app asked you for your 2-step code? Many users would gladly type in the freshly received token, which the app could then pipe straight to a remote server.
Apple would never accept such an app, right?
Apple is doing a great job protecting users from dangerous third-party apps; that’s why the App Store is built and provided the way it is, and why we code-sign our applications (not really, but kind of).
However, it’s rather easy to run certain code only after the app is approved. These are not new ideas, but just to give you a few examples:
- Use the iTunes Search API to compare the app’s current version number with the App Store version number (example request); this way the app can automatically enable malicious code once it has been approved.
- Use a remote configuration tool to enable a feature only after the app has been approved by Apple.
- Use a time-based trigger: just skip running certain code for the first week after submitting the binary, so the code only runs once the app has been either approved or rejected.
- Pull an Uber and don’t run certain code when the device’s location is near Cupertino (this has probably been fixed by Apple by now).
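The first trick in the list can be sketched in a few lines of Foundation code. This is an assumed illustration, not code from any real app: `com.example.app` is a placeholder bundle identifier, and the helper names are mine.

```swift
import Foundation

// Pure helper: the gated code switches on once the App Store version has
// caught up with (or passed) the version this binary was submitted as.
func isUpdateLive(localVersion: String, storeVersion: String) -> Bool {
    storeVersion.compare(localVersion, options: .numeric) != .orderedAscending
}

// Query the public iTunes lookup endpoint for the live version of a given
// bundle identifier and report whether the submitted build is now live.
func checkIfApproved(bundleId: String,
                     localVersion: String,
                     completion: @escaping (Bool) -> Void) {
    let url = URL(string: "https://itunes.apple.com/lookup?bundleId=\(bundleId)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard
            let data = data,
            let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
            let results = json["results"] as? [[String: Any]],
            let storeVersion = results.first?["version"] as? String
        else {
            completion(false) // offline or not on the store yet: stay dormant
            return
        }
        completion(isUpdateLive(localVersion: localVersion,
                                storeVersion: storeVersion))
    }.resume()
}
```

The point is how little infrastructure this takes: one unauthenticated GET request against a public endpoint, and the app knows whether review is over.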
The techniques above are public knowledge; most iOS developers are aware of them, and I strongly advise against using any of them. Apple will eventually catch you and block your account.
The point of this list is: while the review process provides a basic safety filter, organisations with bad intent will always find a way to somehow work around the limitations of a platform.
Phishing on mobile? Is that a thing now?
This area will become more and more relevant as long as users remain uninformed and mobile operating systems don’t yet clearly separate system UI from app UI. This is related to detect.location, where apps write their own custom image picker to provide a better “experience”, but in doing so also gain full access to your image library, and optionally also your camera (related to watch.user).
iOS should very clearly distinguish between system UI and app UI elements, so that ideally it’s obvious even to the average smartphone user that something seems off. This is a tricky problem to solve, and web browsers are still tackling it: there are still websites whose popups mimic macOS / iOS dialogs, and many users mistake them for system messages.