Ask HN: Is Google phasing out Authenticator/TOTP?
51 points by prometheon1 on Feb 24, 2022 | hide | past | favorite | 98 comments
I use TOTP for every site that supports two-factor authentication. When setting up 2FA for a new Google account, I can choose: SMS/call, security key, or Google prompt. I don't have a security key, I would prefer not to log into Google with my work account on my phone, and I would prefer not to be susceptible to SIM swapping. Is TOTP less safe than SMS/call?

Interestingly, Google's own Authenticator TOTP app still exists, but apparently you can't use it to set up 2FA for a Google account anymore: https://play.google.com/store/apps/details?id=com.google.and...



SMS is not reliable for 2FA; it's trivial for a determined party to sniff SMS messages. TOTP is the best way for typical users to do 2FA, as most people won't have a Yubikey or anything like that. Google Prompt is the equivalent of iCloud's cross-device prompt, where you must allow the action from another device that's already logged in to the account.

The Google Authenticator app isn't the only app that can be used to generate TOTP tokens, even though many sites directly refer to it. Anywhere that you are given a QR code to scan you can use any TOTP app you'd like. I use Authy personally because it allows me to back up my TOTP tokens behind a master password and access to my phone number, so in the event my phone is lost or replaced I'm able to restore 2FA access by going through the process to configure Authy again and re-enter my master password from another password manager.


As it becomes easier to emulate hardware tokens[1], Google may start limiting which ones it accepts. I believe they can use attestation keys to do that.

This is just a softer layer of security to slow down less sophisticated mass signup attempts.

Google may very well eventually phase out TOTP, under the justification that it is not as secure, but I would be shocked if they ever retire the highly insecure SMS verification.


> As it becomes easier to emulate hardware tokens[1], Google may start limiting which ones it accepts.

Why? I hope they don't, as I'm relying on my password manager to emulate a hardware token so I can finally log in to websites without needing a username/password.

At its core, FIDO2 is an authentication API, so the site can ask your browser to authenticate you (in whatever way the browser wants). If that's "talk to the password manager to authenticate the user using some fancy cryptography", why does the authenticating site care?

I'm looking forward to the day when my password manager only has one credential in it, my soft-FIDO2 private key.


> > As it becomes easier to emulate hardware tokens[1], Google may start limiting which ones it accepts.

> Why?

Likely not in the name of security, but as an extra speed bump for automated account creation.

The easier it is to create accounts in automated ways en masse, the more likely that system can be abused.

If you require SMS authentication, you can use that telephone number as a means to limit accounts being generated.

If you can restrict software emulating hardware, you're similarly increasing the barrier to entry to require hardware tokens too, increasing the cost of creating accounts used for fraudulent activity, and reducing the lower hanging fruit (e.g. spam) from being as profitable.


> If you require SMS authentication, you can use that telephone number as a means to limit accounts being generated.

If you require SMS authentication, you can use that telephone number to personally identify the individual. Privacy invasion justified in the name of security.


Why is it that Google and the tech giants would rather that you use highly insecure HTTP than a possibly insecure self signed certificate?

You use self signed certificates all the time with SSH. If you haven't seen an SSH key before, you don't fall back to telnet.

If you stop to think about it, it is incredible how much effort they put into forcing you to use the latest browser and not trust self-signed certificates. It is far easier to root your device and patch your kernel than it is to use an older browser.

Yes, it is a highly effective but clumsy heuristic to detect abuse. But I am convinced that they may have other incentives as well.


It's been a while since I used self-signed certificates, but the experience used to be just about right. You hit the page and the browser throws a big fat warning. You add an exception, basically acknowledging that this is a certificate you trust (the exception is for the cert, not any cert on this domain ever), and as long as the certificate doesn't change, your trust is at the same level it was on the initial load of the page.

The UX problem with self-signed certs is that you start expecting to accept them, so when that site asks you again to accept it while you are browsing in a cafe on a public WiFi, your browser would need to know that now you are on untrusted network and that you should better watch out.

Which is why LetsEncrypt came to be: it provides at least some chain of trust without any extended validation, which is a bit extra on top of self-signed certs.


> The UX problem with self-signed certs is that you start expecting to accept them, so when that site asks you again to accept it while you are browsing in a cafe on a public WiFi, your browser would need to know that now you are on untrusted network and that you should better watch out.

But again, should you watch out more than if you were using HTTP? Does your browser make you opt in to connecting to every HTTP site on an open wifi network? What about an HTTP captive portal on an open network?

I have not heard a good argument for the current behavior with self signed certificates that justifies the behavior of completely unencrypted connections.

The ideal behavior would be for your browser to make it clear that the connection is safe from third-party attacks, but that it can't verify the website. Perhaps leave the scary warnings for submitting something over a self-signed or unencrypted connection.


It's again a user expectation problem. What if you connect to a web site for the first time while on a rogue public network?

If users expect to be "safe" when on a secure site, without them understanding intricacies of certificates, self-signed is counter productive.

There are certainly improvements to be made to the experience, but none of that can explain all these nuances in a way a temporary visitor will read and grasp.

OTOH, it's easier to teach them "HTTP unsafe, don't type anything private".


> The ideal behavior would be for your browser to make it clear that the connection is safe from third party attacks, but that it can't verify the website.

If it can't verify the website, the connection is not "safe from third party attacks".


> Why is it that Google and the tech giants would rather that you use highly insecure HTTP than a possibly insecure self signed certificate?

There are legacy concerns that factor in here, where HTTP was the default, and HTTPS was a costly alternative.

Changing defaults can be expensive.


Chrome has been marking HTTP URLs as "not secure" in the URL bar for like three years.


The current implementation is idiosyncratic.

Out of the box, you will get a lot more resistance for using a self signed certificate than bare HTTP. At the very least, self signed certificates should be in the same security context as HTTP.

Our devices should opportunistically use encryption, even if validation is not available.

I had a client that wanted to use an Android tablet to monitor IP cameras on his local network, and it was virtually impossible to use TLS on zeroconf .local domains.

The official solution is to rely on the underlying network for security. Even though the webservers on devices and our browsers have TLS support.


If your password manager has control of both your password and your "2nd" factor auth, it defeats the purpose of it being a 2nd factor.

You are still protected from your password hash being stolen from the target website, decrypted and then used for log-in, but if password hashes were accessed, potentially a bunch of other stuff that you'd care about is too, so that's a somewhat moot point.

But someone stealing your laptop and getting access to your password manager gets access to your 2FA too, making it not a "second" anything: it's akin to using two passwords to log in to a single site and keeping them in the same place. Physical separation of the two authentication factors, thus, matters.

That also means that you should not have your password manager on the phone, or at least only have a separate one: ideally, password managers would integrate between desktops and mobile devices to pass short-lived access to passwords for Oauth/OpenID Connect auth instead.

Yeah, bridging convenience and security is a long standing nightmare of a problem to solve. :)


Yes, I know all that, and that's fine. I'm not looking to solve the problem of authenticating so I can launch nukes, I just want people to not steal my Twitter account.

A random thief stealing my laptop would have to:

1) Bother

2) Break my hard disk encryption/know the password

3) Break my password manager encryption/know the password

I think that's hard enough for someone wanting to get into my email. I use a separate hardware key to secure my domains, and that's about it.


That's fine, the only question is why use 2FA at all?

What's the attack vector you are protecting against that a good, non-reused password is not covering?


Keyloggers, stolen password databases, MITM, shoulder surfers, interception.


Yeah, that makes sense: I brought up stolen password hashes, but I generally disregard keyloggers/MITM/interception because I usually use trusted devices and network encryption (HTTPS), but not all sites do, and I can see how people might be forced to use untrusted devices.

Still, when you've got access to your password manager (to get your password and TOTP token too), you've got access to a trusted device too.

And there is still the possibility that someone (including a shoulder surfer) types in your password+token a bit faster than you and gets in: nobody bats an eyelid at being reprompted for another TOTP token.

You are also vulnerable to someone stealing your password manager password in this manner, especially with a cloud one (which is what most businesses require).

As a conclusion, it does grant you some extra protection against using password only, but when on a separate device, it's really another dimension.


I'd be interested to know more about how you use your password manager to emulate hardware tokens


I don't currently, I meant more in a "I'm hoping they will implement this".


Ahhh. I wonder if this is something I could cobble together with a few open source components.

I now have a weekend project....


It has always, from the very beginning, been possible to emulate hardware tokens.


I use 1Password for Google and it works fine. Follow the 'Google Authenticator' setup and don't believe the wizard -- it's not exclusive to Google Authenticator.


Check again—that option isn’t there these days


It was there last time I checked a few weeks ago. How else are you meant to set up an OTP with a Google account?


KeepassXC has pretty good support for 2FA. I use it for all my TOTP needs. On mobile, I also use Aegis, which is free open source and can backup your TOTPs.


The developers don't recommend storing your TOTP secrets in the same database as your passwords [1]. I used to do this, but now I'm using andOTP [2].

[1]: https://keepassxc.org/docs/#faq-security-totp

[2]: https://f-droid.org/en/packages/org.shadowice.flocke.andotp/


Bitwarden does TOTP as well if you give them $10/yr. It's handy as with most logins it will auto-fill UN and PW then put the code in your clipboard for you to just paste and hit enter.


Coinbase refuses to work with Authy.

I first heard about Authy when it was recommended to me by Coinbase, but sometime last year Coinbase forced me to change to Google Authenticator.


Smartphone secure enclave U2F is the best for users. TOTP is still easily phished.


I think phishing is an entirely different matter.

Any access is easily "phished" with pliable people (which is not necessarily a set of people, but also a question of timing and circumstances: everyone is sometimes more or less pliable): "please log in with your U2F device, download that document and upload it to this URL https://your-company-confidential.s3.amazonaws.myurl.com/, before we can reinstate your access to company systems".


It isn’t a different matter. It is the core matter. Phishing and stuffing completely dominate the actual attack space. SIM swapping and other theft of SMS messages is tiny in comparison.

The advantage of U2F is that it isn’t phishable. You can only sign the message for the pre enrolled URL.

Yes, you can still fall for more elaborate instructions but you cannot simply give the attacker your credentials through a normal looking flow.


Not sure what's "normal looking" in the flow where you are supposed to dictate/type-out a TOTP code to someone while not being allowed to use it to attempt a log in (and that they have <60s to make use of). "Be quick and type me out your TOTP code from your phone before it changes, darn, that one didn't work, let's try again".

I also disagree it's that far fetched to get people who'd do that to also do whatever else you want them to.

And while SIM swapping is minuscule in comparison, the big difference there is that there is no signal at all that you are under attack. With phishing, there is no way you are not feeling something is at least a bit off, so you know to check soon after, even if you've been compromised.


The phishing flow is precisely the same as the normal auth. You click a link. It takes you to evil.com that looks like your bank page. You type in your password. The system takes your password and starts an auth flow with the actual bank. You are shown a TOTP page. You type in your code. The system takes your code and completes the auth with your bank. This is 100% automated and the only observable difference is the URL.

After this happens it takes you to a “something went wrong” page and has a link back to your real bank website.

With U2F this is impossible, because you cannot sign a message for bank.com when visiting evil.com.
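A toy sketch of why that is (hypothetical names; an HMAC stands in for the token's real public-key signature, and this is not the actual WebAuthn wire format): the browser, not the page, records the origin it is actually on, so an assertion produced while the user sits on evil.com can never verify as one for bank.com.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the token's per-site private key.
KEY = b"per-site signing key held by the authenticator"

def browser_sign(challenge: str, page_origin: str):
    """The browser signs the challenge together with the origin it served."""
    # The page cannot lie about page_origin; the browser fills it in itself.
    client_data = json.dumps({"challenge": challenge, "origin": page_origin}).encode()
    sig = hmac.new(KEY, hashlib.sha256(client_data).digest(), "sha256").digest()
    return client_data, sig

def rp_accepts(client_data: bytes, sig: bytes, expected_origin: str) -> bool:
    """The relying party checks the signature AND the embedded origin."""
    good = hmac.new(KEY, hashlib.sha256(client_data).digest(), "sha256").digest()
    return hmac.compare_digest(sig, good) and \
        json.loads(client_data)["origin"] == expected_origin

# A relayed assertion signed on evil.com is cryptographically valid,
# but names the wrong origin, so bank.com rejects it.
relayed = browser_sign("bank-challenge-123", "https://evil.com")
direct = browser_sign("bank-challenge-123", "https://bank.com")
```

A TOTP code carries no such binding: the six digits a phishing page relays are indistinguishable from ones typed on the real site.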


You can set up 2FA through one of the other options first. After that, TOTP should be available as an option as well. After setting TOTP up, you can remove the other one.

They're probably hiding the TOTP option because the backup story for Google Authenticator is really poor. If you lose your device, you lose access to all of the accounts you had set up 2FA for with Google Authenticator. Of course, there are other TOTP apps that are better in this regard, but Google is unlikely to promote those because then they'd lose some control over the authentication flow.


I dislike how the only way to get TOTP codes off a device is by rooting. I had a half dozen codes that I had to dump from the sqlite db when I first used authenticator.


Thankfully this has been fixed. You can now choose "Transfer accounts" (from the menu in the top right corner) to copy everything to Google Authenticator on a new (or backup) phone.


Interesting! I ended up going with the LastPass authenticator in the end because of this issue but I am happy to see it fixed.


I use andOTP[1] installed via f-droid. You can easily get a backup file with your codes.

[1] https://github.com/andOTP/andOTP


Just FYI, it's possible to backup your codes on Android using Aegis too. No root needed.

https://github.com/beemdevelopment/Aegis


This!

Though now Google Authenticator supports transfers, there is less of a need.

But I still enjoy using andOTP.


I honestly was wondering about this exact thing for a while -- if you lose your phone, do you lose access to every account you have, irretrievably? (For that matter, with some apps it wasn't clear to me even what you do if you get a new phone, which everyone does eventually, right??) What's the way around all this? I have had trouble getting an answer to this question, so thank you for addressing it somewhat; apparently I'm not totally crazy!

(I am, somewhat shamefully, not yet a user of TOTP 2FA, although I will have to become one soon because of various platform and organizational requirements, and am trying to figure it out, and finding it challenging.)

What are the options/ways that other TOTP apps handle this better, what are my choices here?


This is indeed a big problem. Phone numbers are easy to transfer to a new phone when you drop yours in the toilet. This, however, makes them easier to attack.

The way to handle this correctly is to have backup options. This can be a second device or a printed sheet of backup codes. I’d wager that few people actually do this, though.


What is the mechanism for a "second device" backup option? How does that work, and which TOTP products support it?


The simplest case, applicable if you have access to more than one device that supports generating TOTP codes at the time you are setting up TOTP for a given site, is to simply scan the site's QR code on all the devices.

E.g., when I create an account at a site that uses TOTP I scan the QR code on both my phone and my tablet.

(Actually, I scan it twice on each device, using two different TOTP apps. That way I'm covered if one of the apps stops being supported).

If you don't have all your devices available at the same time, or you want to allow for the possibility of adding a new device later, you can simply save the QR code. When the site gives you the QR code, take a screenshot and save that in some secure fashion. I save it as an encrypted PNG file.

To add a TOTP device later view the screenshot and scan the QR code.

Most sites that give you a TOTP QR code have an option to get the code in text. As an alternative to (or in addition to) saving a screenshot of the QR code you can ask for the text code and save that. Securing a text file might be more convenient for some people than securing an image file, although using the text code to set up a new phone or tablet likely won't be as convenient as using a QR code.


Interesting, thanks!

I guess you need to keep these QR codes/texts as carefully/securely as you would any password....

This is all a lot of work, trying to figure out how to adjust to this world of 2FA.


The good news is that the QR codes and/or texts should be easier to secure than passwords.

You need the password every time you login to a site, so you need to find a way to securely store passwords that also allows easy frequent access to the password.

You only need the QR code when setting up a TOTP application on one of your devices which generally will only be once per device and then once whenever you replace a device. You don't need frequent easy access and so can pick a storage method that is optimized toward ease of initial storage.

For example if I've just signed up for site example.com and I've saved a screenshot of the QR code in example.png, I'd do this:

  $ gpg -ear tzs@mydomain example.png
  $ mv example.png.asc ~/OneDrive/totp-recovery
  $ rm example.png
and then I no longer have to think about it or do anything with it unless I need to set up a new device, which is very rare.


On my banking website I enroll my phone's TOTP app and my wife's TOTP app. If I drop my phone in the toilet I can authenticate with her phone.


Aegis, available in F-droid as well, is an open source solution that allows you to easily backup and restore your TOTP "devices" (or rather, keys). You'd have to transfer that backup manually between your old and new phone.

A more user friendly cloud solution might be Authy that others have recommended.


Thanks for the solution!


I replaced Google Authenticator with Aegis on Android long ago.

You can actually back up your keys (encrypted).

Also, they reverse engineered the Steam authenticator, so that's one less app to have on my phone.


I'd never thought to check until now, the AndOTP app can also do Steam.


Off topic, but I moved from Google Auth to Aegis[0] recently and Aegis is so much better in every possible way.

[0] https://getaegis.app/


And it's GPLv3 and on GitHub. And yes, when I migrated, it offered features G Auth didn't have, haven't checked since...


I haven't seen anything indicating TOTP is being phased out, but there are several reasons Google may be driving people to Security Keys (aka Webauthn, FIDO, u2f):

* TOTP (and SMS codes) can be forwarded, so you can be phished/duped into entering credentials on a spoof website. FIDO prevents this.

* TOTP (and SMS) may be grabbed from your phone by malware. This is harder with FIDO as it requires a physical button press.

* TOTP requires substantial user knowledge to use correctly. FIDO also has usability issues (you have to keep your key with you, endless USB-A vs USB-C issues) but maybe they believe it's better.


I usually point people to Authy. Nice app and alternatives are available of course. But more importantly they provide nice guides on how to set up 2FA for various websites. Including Google/gmail: https://authy.com/guides/googleandgmail/

Even so, non technical people get very confused by this stuff. As the CTO, I enforce 2FA for all of our stuff. But it continues to be a support headache when we have new people joining.

I'd love for this to be solved in a more user friendly & standardized way. So, I can understand why Google moved away from TOTP as the main way to do 2FA. It probably caused a lot of support overhead for them.

Hardware tokens have the same issue. Nice for techies but too hard to deal with for normal people.


Tangentially related but I made sure to store all of my TOTP codes in a secure, offline location so that I can quickly migrate my 2fa app to a new phone by generating QR codes.

Google has been mainly prompting me on other devices rather than asking for my TOTP code, however.


I'm irrationally quick to hate on anything Microsoft touches, but for TOTP their Authenticator app is very good. It holds my google TOTP, amongst others.

Allows you to import and export to CSV; the iOS app also has automated iCloud backup if you're so inclined.


I'm sorry, but what makes you think Authenticator can't be used when setting up 2FA for your Google account? That link doesn't show anything like that to me. And I used it to set it up about two months ago.


> I'm sorry, but what makes you think Authenticator can't be used when setting up 2FA for your Google account?

The fact that Google doesn't give him the option, presumably.


I don't think they are phasing out Authenticator - I still use it. However for your Google account they say:

>We recommend you sign in with Google prompts. It's easier to tap a prompt than enter a verification code.


I find it interesting that the 2FA methods it gives you are non-anonymous: SMS has a phone number, security key has a hardware ID, and Google Prompt has your IMEI/phone ID (and maybe number?) too.


Security keys' hardware IDs are either not accessible to applications or non-unique, for this reason.

A bigger security/privacy issue with U2F is that you cannot use it with javascript disabled.


TOTP is falling out of fashion because the underlying crypto component is just a plaintext secret that is distributed over the Internet and stored unhashed in a database, probably right next to the password hashes. There's a real argument to be made that it's not a real second factor but instead more like 1.5FA.


The storage of the secret on the server side isn't really relevant to the major wins that TOTP provides. A well-done implementation stops keyloggers and shoulder surfers, and for many people that aren't using password managers, it does add a significant level of security, without greatly inconveniencing the user.

Honestly, the alternatives are worse. For U2F keys, you need to remember to bring your dongle with you, support on phones/mobile devices is bad, some machines lock down ports, and there's a not-insignificant cost to buy one.


U2F keys aren't the only other option. Treating the phone as an always-connected hardware dongle with an app is another. There are others as well. Hardware tokens talking U2F/FIDO are just the only standards-compliant option that can be implemented totally in browser.


I don't think there's any validity to this. TOTP is falling out of fashion because it's not phishing-resistant, and, more importantly, because it has poor UX for ordinary users. This "plaintext secret" stuff has nothing to do with it.


I mean, I was an engineer who participated in those discussions when I worked at an IDP firm. The concern was that if we got popped and the password database was leaked somehow, with TOTP the attackers would have everything they needed to start logging in as users with weak passwords, but with public/private-key schemes they wouldn't. This was a very real concern that affected preference for public/private-key schemes (which included hardware tokens), and engineering effort to decrease the UX burden of hardware tokens specifically.

You might correctly call that concern more focused on CYA for the firm itself, but I think that's not the worst thing in the world, and the underlying crypto primitives certainly played a role here.


This doesn't even make sense as an objection. If you get popped, attackers are going to log in with whatever credentials you don't revoke. You're simply going to revoke all your 2FA credentials; retaining them, no matter what scheme was used to store them, would be malpractice.

Meanwhile: WebAuthn (nee U2F) was literally introduced as an anti-phishing tool --- not as a database protection technology. They did U2F because people kept getting phished with code-generator authentication. The problem with TOTP is that it doesn't have a mechanism to authenticate the site asking for the code.


Yes, you would revoke them as soon as you realize, but until you realize (and practically there's generally a delta there for a successful attack), in public-key 2FA schemes the attacker doesn't have enough to log in as those users, since they only have those users' public keys. For TOTP they very well might have everything they need for weak passwords.

If you can trend towards public-key based schemes, there are real benefits from the IDP's perspective. Yes, TOTP wasn't designed to help here, but from the IDP's perspective that just means you should gently shift users to 2FA schemes that do provide those semantics. And U2F/FIDO is not the only other option here, just the only one that can be implemented totally in browser for the client. Duo's phone push notifications are public-key crypto under the hood, for one example.


The question is why TOTP has lost favor. The reasons are that it has poor UX, and that it doesn't authenticate sites. That's pretty much it.


And I'm saying that as a former engineer at an IDP, the shape of the crypto played a major role in the forms of 2FA we favored, causing us to specifically de-emphasize TOTP in favor of public key schemes.

The calculus is different if you're just a regular website, but the question was pointed at Google, an IDP in their own right with many forms of public-key schemes implemented.


I really disagree with this idea. It reminds me of how developers try to prevent you from pasting passwords into password fields.

The benefit of using TOTP at all far outweighs the insecurity of storing it on the same computer. And it is so low effort, I use it everywhere possible.

I have a simple script that takes a screenshot, uses zbarimg to extract the single line otpauth:// string, and appends it to my pass password database[1], which then hooks back into my browser. I somehow don't think I would bother with a hardware key for my BestBuy.com account.
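For reference, the single-line otpauth:// string that zbarimg extracts is just a URI (per the de facto Key Uri Format), so a script like the one described mainly needs a small parser. A minimal stdlib-only sketch (field handling is an assumption; real URIs may carry extra parameters like algorithm, digits, and period):

```python
from urllib.parse import parse_qs, unquote, urlparse

def parse_otpauth(uri: str) -> dict:
    """Pull the label, secret, and issuer out of an otpauth:// key URI."""
    parsed = urlparse(uri)
    if parsed.scheme != "otpauth" or parsed.netloc != "totp":
        raise ValueError("not a TOTP otpauth:// URI")
    params = parse_qs(parsed.query)
    return {
        "label": unquote(parsed.path.lstrip("/")),
        "secret": params["secret"][0],           # base32-encoded shared key
        "issuer": params.get("issuer", [None])[0],
    }
```

For example, `parse_otpauth("otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example")` yields the base32 secret `JBSWY3DPEHPK3PXP`, which is what actually gets stored in the password database.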

Besides, my password db is itself encrypted with GPG, and GPG can use a hardware token to be unlocked, thereby giving you more flexibility and transparency while still benefiting from a token.

Developers can't know what configuration users have.

[1]: https://www.passwordstore.org/


Cool; when are we going to get rid of the "API key" then?


An API key is basically 1FA, not really sure why that's relevant.


By that account, passwords are exactly the same, so it's not even 1.5FA but only 1FA with each being 0.5FA.

The point of something being a second factor is for it to be physically distinct from the first factor, and to be a different kind of thing in the "something you know, something you are, something you have" triad. As long as you keep your TOTP device and password manager separate, they provide all the guarantees a second factor should.

As a bonus, they also provide one-time use tokens (both HOTP and TOTP, though HOTP is more susceptible to pre-generation of tokens to be used later), thus protecting against eavesdropping and people looking over your shoulder (or recording you).

As such, I don't buy the argument for TOTP not being a second factor auth at all.


Passwords and TOTP are both something you know.

And there are children comments here talking about how it's fine, they keep their TOTP keys in the same password database.


They are both "something you know" only if you keep them on the same device.

Yes, there are comments of people using them here that way. At my previous job, even a security IT guy did not find that an issue when I brought it up how people automated 2FA for CLI scripts. In that case, they are not even 1.5FA, they are strictly 1FA password-authentication (jumping through a few hoops to get all "components" of a password, but it's a single password protecting access).

If they are separate devices and you don't really remember your key, they are hardly something you know, IMO, so I disagree there.

In a sense, you could also extract the base cryptographic key of any hardware token (or the producer of the token might have it), and it's also "something you know", which is why I disagree. It's just trivial to extract this key for TOTP, but extraction is a separate step IMO.


Had to look it up and you're correct. Why on earth isn't it based on public key cryptography?


The backend needs to be able to produce the same code that you produce. If somebody could do that with a public key then you’d need to keep the public key a secret.

If you want to do it with a protocol that enables signed messages and other stuff then the UX cannot be “type in a few numbers.”


I don't see why it would have to be more numbers to type in than in the symmetric case. And yes, the public key would be the secret. Seems like a strict improvement on the way it currently works.


If the public key needs to be a secret then it is the same as it is now. If the attacker obtains either secret they win. The whole point of asymmetric crypto is that the public key doesn’t need to be kept secret.


Maybe I didn't express myself very well. The server stores the public key, instead of the current symmetric secret. So the new "secret" is the public key. It could be published of course but there is zero point to do so. So now if the database is exfiltrated by a hacker without anyone noticing, he still couldn't produce tokens to log in as a user.


> So now if the database is exfiltrated by a hacker without anyone noticing, he still couldn't produce tokens to log in as a user.

Why not?

With TOTP, you type in a short code. The server needs to also be able to generate this code so they can verify that your code is correct. If it only has the public key then it must be able to produce the tokens to login as the user.

If you want a more complex setup where the public key can only verify that the code is valid but cannot be used to create valid codes then the codes themselves (and the protocol) become considerably more complex and aren't going to be managed by a "type six digits into a web form" UX.

I also don't think this really matters in a major sense. If somebody pwns the credential database then continuing to rely on authentication to prevent further harm is a fool's errand. "The adversary only got to extract the credential database and achieved nothing else" is not an especially common scenario today.


Yeah you're probably right.


No… surely TOTP uses asymmetric cryptography, right?


The codes are the last six decimal digits of an HMAC over the key and the current time (rounded to some boundary, generally 30 seconds). The QR code is the account's metadata and the pre-shared key.
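In code, the whole scheme is only a few lines. A sketch per RFC 4226/6238 (HMAC-SHA1 with dynamic truncation), checked against the RFC test key `12345678901234567890` (base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: truncated HMAC-SHA1 over the key and time//step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at T=59 (counter 1) the 6-digit code is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> 287082
```

Note that both sides run exactly this computation: the server must hold the same plaintext key, which is the symmetry being discussed in this subthread.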


what's your preferred better option(s)? Just hardware keys? (Other comments in this thread seem to be suspicious of hardware keys too!)


An app on a phone that uses the security module, like how Duo works, is nice for most use cases, and FIDO/U2F dongles work well for the vast majority of remaining use cases.


I always store the secret key for TOTP before I use the QR code. I have a command line app I wrote that can generate OTP codes using the same secret key, so as a back up I can always generate codes with my desktop or phone.


Last time I had to do this (last November): for Google itself, you need to set up SMS 2FA, and then you can add an additional TOTP device and remove the SMS 2FA. A bit annoying, but it worked for me for my work account.


We issue security keys to staff who don't want to use a personal phone for work.

(I.e. they have the option: use their personal phone, or carry this Yubikey.)


You can add a security key (a real or emulated one), then add TOTP, and afterwards remove the security key.


Yes, but why not just offer it in the first place? I had the same impression when I had to jump through those hoops.


Because it is easy to automate TOTP, and it is probably not effective at slowing down signup attempts.


SMS is far superior from Google's perspective --- your phone number also reveals your identity.

Privacy invasion in the name of security. TOTP doesn't offer this and is clearly to be discouraged. Unless you accept the privacy invasion, they really don't want you to have an account.


I configured a Gmail account for 2FA yesterday with Google Authenticator. Had no problems.


Please use Microsoft Authenticator instead: https://www.microsoft.com/en-us/security/mobile-authenticato....



