There are interesting data security companies out there right now. For instance, Matthew Green is doing Zeutro, an ABE company. Think of ABE as Shamir's Secret Sharing on steroids: you can encrypt data and delegate it out to different people based on boolean expressions. That at least addresses a fundamental problem in data center encryption (the fact that server-side data encryption is "all or none" with respect to applications).
This, though? I assume the announcement means VGS is doing great in the market. Congratulations, I guess?
VGS also handles compliance and audits, assumes liability, takes custodianship of the data, and favors convention over configuration (unlike most tokenization products), which makes for a simple integration.
If you're looking for someone to offload that burden to and get you compliant quickly, without getting mired in the world of compliance yourself, it's a solid offering.
"This is nothing new", "I could build this myself", "you have to install something, nobody does that", etc. etc.
The key to a successful company is not being first, not (only) having a great technical solution, and not having tech no one else has.
The key is an umbrella of technology, business sense, marketing ability, salesmanship, and much more. Andreessen Horowitz probably see the whole umbrella, and not only the tech.
Note that he didn't make any of these criticisms. What he did say is that there is little information about what the company's innovation is, and that to public appearances what the company is doing is not novel. He then went on to compare the company to another one with an impressive team of cryptographers behind it, who developed an impressive, novel solution to the problem at hand. Further, he made this point from a position of domain expertise in the field.
Frankly I don't think raising the spectre of famous "wrong" comments about Dropbox constitutes a meaningful response to his point. He outlined a substantive critique. In order for a criticism to be a middlebrow dismissal it has to be both middlebrow and dismissive. What you're responding to isn't, and doesn't fit the template you're invoking.
But the crucial difference is that instead of storing sensitive data in plaintext ourselves and then sending out access tokens, we manage an OpenPGP PKI/web-of-trust for you behind the scenes so that we're only storing encrypted data, and only the token (which we never see in its entirety) can decrypt it.
End-to-end encryption is much harder to implement for these kinds of use cases than simple tokenization, but there's also the huge benefit of not needing to trust your storage layer.
With credit cards, for example, an approach like this could hypothetically remove PCI-compliance as an issue entirely because no one is actually storing the cc # in the clear. To me this is a lot more interesting than simply shifting the burden of trust. That said, anything is better than our current status quo of spraying secrets all over the place.
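To make the "token is the only thing that can decrypt" idea concrete, here's a toy stdlib-Python sketch. To be clear, this is my own illustration, not the actual OpenPGP design described above: a one-time-pad XOR stands in for real encryption purely so it runs without dependencies, and the `VAULT` dict stands in for the storage layer.

```python
import secrets

# Toy sketch: the "vault" stores only ciphertext; the token holds the key.
# One-time-pad XOR stands in for real encryption (OpenPGP in the scheme
# described above) -- it is NOT a production cipher, just an illustration.

VAULT = {}  # record_id -> ciphertext

def tokenize(secret: bytes) -> str:
    key = secrets.token_bytes(len(secret))  # pad as long as the secret
    record_id = secrets.token_hex(8)
    VAULT[record_id] = bytes(a ^ b for a, b in zip(secret, key))
    # The token combines id + key; the server never keeps it, so the
    # vault's contents are useless on their own.
    return record_id + ":" + key.hex()

def detokenize(token: str) -> bytes:
    record_id, key_hex = token.split(":")
    key = bytes.fromhex(key_hex)
    return bytes(a ^ b for a, b in zip(VAULT[record_id], key))

token = tokenize(b"123-45-6789")
assert detokenize(token) == b"123-45-6789"  # round trip works
```

The point of the sketch is just the trust split: compromise the vault alone and you get ciphertext; compromise a token alone and you get one record's key but no ciphertext.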
Different niche than VGS, which, again, is taking a novel approach to securing sensitive information. You can tell from their solution that the founders have had real-world experience: using a proxy to mask and reveal sensitive information.
Something I appreciate very much about running in the cloud is being able to use the control plane’s APIs to authenticate requesters (e.g. Kubernetes API + Service Accounts or AWS IAM + Instance Roles). Does EnvKey have anything in the way of that?
Regarding PCI compliance: if card data is encrypted, the scope of compliance simply moves over to the keys :-)
We're definitely interested in other authentication approaches that leverage pre-existing cloud credentials/roles, but for now are sticking with the simplicity/universality of an environment variable, since just about every platform supports them, and access is generally coupled to server-level access.
Integrating with Kubernetes in particular is very straightforward--you just set an ENVKEY secret, expose it as an environment variable to your pods, install envkey-source[1] in your containers, and then run a single line `eval $(envkey-source)` to inject your config. Or to make it even simpler, one of our users has figured out an approach that avoids the need to install envkey-source/eval it in your container at all[2].
"Regarding PCI compliance: if card data is encrypted, the scope of compliance simply moves over to the keys :-)"
True enough! This could also be said about e.g. your Stripe secret key, though you're still less screwed by having this exposed than losing the cc numbers directly.
1 - https://github.com/envkey/envkey-source
2 - https://medium.com/@dmaas/add-envkey-to-a-docker-app-in-kube...
I get that you are not storing it in the clear, but what if I actually have to use it?
I interviewed and was offered a job at this company. I turned it down because they had some of the most morally bankrupt leadership I have ever seen in a startup. Frankly, it made me less likely to interview with YC companies at all.
Just a quick list of giant red flags-
1. They are violating visa laws by having their employees in Ukraine lie on their applications and say they are coming into the US for tourism instead of business.
2. The Ukrainian developers they get out here are kept on their Ukrainian salary, with a small stipend for housing. So they get to live in the Bay Area on an Eastern European salary.
3. Their CEO actually bragged to me about how little they were paying the only female developer they had in the office. He thought it was hilarious.
4. When they made an offer they refused to tell me how many shares had been issued for the company or what percentage the offer included, making their offer completely impossible to decipher. It was also about 15% lower than the numbers they had discussed with me beforehand.
If I was an investor in this company I would demand the removal of the CEO and put their CTO in charge.
Edit: apparently the key part is "no salary/income from a U.S. based source". So maybe they're in the right.
In practice, if they're only staying for a few months I don't know if USCIS will care, unless they get caught red-handed.
I wouldn't want to argue with USCIS about the letter of the law though, they'll deport first, ask questions later.
https://travel.state.gov/content/dam/visas/BusinessVisa%20Pu...
Edit: I've sent an e-mail about it.
What kind of software engineer in their right mind would want to stay in the US illegally (I don't think one can get any long-term tourist visa?) _and_ get paid peanuts? Even if they really want to live in the US, being poor sounds like a very strange sacrifice.
Unless one's a junior developer (where I've heard it's hard to compete these days), as far as I know there are a lot of realistic options for finding a legal immigration avenue to a first-world country (probably not to the US, though) - so why do something stupid like that?
I have fond memories of that time and I would revisit... but I'll wait until things return to how they were back then. (Start at "no TSA" and go from there.)
Also, it opens up networking opportunities for said developers, maybe they get a better remote contract afterwards.
This is not a technical problem, it's a usability problem. We have had the cryptography necessary to technically fix this for a long time. Replace the single human-memorable token (SSN) with a unique public/private key pair. Then you provide safe authentication by signing verification messages with your private key without placing that private key into the hands of a centralized vendor (like Very Good Security).
The obstacle to this solution is 1) buy-in, to either get the government to do this or to bypass it with this solution in private industry, and 2) usability, to abstract as much of the technical signing process away from the user as possible. But this is a better solution. From what I can understand of Very Good Security's website, it's just more of the same. It wants to become the secure gatekeeper of sensitive data instead of developing a novel means of obviating that problem entirely.
The real company to fund is one which takes inspiration from an existing cryptographic protocol - like ApplePay's or AndroidPay's - and expands it to handle identity verification and one-time payment authorization without requiring an SSN or canonical credit card.
Judging by how people get along with cryptocurrency wallet software, key management is a hurdle that many will fail to clear.
The ideal solution would look like the ApplePay protocol - there is a PKI and cryptographic authentication, but users (and receiving vendors) never need to know what a digital signature even is. I agree with you that trying to get users to handle their own key management is a complete non-starter.
There are governments working on solutions to give each citizen a certificate. What I would love to see is the ability to issue your own sub-identities that expose only as much information as you want/need to share for a specific use case. E.g. if you need to make $20k/yr for a new mobile phone plan, you can issue an identity that attests only that you make at least $20k/yr.
https://privacybydesign.foundation/irma-en/
https://privacybydesign.foundation/irma-explanation/
https://petsymposium.org/2017/papers/hotpets/irma-hotpets.pd...
edit: links
What do you think of Vinny Lingham's company that is aiming to do something similar?
- If H is a simple cryptographic hash function, it's not resistant to brute-force attacks to recover the SSN
- It's not revocable
What we need is something more akin to a credit card number: an abstraction layer. It might even be implementable as a UUID. If you need to revoke it, you can do so, since it's not cryptographically tied to anything.
Failing that, a base32-encoded random string (without = padding) with an optional checksum would do the trick.
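Both halves of this are easy to demonstrate in stdlib Python. The brute-force demo below is shrunk to 4 digits so it runs instantly, but the real SSN space is only 10^9, which is just as enumerable for an attacker with a GPU; the alias generator is one possible reading of the base32 suggestion, with the byte length chosen arbitrarily.

```python
import base64
import hashlib
import secrets

# Why a bare hash of an SSN is weak: the input space is tiny, so an
# attacker just enumerates it. (4-digit demo standing in for 10^9 SSNs.)
target = hashlib.sha256(b"1234").hexdigest()
recovered = next(
    f"{i:04d}" for i in range(10_000)
    if hashlib.sha256(f"{i:04d}".encode()).hexdigest() == target
)
assert recovered == "1234"   # "hashed" SSN recovered by brute force

# A revocable alias as suggested above: random bytes, base32, no '='
# padding. It isn't derived from the SSN, so it leaks nothing and can
# simply be reissued if compromised.
def new_alias(nbytes: int = 10) -> str:
    raw = secrets.token_bytes(nbytes)  # 10 bytes = 80 bits = 16 chars
    return base64.b32encode(raw).decode().rstrip("=")

alias = new_alias()
assert len(alias) == 16 and "=" not in alias
```

Picking a multiple of 5 bytes (like 10) means base32 produces no padding in the first place, so the `rstrip` is belt-and-suspenders.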
We offer a variety of format-preserving aliasing algorithms. Only legacy systems tend to choose the SSN format, typically because they have fixed-width columns in their RDBMS that are difficult to change (imagine petabytes of data).
The idea behind format-preserving aliases is actually based on the NIST SP 800-38G standard[1]. We use FF1 and are actively engaging with the world's leading cryptographers such as: https://cryptoonline.com/publications/.
Happy to share more in detail if there's interest. Please email me: mahmoud @ ${COMPANY_NAME}.com
[1] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.S...
Ask your cryptographer if the algorithm they're proposing you use is IND-CCA2 secure (especially if it meets the criteria for IND-CCA3).
Litmus test: If they don't know what that means, you shouldn't be trusting them for cryptography advice.
If it isn't IND-CCA2 secure, you shouldn't be using it. Full stop.
For the curious: https://tonyarcieri.com/all-the-crypto-code-youve-ever-writt...
The IND in IND-CCA2 means "INDistinguishable"; i.e. from randomly generated line noise. For symmetric cryptography, your ciphertext shouldn't have any structure to it. (Lattices and such are a different story. If structure is permissible for your security goals, you're probably doing asymmetric cryptography anyway.)
To be clear: format-preserving, order-preserving, order-revealing, and homomorphic encryption-- while exciting research areas-- fail to meet this requirement and should not be used for non-experimental purposes until the techniques have had time to mature. Even then, until they meet this requirement, use them only when the threat model doesn't realistically include adaptive chosen-ciphertext attacks. (Spoiler: a real-world threat model almost certainly will.)
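To see concretely what "distinguishable from line noise" means, here's a toy stdlib demo. The fixed digit substitution below is my stand-in for a deterministic format-preserving scheme (real FPE like FF1 is keyed and far more sophisticated, but shares the deterministic, format-revealing properties shown):

```python
import secrets

# A deterministic, format-preserving toy "cipher": each digit maps
# through a fixed permutation. Stands in for FPE to show what leaks.
SUBST = str.maketrans("0123456789", "2743895160")

def toy_fpe(ssn: str) -> str:
    return ssn.translate(SUBST)   # deterministic; digits stay digits

c1, c2 = toy_fpe("123-45-6789"), toy_fpe("123-45-6789")
assert c1 == c2                       # equal plaintexts are visibly equal
assert c1.replace("-", "").isdigit()  # format and length leak too

# A randomized scheme outputs something different every time, so an
# attacker can't even tell whether two records hold the same SSN.
r1, r2 = secrets.token_bytes(16), secrets.token_bytes(16)
assert r1 != r2
```

An attacker who sees `c1 == c2` learns that two database rows belong to the same person without ever decrypting anything -- exactly the kind of structure IND-style definitions rule out.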
> We use FF1 and are actively engaging with the world's leading cryptographers
I've seen this "we engage with the world's leading cryptographers" genre of claim before, albeit from a much more arrogant source: https://news.ycombinator.com/item?id=6916860
But the issue I see is that there still has to be a way for the user to hand, say, their SSN to a website so it can request the token associated with it, and that's a big risk point. They need to identify themselves in a way that identifies the correct VGS account to talk to, right?
I mean, I think really you'd be better off doing a private/public key thing, where you have some sort of device that gives a sub-key of your master identity key to the vendor?
[1] https://en.wikipedia.org/wiki/Stefan_Brands [2] http://financialcryptography.com/mt/archives/001011.html
We are actually evaluating more recent advances in zero-knowledge systems. Stay tuned for more news on that front soon :)
Reference? If you want people to take you seriously when you claim that someone's solution to a problem has been overlooked you have to provide a link to the (alleged) solution, not just to the author's biography.
Although I hadn't heard of Stefan Brands before, I wasn't surprised he'd worked with David Chaum, who is the person I first think of regarding zero-knowledge proofs: http://sceweb.sce.uhcl.edu/yang/teaching/csci5234WebSecurity...
Then I get depressed because we had anonymous digital cash 20 YEARS AGO and fucked it up.
Software patents are bad, crypto patents are worse. It's (almost) literally patenting math.
[1] https://smile.amazon.com/Rethinking-Public-Infrastructures-D...
"The authentication mechanism is such that the receiver not only authenticates the message, but also demonstrates a property of the attributes encoded into its certified key pair. The receiver has full control over which property is demonstrated: it can be any satisfiable proposition from proposition logic, where the atomic propositions are relations that are linear in the encoded attributes. Any other information about the attributes remains unconditionally hidden." (Emphasis mine.)
I will admit that the math was not something I studied in depth (and I definitely didn't check the proofs!), since crypto is at best a hobby for me and not my main job.
Hi robert204,
Your question has two specific parts that I want to address:
1) Single Point of Failure
2) Larger target for malicious actors
Regarding point #1:
- We have invested a significant amount of resources in making our product as stateless as possible, and our core product can live on different cloud providers' edge networks.
- We conduct failover tests every 2 weeks to ensure we have the capability to respond to any downtime. Our SOC 2 Type 2 report is available and discusses the availability and disaster-recovery items in detail.
- As a side note: we also solve the "vendor is down" problem -- for example, we have customers who seamlessly switch between providers (say, for credit score checks) when one of them is down, without the liability of storing that data themselves.
Regarding point #2:
- This is our core focus. We take on the liability. The idea is that since this is all we do, we can do it better than a lot of folks out there.
- We also broker access to different Fortune 500 institutions that visit our offices and constantly pen-test us, audit us, etc.
I think it's important to acknowledge that for developers, security is always important but never prioritized until it's urgent. We are trying to change that @ VGS.
Please, email me directly and I'm happy to have a further chat: mahmoud @ ${COMPANY_NAME}.com
I don't want any of my sensitive data stored on "some cloud provider".
Also, your security strategy apparently boils down to "we'll be REAL CAREFUL, pinky swear!"
That strategy does not work, and has never worked before. The whole reason why you think your product is needed is because your prospective customers do it just like that.
I'm stunned you found investors with this proposition.
idea time: cryptographically store this data on physical cards that can fit into a wallet, be managed by the user, and be 'revoked' if a card is lost. Obviously things like backup and storage will still need to be handled, but they don't necessarily need to be reachable via an API or on the internet at all after the card has initially been created.
I spent two minutes on this idea, be nice :-)
- $PROVIDER wants the following data: $LIST_OF_OPTIONAL_AND_REQUIRED_ITEMS
- You select which you can provide
- If the data to be provided includes "billing identifier" or "credit file identifier" (and especially if the identifier is, say, SSN), then first your app obtains a new identifier from the reporting agency or your insurance carrier, and *that* number is given to $PROVIDER
Gives more control back to the customer/patient and eliminates (yet another) treasure trove of data for attackers to go after.

Not that it matters, but I recently switched to LineageOS and my opinion has not changed.
I think the innovation here is that instead of being part of carrying out the rest of their business, tokenizing and keeping the real info safe is the whole product here. That seems smart to me.
The dumb part, of course, is that we have these bearer tokens (SSN and CC numbers) in the first place, without constantly rotating them. There's some amount of rotation with CC numbers when the company detects fraud and sends you a new card. But for SSN, it's unconscionable that they're both the username and password.
> When it’s time to bill your insurance company, their “reimbursement” code goes through VGS which “reveals” the token and sends the real version to the insurance company.
Forgive me if I am wrong, but that means all 3rd-party integrations that require the sensitive values must be implemented by VGS, correct?
But no worse than "Best Buy" I suppose.
Why should an entire country trust them? I'm not saying they wouldn't be an improvement over Equifax, but it still sounds far from ideal. I think a hardware token would be preferable.