Transition to simpler key management #80
base: master
Conversation
Signed-off-by: Paulo Gomes <[email protected]>
Move away from key hierarchy keys, into a simpler data slicing instead. The use of key hierarchy, saving the DEKs alongside the data, is very effective when there is a need for long-term storage. As Lasso's KEK is not persisted externally, once the instance restarts the encrypted data can no longer be used as the DEKs can't be decrypted. Due to that, a more performant approach would be do away with the KEK, and keep all the DEKs in memory. The encrypted data now has a keyID, as opposed to having an encrypted DEK and its nonce. As a result, the encryption process had a throughput increase of ~260MB/s and a 22% decrease in allocations. Conversely, the decryption process had throughput increase of ~400MB/s and a 50% decrease in allocations. Storage-wise, this approach consumes 28 bytes less per saved row which also translates of less data loaded into memory. goos: linux goarch: amd64 pkg: github.com/rancher/lasso/pkg/cache/sql/encryption cpu: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz │ /tmp/before │ /tmp/after │ │ sec/op │ sec/op vs base │ Encryption/encrypt-1024-16 1.847µ ± 7% 1.316µ ± 6% -28.73% (p=0.000 n=10) Encryption/encrypt-4096-16 3.059µ ± 4% 2.480µ ± 6% -18.93% (p=0.000 n=10) Encryption/encrypt-8192-16 4.395µ ± 5% 3.705µ ± 6% -15.71% (p=0.000 n=10) Decryption/decrypt-1024-16 1390.5n ± 4% 883.9n ± 3% -36.43% (p=0.000 n=10) Decryption/decrypt-4096-16 2.660µ ± 5% 2.050µ ± 3% -22.95% (p=0.000 n=10) Decryption/decrypt-8192-16 3.986µ ± 4% 3.444µ ± 4% -13.61% (p=0.000 n=10) geomean 2.675µ 2.056µ -23.14% │ /tmp/before │ /tmp/after │ │ B/s │ B/s vs base │ Encryption/encrypt-1024-16 529.0Mi ± 8% 742.2Mi ± 6% +40.32% (p=0.000 n=10) Encryption/encrypt-4096-16 1.247Gi ± 4% 1.539Gi ± 6% +23.39% (p=0.000 n=10) Encryption/encrypt-8192-16 1.736Gi ± 5% 2.059Gi ± 5% +18.61% (p=0.000 n=10) Decryption/decrypt-1024-16 702.1Mi ± 4% 1104.8Mi ± 3% +57.35% (p=0.000 n=10) Decryption/decrypt-4096-16 1.434Gi ± 5% 1.861Gi ± 3% +29.77% (p=0.000 n=10) Decryption/decrypt-8192-16 
1.914Gi ± 4% 2.215Gi ± 3% +15.74% (p=0.000 n=10) geomean 1.132Gi 1.473Gi +30.12% │ /tmp/before │ /tmp/after │ │ B/op │ B/op vs base │ Encryption/encrypt-1024-16 2.078Ki ± 0% 2.016Ki ± 0% -3.01% (p=0.000 n=10) Encryption/encrypt-4096-16 5.703Ki ± 0% 5.641Ki ± 0% -1.10% (p=0.000 n=10) Encryption/encrypt-8192-16 10.20Ki ± 0% 10.14Ki ± 0% -0.61% (p=0.000 n=10) Decryption/decrypt-1024-16 2.781Ki ± 0% 1.875Ki ± 0% -32.58% (p=0.000 n=10) Decryption/decrypt-4096-16 5.781Ki ± 0% 4.875Ki ± 0% -15.68% (p=0.000 n=10) Decryption/decrypt-8192-16 9.781Ki ± 0% 8.875Ki ± 0% -9.27% (p=0.000 n=10) geomean 5.166Ki 4.590Ki -11.16% │ /tmp/before │ /tmp/after │ │ allocs/op │ allocs/op vs base │ Encryption/encrypt-1024-16 9.000 ± 0% 7.000 ± 0% -22.22% (p=0.000 n=10) Encryption/encrypt-4096-16 9.000 ± 0% 7.000 ± 0% -22.22% (p=0.000 n=10) Encryption/encrypt-8192-16 9.000 ± 0% 7.000 ± 0% -22.22% (p=0.000 n=10) Decryption/decrypt-1024-16 12.000 ± 0% 6.000 ± 0% -50.00% (p=0.000 n=10) Decryption/decrypt-4096-16 12.000 ± 0% 6.000 ± 0% -50.00% (p=0.000 n=10) Decryption/decrypt-8192-16 12.000 ± 0% 6.000 ± 0% -50.00% (p=0.000 n=10) geomean 10.39 6.481 -37.64% Signed-off-by: Paulo Gomes <[email protected]>
Great work @pjbgf , thank you very much for it!
Simpler code, better perf, better test coverage, better security... is there anything more I can really ask for? 😉 really appreciated!
There are a few nitpicks/cosmetic suggestions and only one nontrivial question on pkg/cache/sql/store/store.go - although that's definitely not a deal breaker in any case.
Once that is answered this is ready for my approval and will go to the Frameworks team for another round of review.
Signed-off-by: Paulo Gomes <[email protected]>
I believe I've worked through all the feedback in the new commit. The PR description is now updated with the uint32 keyID and the latest benchstat. PTAL
Signed-off-by: Silvio Moioli <[email protected]>
Signed-off-by: Silvio Moioli <[email protected]>
Signed-off-by: Silvio Moioli <[email protected]>
Signed-off-by: Silvio Moioli <[email protected]>
All good, green light from my side!
(I took the liberty to directly add a few mostly cosmetic commits on top)
Will merge as soon as we get a 👍 from the frameworks team
Thank you so much for the collaboration
Signed-off-by: Silvio Moioli <[email protected]>
Signed-off-by: Silvio Moioli <[email protected]>
Signed-off-by: Silvio Moioli <[email protected]>
Move away from key hierarchy keys to simpler data slicing instead.
The use of a key hierarchy, saving the DEKs alongside the data, is very
effective when there is a need for long-term storage. As Lasso's KEK is
not persisted externally, once the instance restarts the encrypted data
can no longer be used, because the DEKs can't be decrypted.
Because of that, a more performant approach is to do away with the KEK and
keep all the DEKs in memory. The stored data now has a uint32 keyID, as opposed
to having an encrypted DEK and its nonce, saving 40 bytes per row.
As a result, the encryption process had a throughput increase of ~260MB/s
and a 22% decrease in allocations. Similarly, the decryption process had
a throughput increase of ~400MB/s and a 50% decrease in allocations.