How SGX Fails in Practice (sgaxe.com)
158 points by g0xA52A2A on June 9, 2020 | 96 comments


This is one of two lovely side-channel vulnerabilities in Intel processors released today. Apparently they also leak RDRAND and RDSEED data across cores and took 18 months to fix it: https://www.vusec.net/projects/crosstalk/


> The mitigation locks the entire memory bus before updating the staging buffer and only unlocks it after clearing its content.

Entire memory bus as in entire access to any kind of shared memory (shared CPU caches, RAM, ...)? If yes, wow, that's a pretty desperate mitigation. Doesn't that starve large parts of the whole CPU whenever an affected, but mitigated instruction is executed, as all other cores can't interact with memory anymore for the duration of this execution? Doesn't that also open the ability to "DoS" CPUs by constantly executing such instructions?


> Entire memory bus as in entire access to any kind of shared memory (shared CPU caches, RAM, ...)?

I guess this is the only solution for Intel (apart from a hardware fix), as one core doesn't have control over the loads of another core.


Exactly what I wonder too. Couldn't the fix itself be used as an attack vector? At the very least you could infer the disruptions it causes to trigger timeout events or add execution-time variance.

And wouldn't you be able to detect RdRand instruction execution during, for example, key generation, pinpointing when processes perform sensitive operations?

And would it be possible to use RdRand execution as a time-based exfiltration channel? Run a timing loop in one process on one core. In a second process, issue RdRand at varying frequencies to encode information (Morse code or similar). The first process then uses its timing loop to recover the information.
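Roughly something like this (an untested sketch of my own, not from the paper; it assumes the mitigation's bus lock is long enough to measurably slow cross-core memory traffic, and the function names, thresholds and build line are made up):

  /* Hypothetical covert channel via the RDRAND bus-lock mitigation.
   * Sender encodes a 1 bit by hammering RDRAND (locking the bus) and a
   * 0 bit by idling; the receiver on another core times uncached loads
   * and decodes bits from the average latency.
   * Build: gcc -O2 -mrdrnd covert.c, pin sender/receiver to different cores. */
  #include <immintrin.h>
  #include <x86intrin.h>
  #include <stdint.h>

  static volatile uint64_t probe[16 * 64];

  static void send_bit(int bit, uint64_t cycles_per_bit) {
      uint64_t start = __rdtsc();
      unsigned long long r;
      while (__rdtsc() - start < cycles_per_bit) {
          if (bit)
              _rdrand64_step(&r);   /* contend: mitigation locks the bus */
          else
              _mm_pause();          /* stay quiet */
      }
  }

  static uint64_t recv_latency(uint64_t cycles_per_bit) {
      uint64_t start = __rdtsc(), total = 0, n = 0;
      while (__rdtsc() - start < cycles_per_bit) {
          volatile uint64_t *p = &probe[(n % 16) * 64];
          _mm_clflush((const void *)p);   /* force the load to hit memory */
          _mm_mfence();
          uint64_t t0 = __rdtsc();
          (void)*p;
          total += __rdtsc() - t0;
          n++;
      }
      return n ? total / n : 0;   /* higher average latency => bit was 1 */
  }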


You can already do that with split locks, which also lock the memory bus.


Yeah, split locks have a huge performance impact, especially with high core counts: https://rigtorp.se/split-locks/

Fortunately the Linux kernel can kill offenders with SIGBUS: https://www.phoronix.com/scan.php?page=news_item&px=Linux-Sp...
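For reference, the classic way to trigger one is a LOCK-prefixed atomic on an operand straddling a cache-line boundary; a minimal sketch (x86-specific, and technically undefined behaviour in standard C because of the misaligned pointer):

  /* Split-lock sketch: the atomic add below operates on 4 bytes that
   * straddle a 64-byte cache-line boundary, so the CPU cannot use a
   * normal cache-line lock and asserts a bus lock instead, stalling
   * memory traffic on every core. With the Linux split-lock detector
   * enabled, this warns or delivers SIGBUS instead. */
  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
      _Alignas(64) static unsigned char buf[128];
      /* bytes 62..65: crosses the boundary at offset 64 (misaligned
         access, fine on x86 in practice but not portable C) */
      volatile uint32_t *split = (volatile uint32_t *)(buf + 62);

      for (long i = 0; i < 10 * 1000 * 1000; i++)
          __atomic_fetch_add(split, 1, __ATOMIC_SEQ_CST);  /* LOCK XADD */

      printf("%u\n", *split);
      return 0;
  }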


Hmm, now I wonder if this could turn into a F00F-like bug? What would LOCK REP RDRAND or just REP CPUID do...


You can still acknowledge interrupts in between, so hardware watchdogs will prevent this.


At this point SGX is just so broken that it seems like its only purpose is to provide PhD students something to write a paper on :)

I'm hesitantly excited for AMD's SEV enclave to roll out. Anyone know if it's shaping up to be any better?


SEV has been subjected to its own share of attacks (and design/implementation fails), but note that it has a different threat model.

* https://arxiv.org/pdf/1712.05090.pdf 2017

* https://arxiv.org/pdf/1612.01119.pdf 2017

* https://arxiv.org/pdf/1805.09604.pdf 2018

* https://ipads.se.sjtu.edu.cn/_media/publications/fidelius_hp... 2018

* https://seclists.org/fulldisclosure/2019/Jun/46 2019

* https://www3.cs.stonybrook.edu/~mikepo/papers/severest.asiac... 2019

* https://www.usenix.org/system/files/sec19-li-mengyuan_0.pdf 2019

* https://arxiv.org/pdf/1908.11680.pdf 2019

* https://arxiv.org/pdf/2004.11071.pdf 2020

Any enclave technology will be reliant on the underlying security of the processor itself. Someone was going to have to go first. Intel happened to take greater risks in the name of performance, and all of their technologies (including their first-to-market enclave technology) are suffering reputational hits as a result.

I'll also just mention that CrossTalk is the more interesting vulnerability affecting SGX that was disclosed today.


Oh huh, I see. Thanks for the papers. "Someone was going to have to go first. Intel happened to take greater risks in the name of performance, and all of their technologies (including their first-to-market enclave technology) are suffering reputational hits as a result." Very true, and a point worth making. Just curious, do you work closely with SGX/SEV? You were quick with the links!


SEV is exciting because it has a much better cost-to-benefits ratio.

It provides useful defense in depth without requiring any changes to the application stack - you can run regular VMs with syscalls, plenty of memory and high-bandwidth IO.

SGX, on the other hand, is extremely limited and notoriously hard to target. It's even harder these days - you need specialized compilers and coding techniques to mitigate a number of attacks that can't be fixed by a microcode update.

I reckon it's almost impossible to do serious SGX work these days without being under NDA with Intel such that you can work on mitigations during embargoes for the never-ending stream of vulnerabilities.


> SEV is exciting because it has a much better cost-to-benefits ratio.

I think that's not actually true.

The problem is, if you believe SGX "needs" these sorts of defences/mitigations, then so does SEV: SEV VMs are not magically immune to side channel attacks, and in fact they suffer from far more than just micro-architectural side channels, since they also leak all disk and memory access patterns, network access patterns and so on. These sorts of side channels aren't the responsibility of any CPU to fix, but they are remarkably powerful.
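To make that concrete, a toy illustration (mine, not from any paper): even with the VM's memory and disk fully encrypted, the host still sees which blocks get touched, and that alone can spill the secret.

  #include <stdio.h>

  /* Which record is read depends directly on the secret query, so whoever
   * serves the I/O (the untrusted host) learns the secret from the access
   * offset alone, even if every byte of the record is encrypted. */
  long lookup(FILE *db, long secret_record_id, long record_size) {
      fseek(db, secret_record_id * record_size, SEEK_SET);  /* offset leaks */
      long value = 0;
      fread(&value, sizeof value, 1, db);
      return value;
  }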

Sometimes it feels like SGX gets placed under a rather nasty double standard. Enclaves are "hard" because you "must" mitigate side channel attacks. SEV VMs are "easy" because nobody even tries at all. Indeed they cannot try - normal software isn't built to eliminate app level side channels. That's why enclaves are special programs to begin with.

If you are happy to use existing non-hardened software though and just take the defence in depth argument, well, no problem - you can use SGX too. There are things like SCONE that let you run existing software inside enclaves. Unlike SEV, SGX is actually fixable when security bugs are found so it's meaningful to talk about using it. SEV has been theatre so far. It's not renewable, there's no equivalent of TCB recovery so nobody bothers trying to attack it because it's already been broken in unfixable ways before.


You are right about the double standard but using enclaves means restructuring your application. Even SCONE requires porting. SEV gives you the warm and fuzzy feeling that you are doing something to improve security without having to do a lot of work, assuming that your favorite OS version has been ported to run in a "secure" VM.


It's apples and oranges.

SEV is useful for defense in depth and absolutely not comparable to a secure enclave like SGX or TrustZone.

However, it results in significant security gains with almost no extra effort.

Secure enclaves like SGX are designed with much stronger security guarantees and are therefore hard and expensive to use, and they're regularly broken due to design issues that may be impossible to fix, making it a bad investment.


The problem is, it's not really clear it results in significant security gains.

SEV (a hypothetical version that wasn't broken) would give you encrypted memory and some basic remote attestation protocols.

The point of this is to stop a cloud vendor from peeking into your VM.

But the problem is, none of the software inside your VM is designed to resist attacks by a malicious hypervisor. This isn't just side channel attacks but also so-called Iago attacks, where a piece of software that used to be trusted by virtue of being more privileged (the kernel or hypervisor) breaks into the software it hosts by manipulating API responses such as syscall return values.

For SGX it was shown that running a whole standard Linux process inside an enclave and relaying system calls outside (i.e. a similar arrangement to what SEV does) could lead to the process being hacked the moment it did something trivial like call malloc(). The reason was that the software didn't expect the kernel to manipulate it via syscall return values, because normally there would be no point.
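A toy simulation of the idea (not the actual attack from that work, just the shape of it): an "enclave" allocator that blindly trusts the pointer handed back by the untrusted host ends up scribbling over its own secrets.

  #include <stdio.h>
  #include <string.h>

  static char secret_key[32] = "supposedly-protected-key-bytes";

  /* Stands in for the relayed call to the untrusted kernel: instead of
   * fresh memory it returns a pointer aliasing enclave-private data. */
  static void *host_mmap(size_t len) {
      (void)len;
      return secret_key;               /* malicious return value */
  }

  /* Naive in-"enclave" allocator that trusts the host's answer. */
  static void *enclave_malloc(size_t len) {
      void *p = host_mmap(len);        /* no sanity check on the pointer */
      memset(p, 0, len);               /* bookkeeping clobbers the secret */
      return p;
  }

  int main(void) {
      printf("before: %s\n", secret_key);
      char *buf = enclave_malloc(32);
      strcpy(buf, "attacker-chosen data");   /* lands on top of the key */
      printf("after:  %s\n", secret_key);
      return 0;
  }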

In SEV we have the same issue. Hypervisors were previously more privileged than kernels. Therefore kernels and apps aren't designed on the assumption of a malicious hypervisor. It's highly likely that not only side channel attacks but also Iago attacks apply there too.

Now you could say, well, one step at a time, maybe it doesn't matter, better than nothing etc. Sure. All true. At the very least it complicates the attack process and is more likely to generate an audit trail inside the cloud vendor.

But actually Intel had a somewhat comparable solution for a long time called TXT. It lets you start a trusted/measured hypervisor that can then supervise VMs on the box. This is in some ways stronger as you can actually check the hypervisor won't attack the VMs. But it's hardly used because cloud vendors use proprietary hypervisors, and actually the threat model people care about is "malicious cloud vendor trying to break into the VM", not "incremental security improvement of unknown utility".

I suspect Intel will implement encrypted memory for VMs at some point anyway, because the viewpoint you raise here is quite common ("I'll encrypt the RAM for free and then I'm done"), although of course it needs tooling integration as well; it's not actually free.

But I guess if this takes off then AMD will start to see a lot of research papers where people break into the encrypted memory space in various clever ways, and of course, you're also vulnerable to any exploits in the software stack on the VM itself. That was the main reason the security community ended up with the SGX type model. When your trusted computing base is an entire virtual machine it doesn't give you much because VMs have so much software in them, they get hacked all the time. The idea of enclaves is you design your app so the bulk of the software stack is untrusted, and only small parts get to directly touch sensitive data.


I fully agree with all of what you said. There's a big difference between "hey can you dump this VM for me please" and "hey please implement this paper to dump this customer's VM", and it provides cloud providers with plausible deniability when law enforcement comes knocking. It's certainly useless once your threat model includes grad students with lots of time on their hands.

The "trusted hypervisor" approach is actually what my company is working on, with SEV just as a convenient attestation mechanism and nice defense in depth.

Yes, with SEV the guest OS has to assume that the emulated hardware is untrusted, and I'm quite sure that no OS was built with this threat model in mind. It's not as bad as proxying syscalls from SGX to the host kernel because the attack surface is a lot smaller, but I bet you could find a hundred ways to compromise a stock Linux kernel running in SEV.

I'm more optimistic about unikernels written in say, Rust, which would still be a much friendlier API than SGX.


Ah, my company is working with SGX. Perhaps that explains our difference of view ;)

The hypervisor ABI is quite large, perhaps not as large as a kernel's but it doesn't really make sense to expose the same API to secure code anyway. For instance an encrypted VM that then uploads its data somewhere else via raw TCP sockets doesn't make sense conceptually, even though the API allows it. You have to use SSL. Likewise an encrypted VM that downloads software updates from the repositories of a cloud provider, also doesn't make much sense, even though nothing in the tech stops it.

The nice thing about an enclave is you can understand completely the data flows in and out by examining the interface. That does mean compiling existing software for it will die with unimplemented functions. But those errors are probably telling you something meaningful - i.e. if the code attempts to call socket() or open() then you can just relay them outside the enclave, but it makes more sense to think about what you're really trying to do.

It's a more purist approach with worse compatibility, I agree. It's really focused on finding the minimal TCB of your program and excluding all else, like how Chrome moves the rendering engine out of its TCB. I suspect many apps can be designed in such a way that almost all of the code is untrusted, but it's a bit of a frontier and takes quite a bit of skill.


SEV is fundamentally less secure than SGX because it only provides memory encryption but no integrity protection. Enclaves are a challenging problem given the much more aggressive threat model, but SGX is the better security model of the two IMO.


Yes - in a recent paper by Wilke et al[0], they nicely demonstrate how the lack of integrity checking can be exploited.

SEV is a very new technology and its current (and previous) iterations have known weaknesses. The next generation of SEV will likely have SEV-SNP[1], which will prevent the host from writing guest memory/messing with the guest's page mappings.

Will probably take a few more iterations to stabilize. At that point, it should provide decent security guarantees.

Current-gen SGX has much stronger guarantees (conceptually, at least) with full memory integrity checking and less attack surface, but it suffers from CPU vulnerabilities, most of which AMD didn't have, and the integrity checks and architecture come at a large performance and development cost.

SEV has different tradeoffs that make it much more useful for real-world use cases, while still providing strong security guarantees.

[0]: https://arxiv.org/pdf/2004.11071.pdf

[1]: https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthen...


Oh nice, hadn't heard about SNP yet - looks interesting.


So this breaks SGX completely.

Signal's new PIN thing relies (almost) entirely on SGX being secure to make their encrypted profile and contact backups secure.

This attack reduces the security of the Signal encrypted backups to just the PIN.

Edit: Indeed the authors point this out explicitly in the SGAxe paper.


> Signal's new PIN thing relies (almost) entirely on SGX being secure to make their encrypted profile and contact backups secure.

This just seems irresponsible. How could they excuse this? It seems like anyone who has even peripherally been working with TEEs recently is _well_ aware that SGX is broken beyond repair. It's not just a matter of patching bugs, this whole model seems bunk.


Seems unlikely they were mentioned in the paper in this depth without being made even casually aware that this paper would be published.

Which means they just went ahead with it anyway.


Even people not working on TEEs are aware that SGX is broken. Actually, you don't even have to know what SGX is to bring up the concern that their entire scheme relies on SGX working correctly.


Still useful for plausible deniability if the government comes knocking.


Nah. If it can be fixed with a microcode update it's not the end of the world.


Definitely not the end of the world, but the utility of a trusted computing implementation depends a lot on the track record of actually being one, and SGX's isn't exactly great.


To be honest, you could say the same about SSL/TLS, browsers, and (probably) virtualization.

Secure enclaves are new. It takes time to develop new technologies and work out all the issues. I probably wouldn't trust my data to SGX today, but I'm not opposed to it as an idea. In 5-10 years it may be in a reasonable state.


If we had a heartbleed every month, then I'd be rethinking whether I want to trust anything to TLS, yes. Browsers are awful, and tolerable only because they provide so much value (and even then, I overwhelmingly only run a browser with extra mitigations in place). And no, virtualization probably shouldn't be trusted against hostile code.


As long as we rely on languages like C for our foundations, we will have them, just not always in TLS. That's why it's so important to push for better foundations, even when most of us aren't doing systems programming.


If you're relying on it for remote attestation (which is what Signal is doing), it's of no use for that, because you have no real verification that the microcode has been updated.


In SGX, you do have verification that the microcode has been updated. This is known as the CPUSVN value; it is part of the SGX report that gets issued and is mixed into the keys used to sign reports.
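Roughly like this on the verifier side (a sketch assuming the Intel SGX SDK headers; the field names are from sgx_report.h as I remember them, and the policy here is deliberately crude):

  #include <sgx_report.h>   /* sgx_report_body_t, sgx_cpu_svn_t */
  #include <string.h>
  #include <stdbool.h>

  /* Hypothetical baseline: the CPUSVN of a platform known to run the
   * patched microcode. In practice this comes from Intel's attestation
   * verification data, not a hard-coded constant. */
  static const sgx_cpu_svn_t known_good_cpusvn = { .svn = { 0 /* fill in */ } };

  static bool cpusvn_acceptable(const sgx_report_body_t *body) {
      /* CPUSVN is an opaque 16-byte value, so the simplest policy is an
       * allow-list of known-good values; exact match is the crudest form. */
      return memcmp(body->cpu_svn.svn, known_good_cpusvn.svn,
                    SGX_CPUSVN_SIZE) == 0;
  }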


Except this attack extracted the attestation keys, so I can attest to any version of the microcode that I want. (Even ones that don't exist).


This attack did not extract the root attestation secret or sealing secret, both stored in CPU fuses.

Instead, this attack extracted the sealing and attestation keys stored by the current version of Intel's Quoting Enclave under the current microcode revision (and presumably all previous microcode revisions).

Assuming Intel fixes the vulnerability in the next revision of microcode (let's call it Rev H), the Quoting Enclave will need to generate and/or store new attestation keys. Because the root secrets were not leaked AND the microcode revision has revved forward, these cannot be derived under previous microcode revisions. Thus, they are assumed not to be available to the attacker under the SGAxe/CacheOut vulnerability.

Intel generates or provides access to new public keys that correspond to the attestation keys. You use these to verify the Quoting Enclave's attestations.

Intel additionally asserts that these new public keys will only verify attestations created by the Quoting Enclave running under Rev H of the microcode.

You must determine whether to trust this assertion.


Nope. Microcode updates bump the CPU security version number, I believe, and incorporate that into the attestation; i.e. you can only accept attestations from machines with a certain SVN.


Remember, SGX is not a security feature. It is a DRM feature meant to take control away from the owner of the machine and give it to content owners. Happy to see every successful attack against this module, and hoping that Intel decides to just ditch it entirely.


Oh boy, here we go again.

No, SGX is a technology allowing for the trusted execution of code in an untrusted environment. DRM is one potential application of that technology.

See also the comment section on literally _every other HN submission with SGX in the title_, for example here: https://news.ycombinator.com/item?id=22495251


Yes, indeed here we go again, because I fundamentally disagree that "allowing the trusted execution of code" is any different than DRM. A rose by any other name.

I believe a user should have full control over the hardware they own. SGX and other enclaves are a direct affront to that, and I won't give up the fight. If you need a "trusted execution" of code, run it on a server you control.


Trusted computing isn't so much about control as it is about trust/attestation. You can run privileged malware on your computer without trusted computing. On the other hand, you can also sandbox trusted computing.

If you don't want to run somebody else's code on your computer – don't! (Chances are you're not a cloud provider and nobody is asking you to, anyway.) If you oppose DRM – don't consume content protected by it, whether the DRM is implemented in SGX, TrustZone or just in obfuscated software.

Don't get me wrong, I am not the biggest fan of DRM either (I personally see it as an evil, maybe a necessary one though). But shunning the entire field of trusted computing is a bit like opposing theoretical physics because it ultimately has brought us nuclear weapons.


I'm not entirely against the concept of trusted computing, but it's hard to believe that "trusted computing" as we know it today is so innocuous when it's included on consumer CPUs. Why not keep it as a premium feature for server-class CPUs instead?


Remote attestation can be incredibly useful and mutually beneficial on client devices as well if you're willing to consider applications beyond DRM.

For example, every smartcard is essentially a device in your physical possession, running somebody else's code. I'd argue that for example in the case of EMV payment cards, the benefit is mutual (less fraud).

Android supports a variant of this for generic secure transaction confirmation: https://android-developers.googleblog.com/2018/10/android-pr...


I'm comfortable with it on SmartCards or EMV chips, since those are tailored devices for a specific purpose and the trust model is understood by the participants. I'm not particularly upset that I can't root my credit card.

Mobile phones, it's disappointing, but total consumer control was kind of a lost cause from day 1.

What truly bothers me is the introduction of trusted computing into general-purpose consumer CPUs, where previously we had complete freedom. There's an old but still very good CCC talk that encapsulates my feelings about this [0].

[0]: https://www.youtube.com/watch?v=HUEvRyemKSg


Just to better understand your line of reasoning: Do you generally equate trusted computing with elevated privileges for the attestable code being executed?

I think this is where a lot of trust (on a meta-level) in the technology has been lost, and as far as I understand it, modern implementations are, on the contrary, tightly sandboxed.

In that sense, modern trusted computing can actually be more freedom preserving: If you are generally willing to tolerate DRM on your system, as long as it's not able to access data on your computer that are none of its business, you're more likely to see that happen on a modern, hardware-assisted DRM platform than on the rootkit based software shenanigans of the early 2000s.


Elevated privileges for a hardware blob is an even worse situation that unfortunately exists (Intel ME/AMD PSP).

For me the issue is the blob itself. "Trusted" for the manufacturer is "trust us" for the end-user. My expectation for a general-purpose CPU is that I can inspect the code that's running on it. Building a TPM into consumer CPUs defies that freedom.

So, it's about hackability (in the Hacker News sense). If you can't see what's happening inside of your computer, is it really yours?



I believe it has actually been deprecated for consumer CPUs.


I don't think SGX is available on any consumer CPUs.


My laptop has an Intel i7-8750H CPU.

https://ark.intel.com/content/www/us/en/ark/products/134906/...

> Intel® Software Guard Extensions (Intel® SGX)

> Yes with Intel® ME

I have it disabled in the firmware settings.


It has been part of the architecture since Skylake:

> Most Desktop, Mobile (6th generation Core and up) and low-end Server processors (Xeon E3 v5 and up) released since Fall 2015 support SGX.[0]

[0]: https://fortanix.com/intel-sgx/


> if you don't want to run somebody else's code on your computer – don't! Chances are you're not a cloud provider and nobody is asking you to, anyway

That's a pretty dishonest argument to say the least.


Nah. There is massive interest in SGX as a defense in depth technique for secure cloud computing (see Azure confidential computing for example). In fact I believe Intel have recently announced they are deprecating SGX on consumer devices in favour of server hardware. I'm as anti-DRM as the next man but this is just scaremongering.


If they're getting rid of SGX on consumer devices, I consider that an absolute win, and am happy to tone down my rhetoric. However, I couldn't find anything to that effect with a quick search. Can you share where you saw that announcement?


Consumer devices still have other DRM technologies like PAVP/HDCP. SGX is simply unnecessary for consumer-hostile DRM.

I don't think SGX + PAVP ever made it past the prototype stage, and the SDK is proprietary. My company actually wanted to use it for a trusted computing application (think "secure transaction approval") and Intel told us it was deprecated and unsupported.


There is no world where you just get to tamper with stuff. In the absence of trusted computing, the anti-tamper strategy is server-side policy logic and central databases.

The ability to control and trust devices that have left the issuer's physical possession enables decentralized and privacy-preserving architectures.

If this tech were perfect (it isn't) you could even do a peer to peer cryptocurrency without proof of work. Just make your peers prove they are running software which will refuse to double-spend.


> The ability to control and trust devices that have left the issuer's physical possession enables decentralized and privacy-preserving architectures.

One question: who, exactly, is this "issuer" you're referring to? Me? The people who sold me the computer? The people who manufactured the computer? The people who wrote the CPU microcode that I don't have access to? The government agencies they may or may not answer to?

> Just make your peers prove they are running software which will refuse to double-spend.

OK, make that two questions: how is this not equivalent to solving the Halting Problem?

> There is no world where you just get to tamper with stuff.

There is, if it's my stuff.


If it's your stuff, then you control whether remote attestations are required and what checksums / signatures are considered valid.


So, no answer to the actual questions, it sounds like. Got it.


Do "content owners" actually.. use SGX? I've only ever seen it used for secure boot/trusted environment stuff under the control of the machine's owner, not content DRM.


Maybe not widely yet. But here's[0] an Intel employee saying it's supported, which confirms that it's one of their visions for the module.

[0]: https://software.intel.com/en-us/forums/intel-software-guard...


PowerDVD requires SGX for 4K Blu-rays, IIUC: https://forum.cyberlink.com/forum/posts/list/77436.page


> If you need a "trusted execution" of code, run it on a server you control.

What CPU do you use for this purpose?


I understand the implication here to be "how can you trust your own hardware anyway," and the answer is "of course in theory you can't."

Even if you had SGX and wrote your algorithm or whatever to run in it, if your hardware's compromised, the horse has left the stable. Your proprietary algorithm and input that you developed on the compromised hardware are already available to the attacker. So having SGX locally is not going to help.

At the end of the day, SGX is not going to defend a proprietary computation if the input and algorithm are already compromised.


As I said in a sibling comment, think of it more as a way of reducing your TCB in cloud environments. Instead of trusting the whole cloud software stack, you now just trust your app and the hw, both of which you need to trust anyway.


I'm not trying to make that argument.

I'm literally asking you, what do you use for that purpose?


> trusted execution of code in an untrusted environment

This implies taking power and control away from the user. Trusted execution means execution the user can't tamper with or analyze as well as memory the user has no access to. The user's own machine has been sealed off.

It's not just DRM. From a computing freedom point of view, all trusted computing is bad. It prevents legitimate activities like reverse engineering proprietary software in order to create a free software replacement.

Trusted by whom? Invariably it's the company making the software. Users are never empowered by this. Corporations are already in a position of power over most people, we don't need even more technology that gives them even more control. Imagine if this becomes common enough to show up in browsers and the web. Suddenly inspect element and view source no longer work, scripts and ads cannot be blocked and extensions become a thing of the past. All in the name of corporate control.


You're presenting one situation: someone else's code running in the user's environment. But what about running the user's code in someone else's untrusted environment?


> someone else's untrusted environment

That "someone else" should have the same powers over their machines that users do. They should be allowed to see everything and tamper with anything. After all, the computer is theirs. Other people's computers should always be untrusted.


Do you feel the same way about cloud providers? Should they always be able to see everything and tamper with anything for the workloads their users run? If that’s the case, would you be in favor of making contracts to that effect (as already exist for all cloud providers I’m aware of) unenforceable?


I agree that trusted computing can be a good thing for cloud computing providers. They own the computer hardware but other people are paying them in order to use those resources. That's the entire point of cloud computing. The user and owner of the hardware are different people and so offering trusted computing as a feature makes sense.

Consumers are the opposite: they pay companies in order to use their software on their computers. The owner and user of the machine are the same person. Users should get to maintain complete control over the hardware, including the ability to reverse engineer the software and even make "unauthorized" copies of anything they want.

I don't trust that this technology will be restricted to cloud computing though. The copyright industry is worth billions of dollars. They'll make use of it on consumer machines if it's available. This is something that should be prevented at all costs, since the worst case scenario is a world where all commercial software runs in a secure execution environment where the user has no control.

Threat models where the user is an adversary are obviously user hostile. So of course whenever these technologies are compromised it is a victory for software freedom.


> Users should get to maintain complete control over the hardware, including the ability to reverse engineer the software

Secure enclaves and auditable software are not mutually exclusive. It's totally possible to run open source software in an enclave!

> the worst case scenario is a world where all commercial software runs in a secure execution environment where the user has no control.

This would be a very bad scenario indeed, but I consider it completely unrealistic. The trusted computing base of an entire graphical computing device including its operating system and all installed applications is absolutely impossible to audit. It didn't work for Microsoft (Palladium), and it's also not the direction that Apple is going, arguably one of the most restrictive client platforms today.

Practical trusted computing are tiny (and ideally heavily sandboxed!) trusted enclaves running the least amount of critical, audited code.


Okay sure, but that still leaves the problem of me sending you (someone untrusted) my code and asking for proof that it's running unmodified. Such a thing would let me trust your computer as my own if you allow it.


We have mathematical techniques for this--such as ZKSTARK--that allow for the construction of proofs while not also hiding the execution from the user. Showing you "I did this correctly" should not imply being unable to watch it happen and decide to put a stop to it if you don't like what it was doing.


But it does? That's a fairly narrow definition of "watching it happen" when both the real computations and the data are hidden from you. Have you gained anything except fewer practical uses?

I mean at the end of the day wouldn't it be nice to be able to use a cloud service and be sure that server is running exactly the published source and your secrets are hidden? That the trust-domain of your phone can extend into a datacenter?


I'm not familiar with ZKSTARK, but does it allow running computations in a trusted way without revealing some of the input parameters?

The latter constraint is something that I think is usually not addressed by zero knowledge proofs but rather only trusted computing (or the software equivalent, whitebox cryptography, assuming it exists).


This can be accomplished, completely generically, by running a Turing machine using fully-homomorphic encryption, and providing only the bootstrap/evaluation key, but not the encryption/decryption keys.

(Note: you have to use a Turing machine rather than some application-specific circuit, or someone can use a circuit-bending attack to tamper with the data.)

Unfortunately, this is infeasible even by FHE standards, and doesn't let the user get an output non-interactively; you might as well run it on your local machine.

But the sorts of things SGX does may one day be doable with only high-powered maths, if we get good functional encryption or program obfuscation.


You're of course free to hold that view.

But if you were a cloud service provider, I'd give my data and money to the competition (assuming I believe their claims regarding trusted computing).


I'm more concerned about this scenario: running my program A on my machine side by side with somebody else's program B that I don't fully trust. Your scenario looks like the authors of program B not trusting me. Sorry for them, but it's my machine.


In my opinion, the idea there is the same as it's always been: don't trust the client.


SGX is mostly about servers, not clients.


Fixed in January.

https://blogs.intel.com/technology/2020/06/ipas-security-adv...

> Finally, in January 2020 we released INTEL-SA-00329 addressing two side-channel issues. Specifically, CVE-2020-0549 addressing an issue called “L1D Eviction Sampling” which has been referred to by researchers as “CacheOut”. The CacheOut researchers recently informed us of a new paper referred to as SGAxe. It is important to note that SGAxe relies on CVE-2020-0549 which has been mitigated in microcode (confirmed by the researchers in their updated CacheOut paper) and distributed out to the ecosystem.

And from the CacheOut paper:

> Finally, Intel had indicated that microcode updates mitigating the root cause behind CacheOut will be published on June 9th, 2020. We recommend these be installed on all affected Intel platforms to properly mitigate CacheOut.

The above blog is indeed dated June 9; however, it indicates that the fix was made available, and verified, back in January.

I find fault, or at minimum a conflict, between the researchers' publication as of June 9 and Intel's statement on the same date. Pretty horrible. I especially dislike that the paper has a section called "Current Status" while the paper itself is undated, as is the norm for academic papers of this type. The "Current Status" isn't 100% clear but appears to be current as of November 2019, so yes, this would be in advance of both the supposed June 9 date and the apparent January actual date. Perhaps the paper linked on the sgaxe website isn't the "updated CacheOut paper" that Intel refers to.


Boy, I'd love to see Moxie's comment on this after years of shilling SGX. I was always disappointed by that and never understood why someone as otherwise-bright as him went for it.


After the whole Signal PIN and SGX saga I am convinced Moxie is an NSA/CIA asset. Maybe not from the beginning, but at some point they decided that this is an ideal point of influence and made an offer (not necessarily financial) he couldn't turn down.


There have been several attacks like this already (and likely more to come). The nice thing about the SGX design is that many of these issues can be fixed immediately with a microcode update. Now if an attack is announced that can't be fixed with a microcode update, that is another story :). In general, protecting against these kinds of attacks for any enclave is a hard problem, but it is an active area of research and there are already research proposals for more side channel resilient enclave designs (e.g. see the Keystone project from Berkeley). I expect some of these mitigations will be incorporated into future iterations of SGX, but it will take time.


You're right, and the idea of a secure enclave is not an inherently bad one. What is bad is treating it as an unassailable fortress, as in Signal's address book feature. There's a difference between using it to make critical parts of existing computations more secure and using it to do stuff one wouldn't otherwise do.


> What is bad is treating it as an unassailable fortress

It's also a pretty bad architecture/design failure, for a TEE to share computational resources with untrusted processes. I heard the decision to do so at Intel was due to marketing considerations outweighing the security concerns: Customers didn't want computational resources going fallow for the sake of security.

I'm skeptical about whether a truly trustworthy TEE is possible, but it seems even less likely, given SGX's architecture.


> You're right, and the idea of a secure enclave is not an inherently bad one.

Maybe it is when you're trusting private keys and proprietary technology controlled by a single entity that may be subject to commercial and political pressures that make their motives not aligned to yours.


I hope you're not typing that on an Intel processor :)


  $ grep vendor_id /proc/cpuinfo
  vendor_id       : AuthenticAMD


For Signal this issue actually cannot be fixed.

One failure is a permanent leak of the profile and contact information for everybody with a weak pin (which is probably virtually everybody).


Yes that's a fair point, of course anyone who is using this in production now will be impacted. I was more highlighting that it is a recoverable error in terms of being able to use existing SGX hardware in future.


Because there's literally zero alternative to it. If you're running Signal and want to be sure that the server is running the code published on GH then it's all you can do.


> Because there's literally zero alternative to it.

Iff you take Signal's overall design as a given. If we had a chat system that used GPG, the servers could be compromised without issue. Of course, usability would suffer, so maybe Signal is worth the tradeoff, but it's not as if Signal is the only way to do what Signal does.


You are correct that Signal is not the only way to do what Signal does. My medium-term plan is to go to Matrix once the P2P option matures.

> if we had a chat system that used GPG

FWIW DeltaChat uses email messages encrypted with AutoCrypt (a flavor of OpenPGP) and your contacts are simply emails you know. The first message will be unencrypted with OpenPGP setup info embedded in the body. If the other email client supports OpenPGP (or is DeltaChat) then further messages are returned encrypted.

Someone on HN said that this was the NSA's dream, since they could collect metadata on people who were sending encrypted messages via the various leaky parts of email delivery, SMTP relays etc.


Well, Intel can improve it with each new processor design so it will continue to get better.


> Our bot provides Attestation as a Service (AaaS)

slow clap



