> According to a lawsuit filed today in federal court in California, Waymo alleges that Anthony Levandowski, an engineer who left Google to found Otto and now serves as a top-ranking Uber executive, stole 14,000 highly confidential documents from Google before departing to start his own company. Among the documents were schematics of a circuit board and details about radar and LiDAR technology, Waymo says.
> The lawsuit claims that a team of ex-Google engineers used critical technology, including the LiDAR laser sensors, in the autonomous trucking startup they founded, and which Uber later acquired.
I was confused as to what stealing a patent actually meant :)
> Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.
> We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.
Yeah, that's incredibly bad if that's what the guy did. In general for EE designs, the schematics/layouts are not that hard to figure out if you're in contact with the company that makes the LIDAR device (the unique and difficult part): you basically just need to ask them, "yo, roughly how should I interface with this thing, what's your recommendation on a number of the components, and can you give me a reference schematic?" So copying the files suggests the guy was probably clueless as to how it actually worked and planned to just hand it off to somebody else who did understand it. So, definitely super bad...
Many many years ago, I was at a software company, and a competitor popped up making very similar software. They made a presentation of their software at a conference, and had a slide showing a screenshot. Our typos were clearly visible.
Fun times, fun times. We got a few people fired, but that was about it.
I used to work at a small company that made a SaaS product for a niche HR function. One of our largest customers, a bank whose ATMs you likely see every day if you're in the US, decided to drop our service in favor of a solution built in-house. We were disappointed, but had to let them go.
Then, about 2 years after they'd canceled, our support department got a strange email from a user who was having problems. At first, the support rep didn't understand the issue and asked the user for screenshots of the problem. Sure enough, the screenshots looked exactly like our old product, which had been decommissioned over a year prior. It even had our support email address and phone number on it. It turns out the version of the software they developed in-house started by scraping our site, renaming all the .html files to .aspx, and then making the dynamic parts data-driven.
When we called them on it, they basically threatened to use lawyers to put us out of business if we pursued the issue, so I guess our management decided to drop it. We did make them change the support email and phone number.
> When we called them on it, they basically threatened to use lawyers to put us out of business if we pursued the issue, so I guess our management decided to drop it. We did make them change the support email and phone number.
This is why we can't have nice things. Darned lawyers. Can't live with them, can't live without them
To be fair... saying you'd use lawyers to put someone out of business makes no sense unless the company had done something that the lawyers could go after. Hating on lawyers for being aimed at a company is like hating the court system for being available. The bigco's lawyers aren't doing this of their own motivation. I'm sure there was other stuff going on that made the OP's company back down.
Yet another believer in the just-world hypothesis...
As Cardinal Richelieu said[1] about 400 years ago: "Give me six lines written by the most honest man in the world, and I will find enough in them to hang him."
In this case, I believe the threat was twofold. They'd drag out proceedings so that we couldn't have afforded to see it through to a judgment. But they'd also threatened to cancel a deal with one of our other large customers if that customer didn't cancel with us. Knowing basically our customer list and how much those customers paid for the software, it wasn't much work to infer the company's somewhat tenuous financial situation. They guessed, correctly, that we'd drop it rather than risk the company's future on a lawsuit that, win or lose, would likely mean the loss of a significant customer.
There was nothing we'd done as a company that was actionable.
I wouldn't assume so: a larger company can afford to sink a lot more money into litigation, so they could effectively bleed the small company dry defending even a frivolous lawsuit.
Patent trolls are one example of this - they may deliberately select weaker defendants (instead of going after the bigco "infringers") to ensure they hold the upper hand in the financial power imbalance.
Based on the story from Flash Boys, that Goldman engineer was taking something like a couple of megabytes' worth of his own work that was related to various open-source projects. But he became the poster boy for stealing trade secrets and an example to be made of.
His mistake was that he used the company internet connection to upload said source files to something like his personal GitHub or Dropbox, so there were proxy logs of him stealing corporate secrets.
Most people would've gotten their personal-ish data off a corporate laptop with a USB key/hard drive, which probably wouldn't have triggered any alarms.
> Most people would've gotten their personal-ish data off a corporate laptop with a USB key/hard drive, which probably wouldn't have triggered any alarms.
Most top banks use Citrix technology (or similar virtual desktops) to stream workstations for employees accessing both locally and remotely, so the only way you can get data out is by using your phone to take photos.
Banks and similar financial companies are extremely paranoid about data security. Working at one, all USB drives were mounted read-only unless they were specifically provided by the company and encrypted using provided software. Anywhere you could upload files was blocked as well.
What's to stop someone from sending sensitive data as an attachment via email using an encrypted email service? I don't think a proxy can detect much besides data size, correct? I suppose a virtual desktop could log which files are uploaded.
Most email servers reject that (I tried to send something to a Deloitte consultant and gave up after password-protecting the ZIP file).
While most such companies actively MITM their HTTPS connections (I wonder whether Google does that - theoretically they are a CA, so it would be extra easy for them), you could probably get away with uploading said file to any service that is small enough to be unknown to corporate firewall providers (Symantec, etc.).
It also seems like the least interesting thing to steal. The hard part is probably the software and access to real world driving data. It is like holding up a bank and walking out with just the coins. The risk reward profile is terrible.
That was a result, though, of Google collaborating with Velodyne. I suspect Velodyne would have reduced pricing to everyone as a result. And Google's not the only entity working towards less expensive LiDAR. There's Delphi/Quanergy, etc.
Just seems like a huge risk to save maybe $10k per vehicle (assuming price drops for everyone) for some relatively short time period.
I imagine there are more software people capable of programming their way through the problem than electrical engineers that can work their way through it.
An electrical engineer is also going to see the Uber name and expect Valley idiocy, while worrying about being treated as a secondary concern for not being software-y enough.
That would be Sergey Aleynikov [0]. I'd recommend reading Flash Boys by Michael Lewis [1], who covers this story in detail. Sergey's case seems much more nuanced, and if I remember, Lewis takes his side on a number of issues.
...and then after that, do also read Flash Boys: Not So Fast by Peter Kovac. Not necessarily for Aleynikov's case in particular, but because it's healthy to see both sides of the greater narrative that Lewis unfolds.
Yes, good point. It would be super obvious that they stole it then.. "yeah, could I order the BA33525 part that you custom-built for somebody else.. yeah, I know it's undocumented, but I know you have it somehow...."
You do that sort of thing all the time though with electronics:
I'll open it up and find some undocumented chip variant and see if I can source it elsewhere; I'll also search and email around to see if I can find a datasheet anywhere.
If it's not printed on the chip you'll probably be able to get it out of JTAG.
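For what it's worth, the IDCODE route is straightforward: IEEE 1149.1 defines a 32-bit IDCODE register encoding the JEDEC manufacturer and part number, so an unmarked chip will often identify itself over JTAG. A minimal decoding sketch (0x4BA00477 is a widely documented ARM CoreSight IDCODE, used here just as an example):

```python
def decode_idcode(idcode):
    """Decode an IEEE 1149.1 JTAG IDCODE register value.

    Bit layout: [31:28] version, [27:12] part number,
    [11:1] JEDEC manufacturer ID, [0] marker bit (always 1).
    """
    if idcode & 1 != 1:
        raise ValueError("bit 0 must be 1 for a valid IDCODE")
    return {
        "version": (idcode >> 28) & 0xF,
        "part_number": (idcode >> 12) & 0xFFFF,
        "manufacturer": (idcode >> 1) & 0x7FF,  # JEDEC JEP106 code
    }

# Example: a widely documented ARM CoreSight debug-port IDCODE
info = decode_idcode(0x4BA00477)
print(hex(info["manufacturer"]))  # 0x23b -> ARM's JEDEC ID
```

From the manufacturer ID you can look up the vendor in JEDEC's JEP106 list and then go hunting for the part number in their catalog.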
From the medium post "downloaded additional highly confidential information pertaining to our custom-built LiDAR including supplier lists, manufacturing details and statements of work with highly technical information."
Levandowski does have some specialized technical knowledge of LiDAR: he started 510 Systems, a company focused on making LiDAR for robots, which Google quietly bought in 2011.
It sounds worse when you consider the potential for treble damages due to having knowledge of the infringement. Uber's negotiating position is doubly weakened by the fact that they are dealing with a reputation crisis at the same time as this claim has been brought forward.
If Google has the information they claim they do, they have persuasive arguments to anchor their damages calculations on the basis of Otto's acquisition value. Times three.
Eh, Google doesn't use patent litigation as an offensive strategy, but they _do_ pursue leaks. They treat leaks (that is, exfiltrating data while employed) very seriously, and pursue them offensively.
I think Alphabet probably views Waymo as their next big thing, and Waymo appears to have a sizable lead over everyone else in the self driving space which they want to protect. Remember, Alphabet is a major investor in Uber through Google Ventures, and their stake is likely worth billions. They wouldn't pursue this unless Waymo was of critical importance to them.
I get that a lot of people on HN are highly skeptical, but Google Ventures claims to be completely firewalled from Google corporate (this issue is only made more confusing by the fact that Google Capital is not firewalled, and by the fact that Google corporate itself also invests in companies).
I do believe them: many media firms have the same set up between the journalism and advertising departments.
GV may make decisions and operate completely independently from the rest of Alphabet, but that doesn't necessarily mean that Alphabet does not take into account GV's portfolio position in Uber, which is its largest investment. Waymo even hints as much in its Medium post: "Our parent company Alphabet has long worked with Uber in many areas, and we didn’t make this decision lightly."
In practice, when it comes to outside investments, I don't think the artificial separation between Google Ventures (GV), Google Capital (Capital G) and Google Corporate matters that much.
FWIW, David Drummond, longtime Chief Legal Officer took a board seat after GV's $250m investment in Uber in 2013 but eventually stepped down last August [0].
And this "over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems" is what, an engineering repo? (Edit: My money's on repo. It says 'searched for and installed specialized software', I'm translating that as 'installed git.') Or did he just take a copy of his mailbox?
Sounds like a rookie move to be downloading this stuff on his work laptop either way.
Stealing is a rookie move. But I don't think there was a choice about using his work laptop. You can't connect an arbitrary device to Google's network.
If I were to put my evil hat on, I'd arrange for an innocent and not-very-well acquainted coworker to have their laptop outside away from cameras while I used it to siphon data off onto a USB drive.
That's... an impossible task. All ways to identify a machine on a network depend on the machine itself — if properly configured, I can create a machine that cannot be distinguished from another machine.
Just extract the device certificate from one device and store it on another. Problem solved.
Extracting data from the TPM or equivalent stores in ARM devices is also not impossible, as the DRM-breaking community has shown by extracting keys from TPM-based DRM.
How about a whitelist of hardware fingerprints, similar to what Microsoft does with Windows installations. Also, Google is known to have custom manufactured hardware, specifically with security in mind. I don't think the idea that Google is able to secure their own network against foreign devices is really that far-fetched.
Google can only verify what hardware you're running by sending a packet via ethernet to your device. You control all software running on your device, and can send a spoofed result.
If you were crazy, you could even just emulate Google's hardware entirely and proxy all requests to that emulated hardware.
Nonetheless, while this guy certainly wouldn't be able to do it, many Google employees would.
But that's really not the case here. Making secure computing elements like TPMs or HSMs, or Apple's Secure Enclave or the plethora of other devices out there is a solved problem. You can decap it and try to get that data out of it but at the very least with the current state of the art you can make this very unlikely to succeed and extremely expensive to even attempt.
Handwaving away all of this as just a minor nuisance is silly. An attacker would have to find some unknown side channel or try to physically modify the TPM to get at the data, either approach means the attacker has significant resources, certainly well beyond the means of our hypothetical attacker. Heck, it's been speculated that even the NSA couldn't get data out of something like Apple's Secure Enclave without risking destroying it in an attempt.
Yes, svn handles large binaries just fine. Most last-generation version control systems do. It's probably the main thing keeping some companies from switching to git.
On the other hand, we know that public statements about lawsuits are always a pack of lies, as everyone tries to look as wronged as possible: remember YC's bluster about the Cruise co-founder lawsuit, right before they had to settle for dozens/hundreds of millions? Or remember the Oculus lawsuit, where they alleged almost the exact same thing (Carmack stealing thousands of sooper-sekrit VR documents/files) and wound up only winning on some non-compete stuff and getting a fixed award that was a fraction of what they thought they'd get?
Being ultimately required by circumstance to make a substantial payment to settle a lawsuit does not imply that the payer was lying, or even incorrect, in their defense.
I think losing a lawsuit is definitely evidence of claims being false, just as winning correlates with telling the truth about not doing wrong things. And if it is true that there is no correlation whatsoever between losing a lawsuit and making false claims, then the legal system has failed utterly (and incidentally, you have given me a great idea for a business model).
Given the adversarial nature of the system and the many past comments on HN to take claims by litigants with very large grains of salt when it came to the Oracle or Facebook or Oculus lawsuits, I'm surprised at the apparent credulity on display. Google will huff and puff just as much as anyone else when it comes to lawsuits.
It's evidence, but it's not dispositive, especially in cases like Cruise, where the technicality that determines the outcome might give the defense a 60/40 edge after a court case that will itself cost many millions of dollars.
What he said.
Particularly in patent and other IP cases, it's often cheaper to settle, even for a lot, than defend the lawsuit.
You could say our system has failed utterly
You wouldn't be wrong in that case.
I've never been in a situation anything like Cruise, but I've been a founder in an acquisition that went to legal over absent cofounders --- actually, come to think of it, twice, once with me the absent cofounder and once not --- and: I can't imagine any of these cases ever not settling.
Even with no tenable claim at all, the absent cofounder has a gun to your head: a proposed liquidation that's contingent on a pending civil suit simply isn't going to close. There are time limits on all of this stuff, and the acquirer is simply going to say "fuck it" and walk if they can't predict when the deal will close. A detail I think people who've never sold a company don't realize is that the legal costs for both sides of a deal that closes uneventfully can get close to 7 figures.
I guess this is valuable information for startup founders. It's also a reason you should run, not walk, from any early-stage business partner that wants to negotiate or complexify vesting. 1 year cliff, 4+ year vest, the way everyone does it, or go start a different company.
The Cruise lawsuit never went to trial. They settled. Neither side had their claims tried in a court of law - which, as you state, should approximate finding the truth.
> A representative for Guillory declined to discuss the settlement amount, but said the terms were "mutually agreeable." As part of the settlement, the parties have both agreed to dismiss their lawsuits.
It is an informed guess based on the fact that the settlement involved acknowledging, as the original YC paperwork said, he was a cofounder who owned 50% of a $1b exit. I haven't seen anyone in a hurry to leak claims that the settlement was for a trivial amount, nor did anyone at the time offer any good reasons for thinking that.
This happens more than you would think. The mistake is probably compounded by two designs being almost the same and potentially the same engineers involved.
I'll bet the supplier searched for the engineer's name in his inbox and responded to a big email thread about the project with many people involved. The Otto engineer's old colleagues got the email instead of him.
They'll most likely have a case, but it's also unlikely that the supplier would be big enough to cover any damages in the high 9-10 figures, they'd just throw in the towel and go bankrupt.
Also an argument that "we stole stuff but you're the one who leaked it so you should be punished" won't play well with a jury.
How do you steal a patent? If you steal (to be clear, illegally take) information and then use that to get a patent, is that stealing a patent? Is there case law on something like that?
Yeah, they definitely have some really specific accusations about what went on:
>>> To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.
The bit about connecting the external drive is interesting but I guess there's probably a ghost of that action somewhere on a drive (assuming you could more or less restore the drive before being wiped).
My guess is he still had a network connection up when he did the copy. These kinds of tools, like JAMF, osquery and so on, upload what you do on your computer on a fairly frequent basis. Past a certain company size all of them do something like this, and most of them do not spell out the amount of spying they do on their employees' work devices and office space.
Or he didn't do a secure wipe when he reformatted the drive and google inspected the computer when he returned it.
Or Google modifies their laptops and has separate chips logging this stuff, which would be fairly impressive!
Or he didn't actually do any of this alleged stuff and it's all innocent.
Unless it's changed since the last time I played with it, osquery doesn't upload anything on its own – it's just a local tool/agent that you can use to gather data. Carbon Black is a good example of something that does.
JAMF I believe gathers application usage data but nothing as in-depth as what's being discussed. It's also comically handicapped. It somehow manages to do a poor job of everything it tries to do, so "the world's largest online Mac administrator community" is forum post after forum post of half-understood franken-scripts. I used to think it was milquetoast, but after sitting through their sales reps crapping all over open source software (despite extensive use of OSS libs in their products) and seeing it fail to do the most basic stuff out of the box, my opinion is that it's overpriced crap.
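To make the osquery point concrete: the agent itself just answers SQL queries against tables like `usb_devices`; turning that into monitoring requires scheduling queries and pointing a logger plugin (e.g. the TLS logger) at a collection server. A sketch of a query pack that would snapshot attached USB devices every minute (table and column names are from osquery's published schema; the logging endpoint is out of scope here):

```json
{
  "queries": {
    "usb_device_snapshot": {
      "query": "SELECT vendor, model, serial FROM usb_devices;",
      "interval": 60,
      "snapshot": true,
      "description": "Record the set of attached USB devices every 60 seconds"
    }
  }
}
```

With something like this loaded, an unfamiliar external drive shows up in the logs the next time the snapshot runs.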
It could well be that they have comprehensive debugging enabled for all their employees on work supplied machines. I know they beta test through employees, so that would make sense, to some degree. If the debug log is remotely synced, as an added benefit they get to review actions after the fact, which might be what happened here.
Company-issued laptop probably tracks what software is installed anyway to mitigate malware. Services are probably designed not to accept connections from untrusted devices. Hardware 2FA makes it difficult to impersonate someone without their knowledge.
I can see the temptation—I've always missed having access to IP after leaving a company.
You can't restore wiped SSDs (TRIM and wear leveling see to that). I would assume Googlers are rolling SATA SSDs, if not M.2 PCIe ones.
What Google could be doing is remote logging on the laptops - logs uploaded to ze cloud every time you connect to the mothership. Plugging in a USB drive leaves a trace with the USB ID, volume information, etc. Windows also logs this and more: http://www.forensicswiki.org/wiki/USB_History_Viewing
Protip: to exfiltrate data with minimal trace, your best bet is taking out the drive and reading it in another computer (using a write blocker for best effect). This can still be traced if someone is logging SMART written/read data (I am, but I'm paranoid); not all HDD/SSD vendors provide this info. Second best is booting from a USB drive so the original OS never sees the plug/unplug event in the first place. I have no idea about the current state of UEFI/AMT logging, though.
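The SMART angle is easy to automate: many drives expose attribute 241 (Total_LBAs_Written), so periodically recording its raw value from `smartctl -A` and diffing lets you spot writes that happened while the OS wasn't looking. A sketch of the parsing half, assuming the common attribute-table layout (the sample text is illustrative, not from a real drive):

```python
def parse_total_lbas_written(smartctl_output):
    """Extract the raw value of SMART attribute 241 (Total_LBAs_Written)
    from `smartctl -A` text. Returns None if the drive doesn't report it."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric attribute ID
        if fields and fields[0] == "241":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

# Illustrative excerpt of `smartctl -A` output (not from a real drive)
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       123456789
"""
print(parse_total_lbas_written(sample))  # 123456789
```

A jump in the counter between two audits, with no matching host-side activity, is the tell.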
Really interesting! It's sobering to think about the ways that even in a system not set up for logging you can trace back through these actions.
I was asked to figure out what had happened on a system where some data had changed and 2 parties were blaming each other. After about an hour of digging around, I managed to piece together a picture of how Person X had gotten up on a Monday morning, discovered (on their mobile, home wifi) that they had made a mistake on Friday, then logged in on their desktop to fix it from home (first time they logged in at home), then went to work and blamed someone else.
What was remarkable was how many different sources there were to pick up bits and pieces from. In isolation there wasn't much to go on, but once you start the connecting the parts, it's really incriminating.
1. searched for and installed specialized software on his company laptop
2. logged into an area he should not have had access to (this is probably standard)
3. attached an external drive (possible, but standard?)
4. and they got all this info after he deleted the drive, which means they either went in and found remaining data on the drive or else they captured the info in real time.
I suppose if the drive is clean now, and they know he downloaded data, they can infer that he wiped it.
I suppose that if they know he accessed it, and there was software on his computer preventing him from doing so, they can infer that he downloaded something to overcome it.
But knowing that he connected an external drive implies active monitoring. That's the part I am most curious about.
1) makes sense if Levandowski did it over the company internet connection. There could be a record of his unusual requests (software download, software update).
> they can infer that he wiped it
For 4), Levandowski reformatted the hard drive before returning it, so there's no inference there.
This is all information a rudimentary desktop auditing tool can gather and store on a server. Most collect both hardware (which would include connected devices) and software inventory. Anyone SHOULD be auditing company PCs on a relatively regular basis. It wouldn't surprise me if Google was auditing much more frequently than the average and could catch something like this in the act.
What? Most version control systems have authentication systems, I'm pretty sure Google has good reason to keep audit logs of employee's access to schematics.
Even if they do, it is very unlikely they were built with provable nonrepudiation requirements. From personal experience designing the security of a PKI CA that passed gov security certifications, the audit subsystem is the most challenging part to do right. Could probably consult for the defense in tearing down the evidence :)
It would require a prohibitive amount of engineering resources to be done right, i.e. a chain of guarantees that from creation time to the moment they are inspected it can be proven that the logs cannot be tampered with by nonauthorized users. There are other requirements e.g. separation of roles that are expected on audit subsystems. I am positive it would not pass an adversary expert analysis.
Google's threat models include nation-state adversaries: I suspect the effort that seems "prohibitive" to you was seen as necessary after the infamous smiley on that PRISM slide. Security is existential: if users don't trust Google, they will fail.
Google also has an internal PKI CA - I think they meet and exceed that security baseline for rigor.
Yes, for purposes of issuing certificates I'm sure they are OK wrt auditing (I was just establishing my "credentials" with the CA comment).
The threat models covering malicious actions against Google obviously worked, since they have traces of the Otto guy's activities. What I am asserting is that these forensic logs they use as evidence can be attacked in court as not being sufficiently protected from tampering by an internal Google party interested in fabricating evidence.
I'm answering my own question, well sort of. The complaint doesn't state this, just "specialized software". This doesn't make the complaint lesser of course.
There are all sorts of other tidbits in the complaint that further strengthen the case in favor of Waymo. It's an interesting read, and I'm surprised it's quite readable even for a non-lawyer.
I'm guessing they saw it through remote access logs? If they can see he downloaded gigs of data he wasn't supposed to be using for his job, then that's very suspicious. I wonder how they found out that he copied it to an external hard drive, though. I'll want to watch this case as it develops.
It's kind of amazing that Google keeps reiterating how their incredible security protects your data, and that even employees are extremely carefully monitored. Yet their confidential trade secrets were taken by a dude who seemingly didn't have access to them as part of his job (since he had to go install the software) and a portable hard drive.
The post also says he talked with colleagues about replicating their technology at a competitor months before he actually stole the data. Generally if you work on something confidential and you start talking about taking it elsewhere, someone reports it or something.
Sidebar question if there are any armchair lawyers around: While I expect Uber to lose this lawsuit based on the type of evidence being claimed here, is it also possible for Uber to sue the supplier for leaking their confidential data back to Google? Because that seems like an incredible lapse of confidentiality in itself. Or will the notion that it wasn't legitimately their confidential data in the end, make Uber's own claim void?
There's a number of tradeoffs to be made between internal secrecy/silos and trust/openness in any company, and google tends to lean heavily towards trusting their employees when it comes to corp data + resources, designs, strategy etc.
User data is an entirely different matter, and is appropriately treated as such.
It doesn't need to be about "secrecy", it's about "security". Even when you ignore the employee trust issue, and assume everyone at your company would never betray you, "least privilege" is still not just best practice for security, it's common sense.
People should not be able to access data they don't need. If they need it, you can grant it. But the assumption should be that someone who doesn't use the design server shouldn't have access to the design server.
I don't want to know about or see information I don't need to have at work. It's not that I'm not trustworthy. It's not that I would abuse it. I just don't need the liability that it gets out through me.
Because your account credentials could get stolen. Your laptop could get stolen. Your laptop could get hacked into. Your laptop could get malware. Reducing the list of people who have access to a resource insulates against all of these things... automatically. And sure, all of those risks have other ways to mitigate them as well. But layers of security is key. And hey, it also stops employees from sneaking off with your data too.
Stealing a Googler's laptop or account credentials doesn't get you access. At best, you might get temporary access to my Gmail in the browser. I could give you my username and password and you could accomplish nothing.
Googlers all have to use 2-factor access via hardware tokens, so you'll need to steal their laptop and steal their token, and murder the employee before they report it to security.
Google laptops only permit the installation of software from Google, they are locked down and don't allow arbitrary installation of software, much like an iPhone. Those using Chromebooks are even safer.
Having to ask permission for every thing not only adds huge overhead, it inhibits global code gardening and technical debt reduction, and it inhibits learning, because you don't even know if you need to ask permission for something until you see it. You don't know what you don't know. If I want to learn about Google Translate because it might benefit my project, asking permission is bureaucracy, because I don't even know if what they have will help until I see it, and if I had to write a long justification for access rights, I probably either don't have a clue why I really need it, or might just not bother because of the hassle and seek out other open resources.
I feel sorry for you if you work for a company that operates internally like North Korea. One of the rewarding things about working at Google is the constant learning experience of exploring other people's stuff.
Having access to code and data you don't technically "need" is often how people learn new skills and advance their careers without leaving the company. It's an important part of the culture.
Again, "bad" is your opinion. People aren't machines: how much trust you place in them affects your relationship with them. Culture is about your team's interconnected relationships.
For example: Smaller companies often have the luxury of more trust / transparency. As companies such as Google grow, they have to resort to things like sending internal notice of big announcements very close to when the actual announcements come out (because leakers). If you have 10 people, you can usually trust the whole team.
Google tries very hard to allow as much transparency and openness as is allowable for its size. It fosters a culture of trust. Logging access rather than restricting it is one of those culture moves. It makes employees feel trusted, and puts responsibility on them to behave ethically. When they don't, the other end is that Google can still take legal action.
I'd like to add to what other people are saying here. I've worked at a company where people were given, basically, the minimum level of trust. All the horizontal projects were dead, teams competed with each other for the same clients, and we chose meeting rooms to avoid being overheard by other teams when we were discussing things.
This is entirely not related to what we're talking about here. It's not about trust, it's about good practice. If someone needs access to something, they request it, you grant it. Simple. But there should be common sense controls on data access, for everyone's benefit.
You're talking about a hostile work environment. I'm talking about a secure one.
Security is inherently hostile, I don't see any other way of putting things. We tolerate a certain amount of hostility in order to reap the benefits that security gives us, and we tolerate a certain amount of vulnerability in order to reap the benefits that laxness gives us.
> If someone needs access to something, they request it, you grant it. Simple.
The saying goes that for every complex problem, like security, there is a simple solution, like that one, and that solution is wrong. The cost of such a process is just too high for most companies. You have to request access, explain why you need access, someone has to review it, then grant it. That's how it worked at the company I was complaining about. Processes that should take minutes took hours, those that should take hours took days.
Security, like so many things, is subject to cost-benefit analysis. Better security systems use the full triad: prevention, detection, and response. From the article, it sounds like these are working as intended. The security team detected the exfiltration and responded with a lawsuit. Trying to rely only on prevention will just lead to paralysis.
I might also be jaded, because when I hear phrases like "good practice" my instinct is that it means "omitting the cost-benefit analysis."
The problem is, it doesn't seem like prevention or detection was working here at all. Even though they clearly collected enough data to reconstruct what happened afterwards, they failed to prevent the exfiltration (by not having even common-sense security protections in place) and failed to detect it when it occurred, only finding out because a supplier ratted out another client.
These are not high cost processes, these are basic, common sense practices we're talking about, that nobody really has any excuse to not have in place.
You are continuing to let your experience with a single, hostile work environment cloud your openness to something that... isn't even controversial. If you aren't locking down your files to only those who need them, you aren't equipped to be in business. If this is somehow uncommon among Silicon Valley, it explains why so many "they stole our trade secrets" lawsuits are going on right now.
> You are continuing to let your experience with a single, hostile work environment cloud your openness to something that... isn't even controversial.
I'm flattered that you want to talk about me, but really, I'm not the subject of the discussion here, and it's inappropriate to talk about what's going through my head or to try and psychoanalyze me.
> These are not high cost processes, these are basic, common sense practices we're talking about, that nobody really has any excuse to not have in place.
I've worked at a few different places on this spectrum in my career. Three of them have been fairly open, internally, like the way Google apparently operates. Maybe there are some high-value IP repositories you don't have access to, but you mostly have access to any source code you want to look at without getting access reviewed first. These companies were very open about the risks that this entailed, and openly discussed the fact that leaks were possible. The benefits became rather clear the longer I worked at each place. Whenever a system I worked on interacted with another system, I could follow what the other system was doing and even submit patches to other systems if necessary.
Saying that restrictive security is "common sense" or "not even controversial" is begging the question and argumentum ad populum, respectively. My argument here is that there are benefits to open access to most company IP, and that these benefits are important enough that the decision should be made on a company-by-company basis.
The access controls that would have prevented this particular case from happening would have to be rather draconian indeed. Anthony Levandowski's work was basically the genesis of autonomous vehicles at Google. Google purchased Levandowski's autonomous driving startup, 510 systems, in 2011. I don't know what kind of access controls you'd need to prevent a startup founder from accessing the technology built on top of his company's IP.
So, the head of my department doesn't have access to... most of what I do. That isn't to say she isn't in charge, or doesn't have every right to see that information. But it isn't her job, and she doesn't need that access, so she doesn't have it. If she needed it, she'd get it. There's no trust issue here, she is completely trustworthy. But her not having the access protects her just as much as it protects the rest of the organization. Because she doesn't have to worry about any risks to that access through her credentials.
In this case, Waymo has a design server, and Anthony clearly didn't work on its contents, because he didn't already have the software to access that server on his computer. Therefore, regardless of the source of the IP (which isn't his; he sold it), he really shouldn't ever have been given access to it. When the server was first spun up, access should've been given to... the people who would be using it, and nobody else.
Of course, if at some point he did need to access those files, he could ask, and be granted that access. And that doesn't need to be a difficult process (granting access to things takes an IT person a minute or two), but there is now an additional person that knows that user has been recently added to access. Even informally, this is a pretty good security measure, because in most cases, it should be fairly obvious why someone needs something. And if it's not obvious, and maybe that employee has been, as the article says, talking about leaving the company and replicating the technology elsewhere... suddenly that IT person maybe has a reason to mention the issue up the chain.
Where I work, I can make changes to what I'm working on that break things far away, from time to time. In well designed systems this doesn't happen too often, but you might be surprised sometimes how a seemingly insignificant change can make a system fail somewhere else because someone made an assumption that is no longer true.
So I can make a change, see that it breaks some test somewhere else (failed CI test), and peer into the diffs on the opposite side of the code base to decide what to do about that. It's proven quite useful, from time to time. I've seen weird problems like hitting pessimal access patterns for software developed halfway around the world.
> Anthony clearly didn't work on its contents, because he didn't already have the software to access that server on his computer.
That doesn't follow. I work on a source code repository every day, but I'd need special software to exfiltrate a copy. Same with the various design documents and things I work with—all stored in a private cloud. If I wanted to exfiltrate it I'd get a script to do it automatically.
Remember, this wasn't just the guy who started the autonomous driver project. He wasn't just the "department head". He's an industrial engineer who founded a Lidar startup. The idea that he should be denied access to Lidar design documents is patently absurd.
Apple implements very restrictive internal secrecy, yet leaks haven't been stopped. So what's the point? Live in a police state, still get leaks, or live with freedom and benefits, and get leaks. I'll take the latter.
There's a huge difference between security of user data and security of company secrets. Having access to the latter is not unreasonable; access to the former should be heavily restricted and closely monitored.
There's a huge difference between employees being able to see our source repository, and employees being able to see user data or secrets. Generally, Google values its openness internally, and most employees can see the source code of most projects. In fact, that's how large scale refactorings and bug fixes work: I find a bug, I might have to fix the library in question, but also find all callers in the entire code base and fix them as well.
In general, when I develop, I virtually have all of Google's code base 'checked out' and can edit any of millions of files in my snapshot of the world. I don't need to check out multiple silo'ed repositories or beg for access, diving into any code in the universe has almost zero transactional overhead, it's all mapped into one giant filesystem. (https://plus.google.com/+MattUebel/posts/4dQBDF5CmdX)
On the other hand, production systems are heavily walled off from the corporate network. For all intents and purposes, the corp network your desktop is plugged into is "untrusted."
Chances are Google shares patent royalties with companies that help them build the technology they use, so the supplier was likely ensuring their continued royalties by informing on the infringement.
It shouldn't be hard. On Box, you can have it notify you every time someone downloads any file in a given directory or even specific files. I assume they have at least comparable infosec, if not explicit tripwires/honeypots in files that should never be downloaded.
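The kind of per-file tripwire described above is genuinely simple to build on top of any audit feed. A minimal sketch, assuming download events can be reduced to (user, path) pairs; the watched-prefix list and function name here are hypothetical, not Box's actual API:

```python
def tripwire_hits(events, watched_prefixes=("designs/lidar/",)):
    """Return the (user, path) download events that touch a watched
    directory, i.e. the per-file alerting a service like Box exposes.
    `events` is an iterable of (user, path) pairs from an audit feed."""
    return [(user, path) for user, path in events
            if any(path.startswith(prefix) for prefix in watched_prefixes)]
```

In practice the hits would feed an alerting channel (email, pager) rather than be returned as a list, but the matching logic is this trivial.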
I'm sure it doesn't sit well with Google that a group of guys left and started a self-driving company that was very quickly acquired for a ton of money. Most likely Google wants a piece of the action, and arguably deserves one if its IP helped the company build its technology.
> We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.
Exactly! Btw, the specific term of art for "actual things" that are subject to theft is "rivalrous" (adj): a good whose consumption by one consumer prevents simultaneous consumption by other consumers.
You can commit theft of a rivalrous good, like an apple or a computer. You cannot, however, do the same to a file or a song or an idea. I wish more people understood this difference. Society would be better for it.
I take it as axiomatic that understanding the truth is better than believing in a falsehood. The latter almost always leads to gaps that can be exploited for potentially nefarious purposes.
In this particular case, the reason that theft is wrong (because it deprives another person) is very different than the reason that copyright infringement is illegal (because the founders wanted to encourage invention by granting limited monopolies). If people believe that copyright infringement is theft, however, then powerful corporations like Disney can convince them that copyrights should last indefinitely.
Well then I guess there exists a difference in ethics. Theft involves deprivation of the specific thing taken. So deleting the original data after copying it might constitute theft.
Not necessarily. Theft is simply taking something you shouldn't. That includes taking by duplication. Or perhaps you'd like to think it as the deprivation of potential profits.
There's no way in hell an employee contract would ever let someone copy documents and use them after leaving the company. The intellectual property belongs to Google, not the individual creators and certainly not to any other employee.
I had an interview there where the manager asked me to leave my laptop behind and go for a walk. I was hesitant after hearing stories of Uber conducting electronic espionage against its competitors. They could easily bypass MacBook security with a USB device (I had heard of that on HN too), so I was very nervous about leaving my laptop behind, and noted its exact orientation and position on the table. Sure enough, when I returned, my laptop had changed both position and orientation, but only by enough to notice if you had specifically memorized it. I could be paranoid. They could have simply moved things on the desk. But anyway, people who are paranoid like me are advised not to take their laptops into Uber interviews. They are capable of just about anything, or so thinks my now paranoid self.
> I could be paranoid. They could have simply moved things on the desk. But anyway, people who are paranoid like me are advised not to take their laptops into Uber interviews. They are capable of just about anything, or so thinks my now paranoid self.
I would be cautious about discrediting intuition as paranoia.
I have been asked to bring my laptop to an interview so that I could code in a development environment I am already comfortable with. I not only found it reasonable but also appreciated it.
Memory forensics would have 100% given you evidence that it was accessed and exactly what was done. There are plenty of guides on how to forensically dump memory but that depends on the OS.
But I'm also in the "this is FUD" camp. I highly doubt any company would do something as risky as mirroring your machine's data merely for an interview. This would be highly illegal and damage the company's reputation for a relatively minor benefit.
Especially when interviewing a software dev, where any infosec person would assume they're dealing with a sophisticated target and a high chance of being detected, meaning they would have to use very careful cloning techniques.
It would be much easier to get you on staff on a company machine and temporarily monitor you closely after the fact. There are plenty of ways to thoroughly vet someone beforehand without taking such a risk.
I think you might be right, but I had reason to worry given Uber's history with competitors, esp. industrial espionage. They were actively competing with the company I worked for at the time. The laptop was my own, but I have no reason to assume that they knew that. I did have some material of my own that I considered to be valuable to their business, so I was concerned primarily about that (I had done things that pushed the envelope quite a bit in a very specialized area that they were just getting into). All of those factors made me paranoid. If Uber was a good corporate citizen with no history of actively and illegally spying on competitors, I would not have been so freaked out after that episode. Call me paranoid... what's new.
I've never felt the need to bring my own laptop to a job interview. If the company asked me to bring my own, I'd consider that strange or even suspicious.
It'd be really weird for them to request you to bring a laptop, but sometimes when a company flies you out for an onsite and your return flight is that night, your options are either 1) leave your laptop with the hotel as luggage (pretty bad) or 2) bring your laptop to the interview. There's no real reason for them to know you have your laptop with you however.
That is weird. What kind of tech company doesn't have an old laptop or a Chromebook for such interview format? What if you don't have a personal laptop? I'm certainly not bringing my work laptop to a job interview...
Recruiter A to recruiter B: Well, I hoped this one would be sharp enough; guess we need to search harder for people who won't be careless with our sensitive company stuff lying around simply because someone told them to leave their laptop behind and take a walk...
It's not possible unless you have zero-days against the USB drivers or firmware on your laptop, in which case being logged in or not doesn't really matter.
Proof I'm paranoid. My greater point, however, is that they have done some really shady stuff, and stealing competitor's IP is part of their culture, so their behavior itself promotes and justifies paranoia on my part and on the part of anyone looking to work for or do business with Uber.
I think it's pretty stupid to consider any consumer device to be secure enough. I did hear on HN some time before that interview that some USB device can be used to bypass the lock screen, which was the basis for my worrying. Now, some are saying in this thread that it is possible (or at least was at the time) while others saying that it is not (and was not) -- Even an educated sample of tech folks cannot make up their mind, so there is (or at least was) room for justified concern... no?
I think the consensus is that it's possible, but expensive. So like, nation-state espionage yes, corporate espionage no. But anyone who actually knows anything won't be talking about it on HN ;)
I can install hardware backdoor in a macbook in under 3 minutes (with prep obviously).
Even you could do it https://www.youtube.com/watch?v=qGPGOoJn54E hint: there are internal USB buses in the MacBook; you can hijack one for something like a Rubber Ducky plus management circuitry that triggers only when the laptop has been powered on for a while but not touched (no IMU/keyboard/touch events, dimmed screen).
A really critical thing that hasn't got much attention is that shortly before leaving Waymo, Levandowski had a meeting with senior Uber execs(!). The day after the meeting, he formed 280 Systems which became Otto.
The implication in the filing is that Uber planned this with Levandowski, and he only created Otto as a plausible corporate vehicle for developing the LiDAR technology before Uber acquired them. Given what we know about Uber and the assertions in the complaint, this sounds entirely plausible, maybe even likely.
In related news, Tesla is accusing ex-autopilot director Sterling Anderson of stealing code from Tesla before starting up Aurora with Chris Urmson (the former CTO of Alphabet's self driving car program):
> searched for and installed specialized software onto his company-issued laptop
That could mean he downloaded an SFTP client like Cyberduck. He could have searched the internet for a client and then installed it. It doesn't say he did not have auth.
Imagine a Google security engineer being deposed for this lawsuit.
Lawyer: "Show me on the MacBook how he downloaded the files"
Engineer: "Well, he used Cyberduck"
Lawyer: "Is that part of the Mac?"
Engineer: "No, he'd have to download it separately"
Lawyer: "So, he searched for and installed specialized software onto his company-issued laptop?"
Engineer: "Um, sure"
Lawyer: "Thank you, that's all the questions I had"
> That could mean he downloaded an SFTP client like Cyberduck. He could have searched the internet for a client and then installed it. It doesn't say he did not have auth.
They weren't trying to claim he hacked in. They're making the case he went out of his way to get his hands on these documents, and building a timeline that suggests why he went to that trouble.
It could also be an e-discovery tool like Nuix. If you want to find all documents containing X amongst millions of docs, that's the sort of tool I would use (I work in computer forensics).
*edit: And 9.7 GB of data, assuming it's docs and not "just" a lot of CAD, is a lot of docs..
This is true, but it doesn't negate the issue. If you need some sort of client to gain access to the design server, and he didn't already have it, it very likely means that he didn't work on those files normally, and hence, really probably shouldn't have had access to them.
He may not have installed "hacking tools" or anything like that, but he did specifically take action to access files he didn't normally use as part of his job. Which is, I think, all that this post is claiming.
Interesting. I vividly remember a commenter here on a thread about Uber's acquisition of Otto. The user said based on the timeline and filings, it seemed like Otto hadn't really accomplished anything yet, and was probably founded purely to be acquired by Uber. I wonder if there's even more here...
Does anyone else remember this New Yorker profile [1] of Anthony Levandowski and self driving cars? Way back from 2013, when this tech was still novel. Google let Levandowski run the show for this piece -- his name is mentioned 57 times in the article. Goes to show how important and trusted he was in Google's universe.
Sure. I met him when he was still a student at UC Berkeley. He was the one who built the self-driving, self-balancing motorcycle for the 2005 DARPA Grand Challenge. It didn't navigate that well or get all that far, but it was really cool.
Maybe I have a selective memory as a former Zynga employee, but these "stolen documents" lawsuits at high-profile tech companies have generally turned out to be pretty factual. Easy to prove, and hard to fake.
Even with logging off, consider: journaling file systems, "user assist" keys, device connection logs, prefetch (or your OS's equivalent). These are all huge tranches of data if you are looking at a system shortly after an event. Ask me 6 months later, probably not. Give me a system that hasn't even rebooted, has a heap of RAM, and hasn't been used much since, and 6 weeks is fine: not ideal, but doable.
Journaling filesystems don't "journal" all the activity in perpetuity. They typically just journal the changes until they're committed to disk, usually for less than a second.
I always was incredibly surprised at how quickly Uber had working self-driving cars (with the required, highly specialized hardware). Guess this explains it.
Who knows, maybe somebody else from the Google self-driving car project stole self-driving secrets even earlier. Based on what this Levandowski guy did, the industrial espionage may have gone unnoticed. I'm wondering if Waymo will require Uber to reveal schematics of their self-driving car project as part of the lawsuit.
I have doubts what was stolen was actually any of the secret sauce. An interface board for a lidar unit is probably one of the most simple things on the list.
The actual self driving software, and more importantly, all of the collected data from the waymo fleet would have been the key.
Not so much an interface board as a whole new tested design for a LIDAR unit, including a unique patented optics setup and laser driver circuit, according to the complaint. Also testing, manufacturing, and characterization procedures and results and information on suppliers for the parts required. The PCB was just the component whose accidental disclosure led them to conclude that Uber and Otto were using the stolen design. Since the PCB apparently dictates the position and orientation of the laser diodes and sensors, presumably it would only be useful if they copied the whole thing.
Interesting. Is the specific Lidar unit really that big a differentiator? I understand they aren't cheap or simple, but it seems odd that each self driving car company would want to design their own. I would guess you would rather have some healthy ecosystem of suppliers...Velodyne, etc.
> it seems odd that each self driving car company would want to design their own
They don't want to do this, so, if they do, it's because they had to do this.
Practically all of the current sensor suites are expensive, bulky, and power hungry. If you want them on lots of cars, you need to reduce all 3 of those characteristics dramatically.
Frankly, I wish this "patented" (does that word mean anything at all these days ?) tech gets into China, and gives all us hackers cheap Lidars to play with.
Uber launched their beta in Pittsburgh just before Otto acquisition. This doesn't explain it. It explains how they quickly wanted to build their own lidar.
What kind of employee would download 14K files to a personal drive right before quitting? It is trivially easy to watch what files get copied over to external drives.
I think you can follow the money trail here and find some answers for sure. Now if Uber/Otto has a clause that prohibits employees from bringing in confidential data from previous companies, how can they be held liable? Does Google have to prove that those stolen documents were actually used in Uber designs?
A supplier of Google received the file from Uber and that supplier forwarded it to Google. This means the file was sent out by Uber to a supplier to try to get parts made. I think that's proof enough.
Btw that's 1 very sharp eyed engineer, whoever that is...
>Waymo was recently – and apparently inadvertently – copied on an email from one of its LiDAR component vendors.
Is this going to be a legal test of that annoying lawyer email footer language?
>This message contains information from xxxxxx that may be confidential and privileged. If you are not an intended recipient, please refrain from any disclosure, copying, distribution, or use of this information and note that such actions are prohibited. If you have received this information in error, please notify the sender immediately by telephone or by replying to this transmission.
Ha! More legalese BS that never holds up.
> Otto launched publicly in May 2016, and was quickly acquired by Uber in August 2016 for $680 million.
The fact pattern here is going to be absolutely brutal for Uber. A non-technical judge is going to see the allegation: ex-google employee downloads technical documents in December 2015, launches a company 5 months later in May 2016, and is bought for $680M (later speculated to be $1B+) for all its technical accomplishments. How much fundamental research did they do in the 3 months between May-16 and August-16?!?!? Or was it just to buy the stolen IP that google had developed over 7 years?!? Brutal for Uber!
--
A public company recently settled a similar lawsuit (competitor hires exec, exec is proven to have downloaded documents) for $130M on much smaller numbers. And the defendant was run through the legal wringer first.
Presumably you're in Arizona at the moment, Mr. Levandowski, it's close to the border, run for it!
We'll take a moment to remember the salad days, when you were just a crazy college kid who showed up at the Darpa Grand Challenge with a self driving motorcycle:
When your company stores very private info on billions of people, and is actively attacked (sometimes successfully) by the top intelligence agencies of the world[1][2], you have to be extremely careful, and monitor everything.
Economist Joseph Stiglitz wrote in 2009 "...banks that are too big to fail are too big to exist..."
My theory is that "too big to fail is too big to exist" is now true for basically all the tech giants. Everyone who knows the kind of tracking these companies do (internal and external) agrees with this, except those who benefit from the companies' continued existence, e.g. employees, investors, shareholders.
On the other hand, imagine if the data collection never stops and one of the big companies gets hacked, or faces a serious competitive threat making it more likely to sell its data, starts going out of business, or needs to cooperate by sharing its data in return for government favors, or needs to share data to get access to foreign markets etc. I have a feeling this venerable "consumer" is going to learn a painful lesson one of these days.
I think it's reasonable to be suspicious but what they described sounds mostly feasible without extra steps of tracking.
What evidence did they present and how could it be tracked?
- downloaded 9.7 GB of waymo data -> server logs of what files were accessed and downloaded by which user
- searched for special software -> he used Google while logged into a work account, so they just looked up that work account's search history
- Connected external hard drive and wiped data -> short of automatic backups, this suggests there is software explicitly for tracking when data is copied, where, and how much
Most of this, besides the external hard drive part, can be done by any employer who owns your work Gmail account. What really should be alarming is how easy it is for your employer to get lots of data on you even if they aren't some tech giant.
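The server-log item in the list above is the easiest of the three to automate. A minimal sketch of a bulk-download heuristic, assuming access-log entries can be reduced to (user, path, size_bytes) tuples; the thresholds and function name are made up for illustration:

```python
from collections import defaultdict

def flag_bulk_downloads(events, byte_threshold=1_000_000_000, file_threshold=1000):
    """events: iterable of (user, path, size_bytes) access-log entries.
    Returns the set of users whose total download volume or file count
    exceeds the given thresholds, a crude exfiltration heuristic."""
    totals = defaultdict(lambda: [0, 0])  # user -> [bytes, files]
    for user, _path, size in events:
        totals[user][0] += size
        totals[user][1] += 1
    return {user for user, (nbytes, nfiles) in totals.items()
            if nbytes >= byte_threshold or nfiles >= file_threshold}
```

A real system would window this over time and baseline it per user, but even a blunt threshold like this would light up on someone pulling 9.7 GB and 14,000 files from a server they don't normally touch.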
> Most of this, besides the external hard drive part, can be done by any employer who owns your work Gmail account.
Actually, I think that would be the easiest, as I would assume any external USB device connected to a computer would automatically send an alert to the security team, given how easily such devices can infect a computer with malware. I'm not sure my company has something like that, but we have posters everywhere telling people never to plug external USB devices into our computers, so I would not be surprised.
>What really should be alarming is how easy it is for your employer to get lots of data on you even if they aren't some tech giant.
Right? So much for the Principle of least privilege.
Not only is it very well known internally, Google has even open sourced some of the tools that are used for that purpose: https://github.com/google/grr
Still, how can they work in a basically zero trust environment? They can't hope anyone reasonable will come up with some great idea and willingly share it with them.
That was exactly my point. Patents are definitionally impossible to steal (unless, I guess, you somehow are able to get access to the patent database and change the patent holder?)
That's not a very interesting patent. It's basically some optical stuff to do LIDAR scanning with non-square pixels, presumably because you want more resolution in one axis than the other. That's nice, but not a big improvement over existing technology.
There's something very wrong in the world when the people who invent things aren't the main beneficiary of their own inventions.
Edit: A guy downloads 9.7 GB of other people's work, walks off with it, and sells it, flushing years of work from hundreds of engineers down the toilet. You downvoters really support that? Amazing.
They are probably misinterpreting your comment; at first I thought you meant Levandowski was the one who invented the LIDAR components, so he should be financially rewarded outside the normal Google comp system (by doing this whole steal-then-get-acquired strategy)
"There's something very wrong in the world when the people who invent things aren't the main beneficiary of their own inventions"
Anyone can be the primary beneficiary of their own invention, if they want to pay their own salary, build their own products, hire their own lawyers, sell their wares, etc.
If you want to collect $150K/year while you do R&D for Google, which may produce not that much, while you leverage all of their knowledge, investment, user-base tools etc - well, then you fork over the rights to Google.
'Inventing' something is not just some guy coming up with an idea, and then making that 'invention' worth something is usually harder than inventing it in the first place.
What's absolutely hilarious is that a lot of the self-driving tech was originally funded by the US, i.e. American citizens, and Google simply used it and developed it further. Maybe every American should get shares in Waymo!
Google might have been better off with patents than trade secrets. There are financial penalties for theft of trade secrets, but once the secret is out, no injunctions. The one who stole it can use it. With patents, injunctions are available, although hard to get.
Anyway, several companies are developing automotive LIDAR units which are better than Google's rotating things. Quanergy and Velodyne claim to be close to low-cost solid-state LIDARs, and ASC has good ones now at a high price point. (An ASC unit just docked the Dragon spacecraft with the ISS.) By the time this gets to court, Google's secret technology will be obsolete.
The question is whether Uber will defend Levandowski or leave him to twist slowly, slowly in the wind and go to jail.
There's a lot to be done at the semiconductor level for solid state LIDAR. The ASC units work great, but the sensor requires an InGaAs fab, like night vision sensors, to get good light sensitivity and thus range. Others are talking about getting good performance with a sensor that can be made in a CMOS fab, but nobody is shipping yet. This is an area where Waymo has a strong interest, even if they're not making the sensors themselves.
The board isn't the interesting bit, the board just shows that Otto was using Waymo's design. This also isn't what they're suing over, this was just the tip off they needed to figure out that the ex-employee stole data on the way out the door.
It looks like Bloomberg updated the title to "Alphabet's Waymo Alleges Uber Stole Self-Driving Secrets", which makes more sense. We should change the title here.
GV generally operates at arm's length from Google itself; in fact, when I've pitched them they specifically say as much. They'll make investments that are broadly supportive of the Google mission, and sometimes that investment is a hedge, I would imagine.
To add on to this, the existence of other Google investing arms makes me more inclined to believe the claim that Google Ventures is firewalled from Google corporate; it seems Google needs Google Capital (and Google corp, which also invests directly sometimes) because they can't tell Google Ventures to invest in certain companies even if they would like to for strategic reasons.
Despite the damning alleged evidence, I get the feeling that the offenders knew ahead of time that they would be found out, evaluated the risk-reward trade-off, and decided that they could somehow get away with it.
Are there any lawyers here who could make an educated guess how they could?
> Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.
Doesn't sound plausible. At a minimum, this would have to be the "dumbed down" version of how they uncovered this.
* Levandowski first registered the domain for his company (now Otto) in Nov '15.
* The suit says that on Dec 3rd '15 he searched for the LIDAR docs, and on Dec 11th '15 he downloaded 14,000 docs from Google's servers.
* Google alleges that in Jan '16, Levandowski told his colleagues he planned to replicate the Waymo tech at one of Waymo's competitors.
* One of the damning allegations from Waymo is that he met with top execs at Uber at their HQ in SF on Jan 14th, 2016.
* Just a day later, on the 15th, he officially formed one of his companies (280 Systems, now part of Otto); later, on Feb 1st, he also registered his other company (Otto Trucking).
* Strangely, after working at Google for about 7 years, he quit Google without notice (per the suit) on Jan 27th.
This is from the interview Bloomberg did after Uber acquired Otto: 'Kalanick began courting Levandowski this spring, broaching the possibility of an acquisition during a series of 10-mile night walks from the Soma neighborhood where Uber is also headquartered to the Golden Gate Bridge. The two men would leave their offices separately—to avoid being seen by employees, the press, or competitors. They’d grab takeout food, then rendezvous near the city’s Ferry Building. Levandowski says he saw a union as a way to bring the company’s trucks to market faster.'
From the above details, any of these three scenarios might have happened:
* Scenario 1: He or Uber didn't do anything different from the official story so far.
* Scenario 2: Levandowski went to Uber saying he has custom LIDAR tech, but ended up starting his own company the next day, and 8 months later Uber just bought them for $680M for the team and the tech he allegedly stole from Waymo.
* Scenario 3: Levandowski went to Uber in Jan '16 and said he has the tech for custom LIDAR. Uber wants it, but there is no non-suspicious way to take the tech directly to Uber, since Levandowski alone can't build it. Instead, Uber suggests he spin off his own company, hire a team (mostly from Waymo), and put together a demo in the Nevada desert. This brings in all the press and the credibility that Otto has self-driving tech and a team. At this point, Otto and Levandowski are a self-driving-tech startup, not a LIDAR startup. Now Uber can come in and acquire this hot startup and team, in a market that's worth trillions. Uber is suddenly in the trucking business and gets a huge PR and valuation bump. In the process, they also get the LIDAR tech that was built in just 9 months.
What this means is that if the 3rd theory is true, Uber was effectively buying the LIDAR tech from Levandowski even before he left Waymo. Otto and the other components were just a proxy, giving them a great story without raising suspicion.
To put things into perspective: a single Velodyne HDL-64E LIDAR, which almost all self-driving companies use, costs around $75,000. Waymo says their equivalent custom alternative costs less than 10% of that (<$7,000). This is a huge cost saving for a tech that is going to go into the 100,000+ cars Uber hopes to have on the market in the future. So yeah, this could be a bullshit lawsuit (based on the evidence, less likely) or a well-executed piece of corporate espionage!
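A back-of-the-envelope calculation shows why the LIDAR cost matters so much. This sketch uses the figures quoted above; the 100,000-car fleet size is the commenter's own speculation, not a confirmed Uber plan.

```python
# Rough cost comparison based on the figures quoted above.
# The 100,000-car fleet size is speculative, not a confirmed Uber plan.
velodyne_unit_cost = 75_000   # Velodyne HDL-64E, approx. USD
waymo_unit_cost = 7_000       # Waymo's claimed custom alternative (<10%)
fleet_size = 100_000          # hypothetical future fleet

savings_per_car = velodyne_unit_cost - waymo_unit_cost
total_savings = savings_per_car * fleet_size
print(f"Savings per car:    ${savings_per_car:,}")   # $68,000
print(f"Fleet-wide savings: ${total_savings:,}")     # $6,800,000,000
```

At fleet scale, a roughly $68K-per-car difference adds up to billions, which is why a cheap in-house LIDAR design is such a valuable trade secret.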
"I know it's hard to believe, Travis, but while I worked for a company doing the exact the same thing as this company, at nights, I singlehandedly created this trove of patentable technology that will revolutionize the automobile industry, which coincidently, really, my former employer is spending billions to do. You have to trust me."
Is the HN headline correct? The bloomberg article says "trade secrets" which are very different from patents, the video also says that this is not primarily a patent case.
Lastly, you have to ssh into a workstation to access any files, and you aren't allowed to have files downloaded locally for this exact reason. He would have had to go through some trouble to even obtain these files locally, I would imagine.
For eng workstations, virtually all are Goobuntu unless you have a reallllly good reason (e.g. CAD software for mech e's, or work on Windows Chrome).
For laptops, they're essentially just used as a dumb web terminal + ssh. You can get mac, goobuntu, or maybe even Windows. Since you aren't allowed to have source code on laptops (except certain special exceptions), it's not as big of a concern.
Workstations mount what's called citc (clients in the cloud), and you can use whatever IDE or editor you like. Code is never downloaded directly to the desktop. You don't really notice any performance issues with this, and you get the benefit of being able to build against or include any library without having to download it all.
Laptops can't mount citc, so you need to either SSH/RDP into a workstation or use a web-based code editor.
Once you've used citc/Piper/blaze you'll find it's a great system and there is no good alternative... Yet
As for OS choice, I believe only Goobuntu can be used to modify Google's code repo.
My understanding is that Google has users on Linux, Mac, and Windows, and has resources for managing all of them. As the other commenter points out, Goobuntu is a custom distro of Ubuntu that Googlers use. I do recall hearing people tended to need a justification for needing Windows, but I'm pretty sure OS X users are common.
The Medium article also notes that a couple more employees stole confidential information. Five years from now, Waymo employees will be bitching and moaning about corporate overreach, and they'll have these fucktards to thank (if the allegations are proven).
Yeah, that's wrong, being an investor in a company doesn't make you liable for their actions.
It does mean that Alphabet stands to lose something if they win the lawsuit, though whatever stake they have in Uber is obviously negligible compared to Waymo itself.
My guess would be trade secrets, since patents are public information, and afaik the technology isn't currently being sold, so this wouldn't be subject to patent-infringement claims. Not a lawyer, though, so I could be wrong.
Don't worry bud and thanks for your reply!! I knew there was a 50% chance that I was severely confused and I appreciate you taking the time to clarify.
This is a little idiotic. Alphabet let all their talent walk out the door. I'm assuming mostly because of idiotic management and now they're suing. They're basically losing on all sides.
> According to a lawsuit filed today in federal court in California, Waymo accuses Anthony Levandowski, an engineer who left Google to found Otto and now serves as a top ranking Uber executive, stole 14,000 highly confidential documents from Google before departing to start his own company. Among the documents were schematics of a circuit board and details about radar and LIDAR technology, Waymo says
> The lawsuit claims that a team of ex-Google engineers used critical technology, including the Lidar laser sensors, in the autonomous trucking startup they founded, and which Uber later acquired
I was confused as to what stealing a patent actually meant:)
Waymo has also posted this....
https://medium.com/@waymo/a-note-on-our-lawsuit-against-otto...
From this post...
> Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.
> We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.
Ooops, that does sound bad after a first read.