Cost of Sandboxing Prompts Shift to Memory-Safe Languages. A Little Too Late?


NEWS ANALYSIS: Google’s decision to promote Rust for low-level Android programming is another sign that the shelf-life of memory corruption mitigations is no match for the speed of in-the-wild exploit development.

Just 13 years after Google introduced the sandbox in Chrome touting “a new approach in browser security,” the company is now blaming the limitations — and high processing cost — of sandboxing for a new decision to promote Rust as the low-level programming language of choice for the Android operating system.

The decision to promote Rust over C and C++ isn’t exactly a surprise, but the language used in Google’s announcement signals a sad end to the sandbox as an effective anti-exploit mitigation for a vexing problem that has haunted software engineering since the mid-1990s.

“Sandboxing is expensive,” says Jeff Vander Stoep, a member of Google’s Android team. “Sandboxing doesn’t eliminate vulnerabilities from the code and its efficacy is reduced by high bug density, allowing attackers to chain multiple vulnerabilities together,” he added.

He said Google is turning to memory-safe languages like Rust to help overcome these “limitations” by lowering the density of bugs in the Android code and increasing the effectiveness of the existing sandbox.

More importantly, Vander Stoep says the move reduces Google’s need for sandboxing and enables the introduction of new software features that are both safer and lighter on resources.

HIGH SEVERITY 

He said memory safety problems in C and C++ continue to be the “most-difficult-to-address” issue in modern software engineering and noted that the Android team — like the rest of the tech industry — spent heavily to mitigate a class of vulnerabilities without much success.

“In spite of these efforts [to detect, fix and mitigate these issues], memory safety bugs continue to be a top contributor of stability issues, and consistently represent more than 70 percent of Android’s high severity security vulnerabilities,” Vander Stoep added.

The problem isn’t limited to Android. Published data from Google Project Zero shows that in-the-wild zero-day attacks over the last few years have been dominated by buffer overflows and use-after-free memory corruption issues.

Ever since the mid-1990s, the brightest minds in cybersecurity have searched for ways to solve memory corruption issues. These long-term, cross-industry efforts produced many notable mitigations and effectively raised the cost for attackers.

However, these mitigations came with heavy costs, and the shelf-life of things like sandboxing has gotten shorter and shorter as attackers quickly found ways to bypass these roadblocks.

CAT AND MOUSE

“It’s a cat-and-mouse we can’t win,” says an ex-Google software engineer who requested anonymity. “We spend a lot on these mitigations. We spend a lot to address CPU performance hits, we spend a ton more on software development costs and, poof, someone comes up with a bypass and it feels like we’re back to square one.”

“Should we really have spent so much on sandboxing? Should we really have spent so much on all the control-flow integrity mitigations that didn’t really move the bar? I’m not sure this was money and resources well spent,” he said in exasperation.

Despite these frustrations, the sandbox has been an effective mitigation that made it very difficult to exploit software programs.  Prior to the popularity of sandboxing, attackers could simply exploit a single buffer overflow vulnerability to achieve remote code execution, the holy grail of attacks.

Once sandboxes became a key feature in browsers and desktop apps, the cost of a remote code execution exploit was raised. Where a single bug won the Pwn2Own contest 10 years ago, attackers today must chain exploits for multiple vulnerabilities and include an expensive “sandbox escape” to achieve full code execution.

Now, there’s a shift to using memory-safe languages to effectively eliminate memory corruption as a bug class. For example, on the Android OS, managed languages like Java and Kotlin have become the go-to choice for app development because of ease of use, portability, and safety.  In addition, the Android Runtime (ART) manages memory on behalf of the developer.

The Android OS uses Java extensively, effectively protecting large portions of the Android platform from memory corruption bugs. However, Java and Kotlin are not options for lower layers of the operating system, as Google explains:

Lower levels of the OS require systems programming languages like C, C++, and Rust. These languages are designed with control and predictability as goals. They provide access to low level system resources and hardware. They are light on resources and have more predictable performance characteristics.

For C and C++, the developer is responsible for managing memory lifetime. Unfortunately, it’s easy to make mistakes when doing this, especially in complex and multithreaded codebases.

Rust provides memory safety guarantees by using a combination of compile-time checks to enforce object lifetime/ownership and runtime checks to ensure that memory accesses are valid. This safety is achieved while providing equivalent performance to C and C++.
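The contrast Google draws in the two paragraphs above can be illustrated with a minimal sketch (a hypothetical example, not code from the Android platform). The move in the first function is enforced at compile time by Rust’s ownership rules — the commented-out line, which would be a use-after-free pattern in C or C++, simply fails to compile — while the bounds check in the second half shows the runtime side of the guarantee:

```rust
// Ownership: once `v` is moved into `consume`, the caller's binding is dead.
fn consume(v: Vec<i32>) -> i32 {
    v.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];
    let sum = consume(data); // ownership of `data` moves here
    // println!("{:?}", data); // compile error: value used after move
    println!("sum = {}", sum);

    // Runtime check: out-of-bounds access is caught deterministically
    // instead of silently reading adjacent memory, as C/C++ would allow.
    let buf = [10u8, 20, 30];
    match buf.get(5) {
        Some(v) => println!("value: {}", v),
        None => println!("index 5 is out of bounds"),
    }
}
```

The point of the sketch is that both failure modes the article describes — use-after-free and buffer overflow — are converted from exploitable memory corruption into either a compile error or a well-defined runtime outcome.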

Google described the addition of a new language to the Android platform as a large undertaking and warned that there are toolchains and dependencies that need to be maintained, test infrastructure and tooling that must be updated, and developers who need to be trained. “Scaling this to more of the OS is a multi-year project,” the company said.

NO SILVER BULLET

Still, security experts warn that moving to Rust isn’t going to solve all memory lifetime issues.

“Certain patterns just turn them into application logic bugs vs. RCE. An improvement, but no silver bullet,” says Halvar Flake, a security industry pioneer who has worked on memory corruption mitigations.

In a recent podcast interview, Nico Waisman, head of privacy and security at Lyft, also discussed the shift to memory-safe languages and predicted a future of application logic bugs and a continued dependence on C and C++ code in legacy software products.

Memory safety issues will continue to haunt the security landscape for decades to come, but there is optimism that a combination of new technology, especially around memory tagging, and the adoption of safer programming languages points to a brighter future.

But, as history has shown, mitigations have a shelf-life and novel techniques and bypasses are always just a Black Hat presentation away.  The sandbox was effective and it made life more difficult for attackers but, as Google’s decision shows, there’s a hefty price tag and no signs that sandbox-bypass exploits are going away.

The sandbox had a good run, but it’s time to acknowledge this mitigation has had its time in the sun.


Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. Ryan is a journalist and cybersecurity strategist with more than 20 years of experience covering IT security and technology trends. He is a regular speaker at cybersecurity conferences around the world.
Ryan has built security engagement programs at major global brands, including Intel Corp., Bishop Fox and Kaspersky GReAT. He is a co-founder of Threatpost and the global SAS conference series. Ryan’s career as a journalist includes bylines at major technology publications including Ziff Davis eWEEK, CBS Interactive’s ZDNet, PCMag and PC World.
Follow Ryan on Twitter @ryanaraine.
