While working with Rust and WebAssembly, particularly when compiling complex crates with wasm-pack, you might encounter an unusual error: `locals exceed maximum`. This blog post explores the issue and proposes a couple of solutions, each with its own merits.
The error typically appears as follows:
```text
error: failed to parse input file as wasm. Caused by: locals exceed maximum (at offset 69420)
```
This message indicates that the compiled .wasm binary contains a function with more than 50,000 local variables, exceeding the limit hard-coded in the wasm parser.
If you would like to find the function that triggers this error, first convert the offset from the error message to hex:

```sh
printf "%x\n" 69420
```

For 69420 this prints `10f2c`, which is the value we will search for in the disassembly.
Next, disassemble the .wasm file with wasm-objdump and write the output to a file:

```sh
wasm-objdump -d target/wasm32-unknown-unknown/debug/test.wasm > objdump
```
Finally, get the function name with the following awk one-liner (alternatively, you can open the dump and scroll until you find the offset):

```sh
awk '{ buffer[NR] = $0 } /10f2c/ { for (i = NR - 1; i > 0; i--) { if (buffer[i] ~ /func\[/) { print buffer[i]; break } } }' objdump
```
One effective approach is to modify the optimization level in the Cargo.toml file. This can reduce the number of local variables generated during compilation. Here's how you can do it:

```toml
[profile.dev]
opt-level = 1
incremental = true
```
Setting `opt-level = 1` enables basic optimizations that can significantly reduce the number of local variables. The `incremental` option keeps subsequent builds fast.
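Note that `[profile.dev]` only affects debug builds; with wasm-pack, those are the builds produced by the `--dev` flag, while the default build goes through `[profile.release]` and its own `opt-level`. A quick reference, assuming a typical wasm-pack setup:

```sh
# Debug build: uses [profile.dev], which is where the opt-level tweak above applies.
wasm-pack build --dev

# Default build: uses [profile.release], which has its own opt-level setting.
wasm-pack build
```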
An alternative solution, though more complex, involves manually implementing the serialization and deserialization logic instead of relying on Serde's derive macros. This approach gives you finer control over the code structure and can help reduce the number of local variables. Serde uses Rust's meta-programming capabilities (macros) to generate serialization code automatically. The generated code is designed to work for a broad range of scenarios, which can be overkill for simple structures, potentially introducing many local variables along with unnecessary complexity and overhead.
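If you want to see exactly what the derive macros produce for your types, the third-party cargo-expand subcommand can print the expanded code, including the generated Serialize/Deserialize impls (an optional step, not required for the fix):

```sh
# Install the subcommand once (it is a separate tool, not part of stock cargo).
cargo install cargo-expand

# Print the macro-expanded source of the current crate.
cargo expand
```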
Consider a struct MyData:
```rust
struct MyData {
    field1: i32,
    field2: String,
    // ... more fields ...
    field42069: usize,
}
```
A manual serialization function might look like this:
```rust
impl Serialize for MyData {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        // Handle each field explicitly
    }
}
```
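To make this concrete, here is a minimal sketch of what the body could look like, assuming a cut-down MyData with only the first two fields (serialize_struct and SerializeStruct are the standard Serde APIs for this):

```rust
use serde::ser::{Serialize, SerializeStruct, Serializer};

// Hypothetical cut-down version of MyData with just the two fields shown above.
struct MyData {
    field1: i32,
    field2: String,
}

impl Serialize for MyData {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        // "MyData" is the struct name, 2 is the number of fields we will emit.
        let mut state = serializer.serialize_struct("MyData", 2)?;
        state.serialize_field("field1", &self.field1)?;
        state.serialize_field("field2", &self.field2)?;
        state.end()
    }
}
```

Writing the fields out one by one keeps each statement small, instead of relying on whatever shape the derive macro would generate.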
Deserialization can be more challenging:
```rust
impl<'de> Deserialize<'de> for MyData {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        enum Field { Field1, Field2, /* ... */ Field42069 }

        impl<'de> Deserialize<'de> for Field {
            fn deserialize<D>(deserializer: D) -> Result<Field, D::Error>
            where
                D: Deserializer<'de>,
            {
                struct FieldVisitor;

                impl<'de> Visitor<'de> for FieldVisitor {
                    type Value = Field;
                    // Parse the incoming value and return the matching variant
                    // ...
                }

                // Drive the visitor, then do the same for MyData itself
                // ...
            }
        }
    }
}
```
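For comparison, here is a complete (if simplified) manual Deserialize impl for the same hypothetical two-field version of MyData used in the serialization sketch. It matches field names as plain strings instead of going through a dedicated Field enum, which keeps the sketch short, and it only handles map-based formats such as JSON (a fully general impl would add visit_seq as well):

```rust
use serde::de::{self, Deserialize, Deserializer, MapAccess, Visitor};
use std::fmt;

// Same hypothetical two-field struct as in the serialization sketch.
struct MyData {
    field1: i32,
    field2: String,
}

impl<'de> Deserialize<'de> for MyData {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        struct MyDataVisitor;

        impl<'de> Visitor<'de> for MyDataVisitor {
            type Value = MyData;

            fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
                f.write_str("struct MyData")
            }

            fn visit_map<A>(self, mut map: A) -> Result<MyData, A::Error>
            where
                A: MapAccess<'de>,
            {
                let mut field1 = None;
                let mut field2 = None;

                // Match field names as plain strings to keep the sketch short;
                // serde's derive would generate a dedicated Field enum instead.
                while let Some(key) = map.next_key::<String>()? {
                    match key.as_str() {
                        "field1" => field1 = Some(map.next_value()?),
                        "field2" => field2 = Some(map.next_value()?),
                        other => {
                            return Err(de::Error::unknown_field(other, &["field1", "field2"]));
                        }
                    }
                }

                Ok(MyData {
                    field1: field1.ok_or_else(|| de::Error::missing_field("field1"))?,
                    field2: field2.ok_or_else(|| de::Error::missing_field("field2"))?,
                })
            }
        }

        deserializer.deserialize_struct("MyData", &["field1", "field2"], MyDataVisitor)
    }
}
```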
If you have a struct with a lot of fields, consider splitting it into smaller structs. This can help manage the complexity and reduce the number of local variables in a single function.
Instead of one big struct:
```rust
struct BigData {
    field1: i32,
    // ... many other fields ...
}
```
use several smaller ones:

```rust
struct DataPart1 {
    field1: i32,
    // ... fewer fields ...
}

struct DataPart2 {
    // ... remaining fields ...
}
```
This makes your code cleaner and may help with the too-many-variables issue.
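If the pieces still need to travel together, one option (a sketch, assuming nesting fits your data model) is to wrap them in a small parent struct. Each struct gets its own generated impl, so the work is spread across several functions instead of one:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct DataPart1 {
    field1: i32,
    // ... fewer fields ...
}

#[derive(Serialize, Deserialize)]
struct DataPart2 {
    // ... remaining fields ...
    field2: String, // hypothetical field so the example is not empty
}

// The wrapper's generated code only delegates to the parts,
// so no single function has to deal with every field at once.
#[derive(Serialize, Deserialize)]
struct BigData {
    part1: DataPart1,
    part2: DataPart2,
}
```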
Optimizing `Cargo.toml` is simpler and doesn't add extra maintenance overhead, but it might not always be sufficient for complex cases. Manual serialization/deserialization provides more control and potentially better performance, at the cost of increased complexity and more room for bugs.
When encountering the `locals exceed maximum` error in Rust-WebAssembly projects, consider adjusting the `Cargo.toml` optimization level as a first step. For more complex cases, or when looking for more efficient code, manually implementing serialization/deserialization might be a viable alternative, though it requires careful consideration due to its complexity.