# Performance
Brane is optimized for low latency and high throughput. Here's what to expect and how to tune for your use case.
## Key Numbers
| Provider | Mode | Throughput | Latency |
|---|---|---|---|
| WebSocket | Sequential | ~8,000 ops/s | 0.12ms |
| WebSocket | Batch (100) | ~110,000 ops/s | - |
| HTTP (Loom) | Batch (100) | ~22,000 ops/s | - |
## WebSocket vs HTTP
### When WebSocket Wins
- **Persistent connection** — no TCP handshake per request
- **Request multiplexing** — multiple in-flight requests on one connection
- **Disruptor batching** — `sendAsyncBatch()` coalesces network writes
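Request multiplexing can be sketched as a pending-request map keyed by request id: every outgoing request registers a future, and responses (which may arrive in any order on the shared connection) complete the matching future. This is an illustrative model only, not Brane's internal implementation; `MultiplexDemo`, `send`, and `onResponse` are hypothetical names.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of request multiplexing over one connection.
public class MultiplexDemo {
    static final AtomicLong nextId = new AtomicLong(1);
    static final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // "Send" a request: register a future under a fresh id.
    // In a real client, a frame {id, method} would be written to the socket here.
    static CompletableFuture<String> send(String method) {
        long id = nextId.getAndIncrement();
        var future = new CompletableFuture<String>();
        pending.put(id, future);
        return future;
    }

    // "Receive" a response: look up the id and complete its future.
    static void onResponse(long id, String result) {
        var future = pending.remove(id);
        if (future != null) future.complete(result);
    }

    public static void main(String[] args) {
        var blockNumber = send("eth_blockNumber"); // id 1
        var gasPrice = send("eth_gasPrice");       // id 2
        // Responses arrive out of order; each still reaches the right caller.
        onResponse(2, "0x3b9aca00");
        onResponse(1, "0x112a880");
        System.out.println(blockNumber.join() + " " + gasPrice.join());
    }
}
```

Because requests are correlated by id rather than by arrival order, a slow response never blocks the ones behind it.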
### When HTTP is Fine
- Serverless / Lambda (no persistent connections)
- Low request volume (< 100 req/s)
- Simple read-only operations
## Optimizing Throughput
### Use Batch Mode
For bulk operations, use `sendAsyncBatch()`:
```java
import java.util.concurrent.CompletableFuture;
import java.util.ArrayList;
import java.util.List;

var futures = new ArrayList<CompletableFuture<?>>();
for (int i = 0; i < 1000; i++) {
    futures.add(provider.sendAsyncBatch("eth_blockNumber", List.of()));
}
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
```

### Tune the Ring Buffer
For very high throughput, increase the ring buffer size:
```java
var config = WebSocketConfig.builder("wss://...")
    .ringBufferSize(16384)      // Default: 4096
    .maxPendingRequests(65536)
    .build();
```

### Use YIELDING Wait Strategy
For latency-critical applications:
```java
var config = WebSocketConfig.builder("wss://...")
    .waitStrategy(WaitStrategyType.YIELDING) // Lower latency, higher CPU
    .build();
```

### Enable Native Transport
Brane auto-detects and uses native transport (kqueue on macOS, epoll on Linux) for optimal performance:
```java
// Native transport is auto-enabled - no configuration needed
var config = WebSocketConfig.builder("wss://...")
    .build(); // Uses kqueue/epoll automatically
```

| Transport | Throughput | Variance |
|---|---|---|
| NIO (Java) | 25,235 ops/s | ±6.6% |
| Native (kqueue/epoll) | 27,918 ops/s | ±2.6% |
| Improvement | +10.6% | -57% stdev |
Native transport provides more consistent performance with significantly lower variance.
### Configure Frame Size for Large Responses
The default 64KB frame limit is configurable via `maxFrameSize()`. For large `eth_getLogs` responses, increase it to 4MB or more:
```java
var config = WebSocketConfig.builder("wss://...")
    .maxFrameSize(4 * 1024 * 1024) // 4MB
    .build();
```

## Optimizing Latency
### Per-Request Timeouts
Set aggressive timeouts for time-sensitive operations:
```java
import java.time.Duration;

// 5 second timeout for gas price (needs to be fresh)
var gasPrice = provider.sendAsync("eth_gasPrice", List.of(),
    Duration.ofSeconds(5)).get();
```

### Keep Connections Warm
Avoid cold-start latency by keeping the WebSocket connection open:
```java
// Create once, reuse for the lifetime of your application
var provider = WebSocketProvider.create("wss://...");
// Don't create/close per request
```

## ABI Encoding Performance
| Operation | Throughput |
|---|---|
| Brane `encodeCall` | ~2,500,000 ops/s |
| web3j `encodeFunction` | ~400,000 ops/s |
| **Brane speedup** | **6.3x** |
Brane's ABI encoder is optimized for zero-allocation encoding in hot paths.
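The zero-allocation idea can be illustrated with a self-contained sketch: write the 4-byte selector and a `long`-backed uint256 argument straight into a caller-supplied buffer, with no `BigInteger` or intermediate arrays. This is a hypothetical model, not Brane's actual encoder; `Uint256Sketch`, `encodeCall`, and the selector bytes are made up for illustration.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of zero-allocation call encoding:
// selector followed by one uint256 argument (long, zero-extended to 32 bytes).
public class Uint256Sketch {
    static void encodeCall(byte[] selector, long arg, ByteBuffer out) {
        out.put(selector);                 // 4-byte function selector
        out.position(out.position() + 24); // 24 high-order zero bytes (fresh buffer is zeroed)
        out.putLong(arg);                  // low 8 bytes, big-endian
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(36); // 4 selector + 32 argument
        byte[] selector = {0x12, 0x34, 0x56, 0x78}; // hypothetical selector
        encodeCall(selector, 42L, buf);
        System.out.println(buf.position()); // 36 bytes written
    }
}
```

The only allocation is the buffer itself, which the caller can reuse across calls; that is what makes the hot path GC-free.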
## Allocation-Conscious APIs
For GC-sensitive applications, Brane provides zero-allocation variants of common operations.
### Hex Encoding/Decoding
| Method | Allocation |
|---|---|
| `Hex.decode()` | 48 B/op (result array) |
| `Hex.decodeTo()` | 0 B/op |
| `Hex.encode()` | 264 B/op (char[] + String) |
| `Hex.encodeTo()` | 0 B/op |
```java
import sh.brane.primitives.Hex;

// Pre-allocate buffers once
byte[] decodeBuffer = new byte[32];
char[] encodeBuffer = new char[66]; // 2 prefix + 64 hex

// Zero-allocation decode
Hex.decodeTo(hexString, 0, hexString.length(), decodeBuffer, 0);

// Zero-allocation encode
Hex.encodeTo(bytes, encodeBuffer, 0, true); // true = include "0x" prefix
```

### ABI Encoding
```java
import sh.brane.core.abi.FastAbiEncoder;
import java.nio.ByteBuffer;

// Pre-allocate buffer
ByteBuffer buffer = ByteBuffer.allocate(68); // 4 selector + 32*2 args

// Zero-allocation encode
byte[] selector = new byte[]{...};
FastAbiEncoder.encodeTo(selector, args, buffer);

// For uint256 with primitives (avoids BigInteger boxing)
FastAbiEncoder.encodeUint256(42L, buffer); // 0 B/op
```

## Running Your Own Benchmarks
### Quick Benchmark (Local)
```bash
# Start Anvil
anvil --host 0.0.0.0 --port 8545

# Run quick benchmark
./gradlew :brane-benchmark:runQuickBenchmark
```

### Full JMH Suite
```bash
./gradlew :brane-benchmark:jmh
```

### Against Real Networks
Create a `.env` file:
```
HTTP_RPC_URL=https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
WS_RPC_URL=wss://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
```

Then:
```bash
./gradlew :brane-benchmark:runRealWorldBenchmark
```

## Methodology
All benchmarks use:
- Warmup: 100 iterations
- Measurement: 1,000 iterations
- Environment: Local Anvil devnet (eliminates network variability)
- JVM: OpenJDK 21 with G1GC
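The warmup/measurement split above can be sketched in plain Java: run the workload untimed until the JIT has compiled the hot path, then time only the measured iterations. This is a minimal illustration of the pattern, not the actual benchmark harness (the real suite uses JMH); `MiniBench` and `workload` are hypothetical names.

```java
// Minimal sketch of the warmup-then-measure pattern.
public class MiniBench {
    // Stand-in workload; a real benchmark would call the code under test.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 10_000; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) workload(); // warmup: 100 iterations, untimed

        long start = System.nanoTime();
        long sink = 0; // consume results so the JIT can't dead-code-eliminate the loop
        for (int i = 0; i < 1_000; i++) sink += workload(); // measurement: 1,000 iterations
        long elapsed = System.nanoTime() - start;

        System.out.printf("avg %.2f us/op (sink=%d)%n", elapsed / 1_000 / 1_000.0, sink);
    }
}
```

JMH adds what this sketch lacks: forked JVMs, statistical aggregation, and stronger dead-code-elimination defenses (`Blackhole`), which is why the published numbers come from the JMH suite.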
## Performance Monitoring
Integrate with your observability stack using `BraneMetrics`:
```java
provider.setMetrics(new MyMicrometerMetrics(registry));
```

See Metrics & Observability for implementation details.