Integration Guide — Batch Computation
1000 charts. Trivial parallelism.
Vedākṣha's stateless, pure-function architecture makes batch computation a first-class use case. There is no special batch API — you simply call the same functions you already use, in parallel, across as many threads or processes as you have available.
The library ships with built-in Rayon integration for Rust consumers. Python and WASM consumers can use their platform's native parallelism primitives — the GIL is not held during computation.
Why It Works
Stateless Architecture
Every Vedākṣha function takes its full input as parameters and returns a value. There is no mutable global state, no singleton, no session object. Functions can be called from any thread at any time.
Pure Functions
The same inputs always produce the same outputs. No hidden dependencies, no environment reads, no network calls inside the compute functions themselves. This makes every function trivially cacheable and trivially parallelisable.
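Because outputs depend only on inputs, it is safe to put a cache in front of any compute function. A minimal sketch using Python's functools.lru_cache — pure_compute here is a hypothetical stand-in for any pure Vedākṣha call, not part of the library's API:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def pure_compute(julian_day: float, latitude: float, longitude: float) -> float:
    # Stand-in for a pure computation: same inputs always give the same output,
    # so a cached result is indistinguishable from a fresh one.
    return julian_day + latitude + longitude

a = pure_compute(2448057.9, 28.6, 77.2)  # computed
b = pure_compute(2448057.9, 28.6, 77.2)  # served from the cache
assert a == b
print(pure_compute.cache_info().hits)  # 1
```

The same property is what makes results safe to store in any external cache keyed on the input tuple.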
Thread-Safe by Construction
The Rust type system enforces the absence of shared mutable state at compile time. There are no locks, no mutexes, no atomic operations in the hot path. Parallelism is free.
Zero Coordination Overhead
Batch jobs need no coordinator process, no work queue, no message broker. Split your input list, hand shards to threads or workers, collect results. That is the entire architecture.
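The split–shard–collect pattern needs only a standard-library executor. A sketch in Python, where compute_one is a hypothetical stand-in for any per-record Vedākṣha computation:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_one(record: int) -> int:
    # Stand-in for a stateless, pure per-record computation.
    return record * 2

def batch(records: list[int], workers: int = 4) -> list[int]:
    # No coordinator, no queue, no broker: the executor shards the
    # input across workers, and map() returns results in input order.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(compute_one, records))

print(batch([1, 2, 3]))  # [2, 4, 6]
```

Because each call is independent, any sharding is correct; the only design decision is thread vs. process workers for your platform.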
Rust — Parallel Batch with Rayon
Add rayon to your dependencies and replace .iter() with .par_iter(). Rayon handles thread pool management automatically.
use rayon::prelude::*;
use vedaksha::prelude::*;
/// A minimal birth record: Julian Day + geographic coordinates
struct BirthRecord {
    julian_day: f64,
    latitude: f64,
    longitude: f64,
}

fn batch_compute(records: Vec<BirthRecord>) {
    let config = ChartConfig::vedic();

    // par_iter() distributes across all available CPU cores
    let results: Vec<_> = records
        .par_iter()
        .map(|rec| {
            compute_chart(
                rec.julian_day,
                rec.latitude,
                rec.longitude,
                &config,
            )
        })
        .collect();

    // Results are in the same order as records.
    // Errors are per-record — one bad JD does not abort the batch.
    for (i, result) in results.iter().enumerate() {
        match result {
            Ok(graph) => println!("chart {}: id {}", i, graph.id),
            Err(e) => eprintln!("chart {}: error {}", i, e),
        }
    }
}

Python — Parallel Batch with ProcessPoolExecutor
The Python GIL is not held during Vedākṣha computation — the extension releases it before entering Rust. Use ProcessPoolExecutor for the best parallelism on multi-core systems.
from concurrent.futures import ProcessPoolExecutor
import vedaksha as vk
def compute_one(record: dict) -> dict:
    """Run in a worker process — no shared state needed."""
    try:
        chart = vk.compute_chart(
            julian_day=record["jd"],
            latitude=record["lat"],
            longitude=record["lon"],
        )
        return {"id": chart.graph.id, "asc": chart.houses.ascendant}
    except vk.ComputeError as e:
        return {"error": str(e), "suggested_action": e.suggested_action}

def batch_compute(records: list[dict]) -> list[dict]:
    with ProcessPoolExecutor() as executor:
        return list(executor.map(compute_one, records))
# Example: 1000 records, computed across all CPU cores
records = [{"jd": 2448057.9 + i, "lat": 28.6, "lon": 77.2}
for i in range(1000)]
results = batch_compute(records)
print(f"Computed {len(results)} charts")

Performance Characteristics
Measured on Apple M2 Pro. All timings are wall-clock including function call overhead. Batch timings use Rayon on Rust / ProcessPoolExecutor on Python.
Per call      1000 charts (1 core)   1000 charts (8 cores)
~0.08 ms      ~80 ms                 ~12 ms
~0.002 ms     ~2 ms                  <1 ms
~0.15 ms      ~150 ms                ~20 ms
~1.2 ms       ~1.2 s                 ~170 ms
~0.05 ms      ~50 ms                 ~7 ms

Parallel timings scale near-linearly with core count, with no synchronisation overhead.
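As a worked check on the first row of the table, the implied speedup and parallel efficiency on 8 cores can be computed directly:

```python
def efficiency(t1_ms: float, tn_ms: float, cores: int = 8) -> float:
    """Speedup (t1 / tn) divided by core count."""
    return (t1_ms / tn_ms) / cores

# First row: ~80 ms on 1 core -> ~12 ms on 8 cores
print(round(80 / 12, 1))             # speedup: 6.7x
print(round(efficiency(80, 12), 2))  # efficiency: 0.83
```

Around 80–90% efficiency is what "near-linear, no synchronisation overhead" looks like in practice; the remainder is scheduling and memory-bandwidth cost, not locking.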