
Synapsys

A Python library for modelling, analysis and real-time simulation of linear control systems. Provides a MATLAB-compatible API over SciPy, a multi-agent simulation framework, and a pluggable transport layer (shared memory / ZMQ) for MIL → SIL → HIL workflows.

- pip: `pip install synapsys`
- uv: `uv add synapsys`
- dev extras: `uv sync --extra dev`

Overview

Synapsys covers the full control-design workflow — from continuous-time LTI modelling to discrete real-time closed-loop simulation — with a consistent API across all stages.

Transfer functions and state-space — synapsys.api
from synapsys.api import tf, ss, step, bode, feedback, c2d

# Transfer function: G(s) = ωn² / (s² + 2ζωn·s + ωn²)
wn, zeta = 10.0, 0.5
G = tf([wn**2], [1, 2*zeta*wn, wn**2])

# Closed-loop (negative feedback)
T = feedback(G)

# Frequency and time-domain analysis
w, mag, phase = bode(G)
t, y = step(T)

# Zero-order-hold discretisation at 200 Hz
Gd = c2d(G, dt=0.005)
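Zero-order-hold discretisation maps a continuous state-space pair (A, B) to (A_d, B_d) = (e^{A·dt}, ∫₀^dt e^{Aτ}dτ·B). A minimal NumPy sketch of that mapping, independent of Synapsys (the `zoh_discretise` helper is illustrative, not the library's implementation):

```python
import numpy as np

def zoh_discretise(A, B, dt, terms=30):
    """ZOH discretisation via the augmented-matrix exponential:
    expm([[A, B], [0, 0]] * dt) = [[Ad, Bd], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A * dt
    M[:n, n:] = B * dt
    # Truncated Taylor series for the matrix exponential (fine for small ||M||)
    E = np.eye(n + m)
    term = np.eye(n + m)
    for k in range(1, terms):
        term = term @ M / k
        E += term
    return E[:n, :n], E[:n, n:]

# Second-order plant from above: x'' + 2ζωn·x' + ωn²·x = ωn²·u
wn, zeta = 10.0, 0.5
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [wn**2]])
Ad, Bd = zoh_discretise(A, B, dt=0.005)
```

Because ZOH discretisation is exact at sampling instants, the discrete model preserves the continuous DC gain of 1.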

Package Overview

| Package | Contents | Status |
|---|---|---|
| synapsys.core | TransferFunction, StateSpace, ZOH discretisation | Stable |
| synapsys.api | MATLAB-compatible layer: tf(), ss(), step(), bode() | Stable |
| synapsys.algorithms | Discrete PID with anti-windup, LQR (ARE solver) | Stable |
| synapsys.agents | PlantAgent, ControllerAgent, SyncEngine | Functional |
| synapsys.transport | SharedMemory (zero-copy), ZMQ PUB/SUB & REQ/REP | Functional |
| synapsys.hw | HardwareInterface, MockHardwareInterface (HIL) | Interface |
| synapsys.mpc | Model Predictive Control | Planned |
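The table lists a discrete PID with anti-windup. A generic sketch of that technique (back-calculation anti-windup), purely illustrative and not the `synapsys.algorithms` implementation:

```python
class PID:
    """Discrete PID with back-calculation anti-windup (illustrative sketch,
    not the synapsys.algorithms implementation)."""

    def __init__(self, kp, ki, kd, dt, u_min=-1.0, u_max=1.0, kaw=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max, self.kaw = u_min, u_max, kaw
        self.i = 0.0       # integrator state
        self.e_prev = 0.0  # previous error, for the derivative term

    def step(self, e):
        d = (e - self.e_prev) / self.dt
        self.e_prev = e
        u = self.kp * e + self.i + self.kd * d
        u_sat = min(max(u, self.u_min), self.u_max)
        # Back-calculation: bleed the integrator while the output saturates,
        # so it does not wind up during actuator limits
        self.i += self.dt * (self.ki * e + self.kaw * (u_sat - u))
        return u_sat

pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
u = pid.step(1.0)  # proportional term alone exceeds u_max, output saturates
```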

Simulation Fidelity Ladder

Synapsys is designed for incremental fidelity increases. Only the transport layer changes — the controller algorithm remains identical across all three stages.

| Stage | Meaning | Transport / agent |
|---|---|---|
| MIL | Model-in-the-Loop | SharedMemoryTransport · PlantAgent |
| SIL | Software-in-the-Loop | ZMQTransport · separate process |
| HIL | Hardware-in-the-Loop | HardwareAgent · real device |

See the HIL / SIL guide for a step-by-step migration example.
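The principle behind the ladder is that the control law is written against a transport interface, so only the transport instance changes between stages. A language-level sketch of that pattern (the `Transport` protocol and `InProcessTransport` class here are illustrative, not the Synapsys API):

```python
from typing import Protocol

import numpy as np

class Transport(Protocol):
    """Minimal duck-typed transport boundary (illustrative)."""
    def send(self, u: np.ndarray) -> None: ...
    def recv(self) -> np.ndarray: ...

class InProcessTransport:
    """Stands in for a shared-memory transport in a MIL run; a ZMQ-backed
    class with the same two methods would slot in for SIL unchanged."""
    def __init__(self) -> None:
        self._buf = np.zeros(1)
    def send(self, u: np.ndarray) -> None:
        self._buf = np.asarray(u)
    def recv(self) -> np.ndarray:
        return self._buf

def control_step(transport: Transport, x: np.ndarray) -> None:
    # The control law never sees which transport it runs over
    u = -0.5 * x  # placeholder controller
    transport.send(u)

t = InProcessTransport()
control_step(t, np.array([2.0]))
```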

AI + Control Systems Integration

Synapsys is built for modern control research workflows. Any PyTorch, JAX or scikit-learn model can be plugged directly into a ControllerAgent via a single np.ndarray → np.ndarray callback — enabling physics-informed neural networks, reinforcement learning policies and data-driven controllers to run in real-time SIL/HIL loops.
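The boundary is simply a function from the measured state vector to the actuation vector. A minimal NumPy sketch of such a callback (the gain values and the linear "policy" standing in for an nn.Module are hypothetical):

```python
import numpy as np

# Hypothetical: a hand-initialised linear policy stands in for a neural
# network; a torch model would wrap its forward pass in the same signature.
K = np.array([[12.0, 3.5, 8.0, 1.2]])  # e.g. LQR gains for a 4-state plant

def controller_callback(x: np.ndarray) -> np.ndarray:
    """np.ndarray -> np.ndarray boundary, as described above."""
    return -K @ x  # state feedback u = -Kx

u = controller_callback(np.array([0.1, 0.0, -0.05, 0.0]))
```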

[Animation: Neural-LQR controller on a 2-DOF mass-spring-damper (physical model) showing position tracking, velocities, control force and phase portrait]

Neural-LQR on a 2-DOF mass-spring-damper — MLP initialized with LQR optimal gains tracking setpoint x₂ = 1 m. Phase portrait shows convergence to equilibrium. Run live via the SIL example.

- Physics-Informed Init: Output layer initialized with LQR gains (solves the ARE). Guarantees closed-loop stability from step 0 — no random exploration phase needed.
- RL Fine-Tuning Ready: Hidden layers trained by PPO/SAC/DDPG. The LQR baseline provides a shaped reward landscape, dramatically reducing sample complexity.
- Any nn.Module Works: LSTM, Transformer, diffusion policy — the ControllerAgent callback wraps any model. Only the numpy↔tensor boundary changes.
- Real-Time SIL Loop: Forward pass runs in a dedicated thread at 100 Hz over shared memory. Latency budget: < 1 ms inference + < 1 µs IPC.
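The gains used for physics-informed initialisation come from a Riccati equation. A self-contained NumPy sketch that iterates the discrete-time Riccati recursion to a fixed point (illustrative, not the `synapsys.algorithms` ARE solver, and on a double integrator rather than the 2-DOF plant above):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR: iterate the Riccati recursion to a fixed point P,
    then K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double integrator discretised at dt = 0.01
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

The resulting K would seed the output layer of the policy network, so the closed loop is stable before any learning begins.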

Full walkthrough: SIL + Neural-LQR example →