Core Infrastructure

VMS Core Engine

The foundation of Visylix. A high-performance stream processing engine handling 1M+ concurrent streams with zero-copy architecture, multi-protocol support, and flexible recording capabilities.

  • 1M+ concurrent streams
  • Zero-copy frame delivery architecture
  • <1ms internal pipeline latency
  • 40 Gbps throughput per node
  • 99.999% uptime target
  • H.264/H.265 codec support
Stream Ingestion

Push and Pull. Your Way.

Two ingestion modes to accommodate every camera, encoder, and deployment scenario. Visylix adapts to your infrastructure, not the other way around.

Push Mode

Cameras and encoders push streams directly to Visylix via RTMP, SRT, or WebRTC. Ideal for dynamic environments where devices initiate connections, such as mobile units, drones, and body-worn cameras.

  • Auto-registration of new stream sources
  • Dynamic bandwidth adaptation
  • Reconnection with session persistence
  • Source authentication and validation
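
For illustration, a push source can be exercised with a standard ffmpeg command. This is a minimal sketch, not Visylix tooling; the ingest URL format (application name and stream key) is an assumption.

```python
# Minimal sketch: pushing a test stream into the VMS over RTMP with ffmpeg.
# The ingest URL layout is assumed for illustration; use whatever the server
# is configured to accept.
import subprocess

INGEST_URL = "rtmp://visylix.example.com/live/lobby-cam-01"  # hypothetical ingest point

# -re paces the input at real time; -c copy pushes the existing H.264/AAC
# bitstream without re-encoding, wrapped in FLV as RTMP requires.
subprocess.run(
    ["ffmpeg", "-re", "-i", "sample.mp4", "-c", "copy", "-f", "flv", INGEST_URL],
    check=True,
)
```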

Pull Mode

Visylix pulls streams from cameras and NVRs over RTSP or other protocols. Best for fixed installations where the VMS manages the connection lifecycle and camera inventory.

  • Scheduled pull with retry policies
  • ONVIF device discovery support
  • Credential management per source
  • Health monitoring and auto-reconnect
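
Pull sources are typically registered through a management API. The sketch below is hypothetical: the endpoint path, field names, and authentication scheme are assumptions used only to show the shape of a pull-mode registration with per-source credentials and a retry policy.

```python
# Hypothetical sketch of registering a pull-mode RTSP source. The endpoint,
# payload fields, and auth header are illustrative assumptions, not the
# documented Visylix API.
import requests

VISYLIX_API = "https://visylix.example.com/api/v1"   # placeholder base URL
API_TOKEN = "YOUR_API_TOKEN"                          # placeholder credential

def register_pull_source(name: str, rtsp_url: str, username: str, password: str) -> dict:
    """Ask the VMS to pull an RTSP stream and manage its connection lifecycle."""
    payload = {
        "name": name,
        "mode": "pull",                               # the VMS initiates the connection
        "protocol": "rtsp",
        "url": rtsp_url,
        "credentials": {"username": username, "password": password},
        "retry_policy": {"max_attempts": 5, "backoff_seconds": 10},
    }
    resp = requests.post(
        f"{VISYLIX_API}/streams",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

print(register_pull_source("lobby-cam-01", "rtsp://10.0.0.21:554/stream1", "viewer", "s3cret"))
```
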
Protocols

Multi-Protocol Streaming

Native support for six streaming protocols. Each protocol is purpose-built for specific latency, reliability, and compatibility requirements.

RTSP

Pull, <2s latency

Industry-standard protocol for IP camera integration. Supports TCP and UDP transport with ONVIF compatibility.

RTMP

Push, <2s latency

Mature push-based protocol for encoders and broadcast equipment. Persistent TCP connections with low overhead.

WebRTC

Push / Pull, <500ms latency

Ultra-low-latency peer-to-peer streaming with a custom DTLS and SRTP stack. Ideal for real-time monitoring and two-way communication.

SRT

Push / Pull, <1s latency

Secure, reliable transport over unpredictable networks. AES-128/256 encryption with forward error correction for WAN delivery.

LL-HLS

Output, <5s latency

Apple Low-Latency HLS for web and mobile clients. Partial segments and preload hints minimize glass-to-glass delay.

HLS

Output, 7-10s latency

Standard HTTP Live Streaming for maximum compatibility across browsers, smart TVs, and set-top boxes at scale.

Recording

Four Recording Modes

Capture what matters, when it matters. From continuous 24/7 archival to intelligent event-triggered recording.

Continuous

24/7 recording with configurable retention policies. Frames are written directly from the ingestion pipeline with zero re-encoding overhead.

Schedule-Based

Define recording windows by day of week, time range, and camera group. Supports recurring and one-time schedules with timezone awareness.
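
As a hedged illustration of what such a window might look like, the snippet below expresses one weekly schedule as plain data; the field names are assumptions, not the Visylix configuration schema.

```python
# Illustrative only: one way a timezone-aware weekly recording window could
# be expressed. Field names are assumptions, not the actual schema.
schedule = {
    "name": "business-hours",
    "camera_group": "lobby",                 # applies to every camera in the group
    "timezone": "America/New_York",          # schedules are evaluated in local time
    "recurrence": "weekly",
    "windows": [
        {"days": ["mon", "tue", "wed", "thu", "fri"], "start": "08:00", "end": "18:00"},
    ],
    "retention_days": 30,
}
```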

Event-Triggered

Recording starts and stops based on AI detection events, external API triggers, or sensor input. Pre-event and post-event buffers capture complete context.
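
A minimal sketch of wiring a detection event to this mode follows; the endpoint and buffer parameters are assumptions chosen only to show how pre-event and post-event context could be requested.

```python
# Hypothetical sketch: start an event-triggered clip when an AI detection
# arrives. Endpoint, payload fields, and buffer sizes are assumptions.
import requests

VISYLIX_API = "https://visylix.example.com/api/v1"    # placeholder base URL

def on_detection(event: dict) -> None:
    """Request a clip that includes buffered frames before and after the trigger."""
    requests.post(
        f"{VISYLIX_API}/recordings/events",
        json={
            "stream_id": event["stream_id"],
            "trigger": event["type"],                  # e.g. "person_detected"
            "pre_event_seconds": 10,                   # pre-event buffer
            "post_event_seconds": 30,                  # keep recording after the trigger ends
        },
        timeout=5,
    ).raise_for_status()

on_detection({"stream_id": "lobby-cam-01", "type": "person_detected"})
```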

On-Demand

Users or API consumers trigger recording manually for specific streams. Supports duration-limited and indefinite capture with real-time status updates.
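
A duration-limited on-demand capture might be triggered like the hypothetical call below (endpoint and fields are assumptions).

```python
# Hypothetical sketch: manually start a five-minute recording on one stream.
import requests

resp = requests.post(
    "https://visylix.example.com/api/v1/recordings/on-demand",   # placeholder endpoint
    json={"stream_id": "drone-07", "duration_seconds": 300},     # omit duration for indefinite capture
    timeout=5,
)
resp.raise_for_status()
print(resp.json())                                               # e.g. recording id and live status
```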

Storage

Three Storage Formats

Choose the right container format for your retention, playback, and distribution requirements.

MP4

Standard MPEG-4 Part 14 container for maximum playback compatibility. The moov atom is written progressively to prevent data loss on unexpected shutdown.

Best for: Long-term archival, export, and evidence management

fMP4

Fragmented MP4 with independent segments for instant seek and parallel write. Each fragment is self-contained, enabling efficient cloud storage and CDN distribution.

Best for: Cloud-native storage, CDN delivery, and real-time playback

TS

MPEG Transport Stream for high-resilience recording. Self-synchronizing packet structure ensures recoverability even after partial corruption or disk failure.

Best for: Critical infrastructure, disaster recovery, and broadcast workflows
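
To make the container trade-offs concrete outside of Visylix itself, the sketch below records a short RTSP clip into each format with stock ffmpeg; it illustrates the formats only and does not reflect the engine's internal recorder.

```python
# Illustration of the three container choices using plain ffmpeg, copying the
# stream without re-encoding. Not the Visylix recorder.
import subprocess

SRC = "rtsp://10.0.0.21:554/stream1"                 # example camera URL
base = ["ffmpeg", "-rtsp_transport", "tcp", "-i", SRC, "-c", "copy", "-t", "60"]

# MP4: single file, broadest playback compatibility.
subprocess.run(base + ["archive.mp4"], check=True)

# fMP4: fragmented MP4; each fragment is self-contained and written incrementally.
subprocess.run(base + ["-movflags", "+frag_keyframe+empty_moov", "archive_frag.mp4"], check=True)

# TS: MPEG transport stream; self-synchronizing packets stay readable after truncation.
subprocess.run(base + ["-f", "mpegts", "archive.ts"], check=True)
```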

Architecture

Zero-Copy Streaming

Frames flow from ingestion to output without unnecessary memory copies, achieving maximum throughput with minimal CPU overhead.

01. Ingestion

Frames arrive via RTSP, RTMP, SRT, or WebRTC and are placed into shared memory buffers. No copy occurs during protocol demuxing.

02. Routing

The stream router maps each frame reference to subscribed consumers: recording, AI pipelines, and output protocols. Only pointers are forwarded.

03. Processing

AI models read frames directly from shared memory. Recording writes from the same buffer. No intermediate copies at any stage.

04. Delivery

Output protocols (WebRTC, HLS, LL-HLS) read the original frame data for packaging and delivery. The frame is released only when all consumers finish.
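
The following sketch mirrors this flow in simplified form: a single buffer, reference-counted hand-off to consumers, and release once every consumer is done. It is an illustration of the pattern, not Visylix source code.

```python
# Simplified illustration of reference-counted, zero-copy frame fan-out.
from threading import Lock

class SharedFrame:
    """One encoded frame held in a single buffer and shared by reference."""
    def __init__(self, data: bytes):
        self._buffer = memoryview(data)    # zero-copy view over the payload
        self._refcount = 0
        self._lock = Lock()

    def acquire(self) -> memoryview:
        with self._lock:
            self._refcount += 1
        return self._buffer                # consumers read in place, nothing is copied

    def release(self) -> None:
        with self._lock:
            self._refcount -= 1
            if self._refcount == 0:
                self._buffer.release()     # buffer may now be recycled

class StreamRouter:
    """Fan one frame reference out to all subscribed consumers."""
    def __init__(self, consumers):
        self.consumers = consumers         # e.g. recorder, AI pipeline, packager

    def route(self, frame: SharedFrame) -> None:
        views = [frame.acquire() for _ in self.consumers]   # take every reference up front
        for consumer, view in zip(self.consumers, views):
            try:
                consumer(view)             # consumer reads directly from shared memory
            finally:
                frame.release()            # the last release frees the buffer

recorder = lambda v: print(f"recorder wrote {len(v)} bytes")
analytics = lambda v: print(f"analytics read {len(v)} bytes")
StreamRouter([recorder, analytics]).route(SharedFrame(b"\x00" * 4096))
```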

Explore More of Visylix

Learn how AI analytics, deployment options, and integration APIs build on top of the VMS Core Engine.