# Timeplus Enterprise 3.2
## Key Highlights
Key highlights of the Timeplus 3.2 release include:
- Major performance improvement in the data replication network layer — up to 30x faster in some scenarios — powered by request pooling, recyclable network buffers, sharded request/response channels, scatter/gather writes, and IPv6 support.
- Major performance improvement (up to 40x) for Kafka consume / parsing for Protobuf, CSV, and similar formats via smart batching and a new parallel parsing strategy for Kafka source.
- Major enhancements to Python UDFs and external Python table functions now enable secure, direct communication with the local timeplusd instance via an automatically provisioned ephemeral user and token.
- Broad stability and quality hardening across mutable streams, checkpoints, materialized views, streaming joins, memory accounting, and replicated log recovery.
- Improved Okta SSO integration with a smoother login flow and support for mapping Okta users to read-only or admin roles.
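The Kafka-side speedup comes from parsing many records per call instead of one at a time, with batches handed to parallel workers. A minimal, hypothetical Python sketch of that batching-plus-parallel-parsing strategy (illustrative only, not Timeplus internals; the batch size and worker count are arbitrary):

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

def parse_batch(lines):
    """Parse a whole batch of CSV records in one call, not record by record."""
    return list(csv.reader(io.StringIO("\n".join(lines))))

def parse_stream(messages, batch_size=4, workers=2):
    """Group raw messages into batches, then parse batches in parallel."""
    batches = [messages[i:i + batch_size]
               for i in range(0, len(messages), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parsed = pool.map(parse_batch, batches)  # order is preserved
    return [row for batch in parsed for row in batch]

msgs = ["1,a", "2,b", "3,c", "4,d", "5,e"]
rows = parse_stream(msgs)
# rows == [['1', 'a'], ['2', 'b'], ['3', 'c'], ['4', 'd'], ['5', 'e']]
```

The win comes from amortizing per-call parsing overhead across a batch and letting independent batches parse concurrently.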
## Supported OS
| Deployment Type | OS |
|---|---|
| Linux bare metal | x64 or ARM chips: Ubuntu 20.04+, RHEL 8+, Fedora 35+, Amazon Linux 2023 |
| Mac bare metal | Intel or Apple chips: macOS 14, macOS 15 |
| Kubernetes | Kubernetes 1.25+, with Helm 3.12+ |
## Releases
We recommend using stable releases for production deployment. Engineering builds are available for testing and evaluation purposes.
### 3.2.3
Released on 04-12-2026. Installation options:
- For Linux or Mac users:

  ```shell
  curl https://install.timeplus.com/3.0 | sh
  ```

- For Docker users (not recommended for production):

  ```shell
  docker run -p 8000:8000 docker.timeplus.com/timeplus/timeplus-enterprise:3.2.3
  ```

- For Kubernetes users:

  ```shell
  helm install timeplus/timeplus-enterprise --version 11.0.7
  ```
Component versions:
- timeplusd 3.2.3
- timeplus_appserver 3.2.1
- timeplus_connector 3.2.0
- timeplus cli 3.0.0
- timeplus byoc 1.0.0
#### Changelog
This release consolidates all timeplusd changes from 3.1.2 through 3.2.3.
##### Features and Enhancements
- Add global tiered storage policy (#11811)
- Add feature flag to disable the workload rebalancer (#11815)
- Enhance parallel Kafka source (#11742)
- Add Python external stream init and deinit hooks (#11756)
- Add server config gates for JavaScript and Python UDFs (#11752)
- Propagate local API user to Python UDF (#11806) and introduce local API user support (#11804)
- Support non-const arguments for `rand` distribution functions (#11715)
- Add time-weighted aggregate combinator support and coverage (#11583)
- Support map type generation in random stream (#11586)
- Add `keep_range_join_max_buckets` setting to cap range join bucket count (#11775)
- Improve discard log messages to include `range_time_column` name (#11854)
- Add IPv6 support (#11653) and IPv6 enforce setting with misc fixes (#11670)
- Enable client batch send by default (#11658)
- Upgrade curl to 8.18 and enable Pulsar on macOS (#11693)
- Upgrade Pulsar client to v4.0.1 (#11611)
- Backfill right-side hash table in streaming enrichment joins (#11702)
- Tolerate `UNKNOWN_STREAM` in `DROP CASCADE / IF EXISTS` (#11820)
- Derive subject name according to schema subject strategy (#11705)
- Better error handling in stream metadata updates (#11542)
- Move external stream validation from constructors to dedicated methods (#11636)
- Validate task creation (#11664)
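The time-weighted aggregate combinator (#11583) weights each value by how long it remained current rather than counting every sample equally. A minimal Python sketch of a time-weighted average over sorted `(timestamp, value)` pairs (illustrative only, not Timeplus's implementation):

```python
def time_weighted_avg(points):
    """points: (timestamp_seconds, value) pairs sorted by time.
    Each value is weighted by the duration until the next timestamp;
    the final point only closes the last interval."""
    if len(points) < 2:
        raise ValueError("need at least two points to form an interval")
    total = 0.0
    duration = 0.0
    for (t0, v0), (t1, _) in zip(points, points[1:]):
        dt = t1 - t0
        total += v0 * dt       # value held for dt seconds
        duration += dt
    return total / duration

# value 10 held for 3s, then 20 held for 1s -> (10*3 + 20*1) / 4 = 12.5
print(time_weighted_avg([(0, 10), (3, 20), (4, 0)]))
```

A plain average of the same samples would overweight short-lived values, which is exactly what time weighting corrects for irregularly spaced streams.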
##### Performance
- Batch process Protobuf messages for Kafka (#11572)
- Avoid `Field` temporaries in C++ to Python column conversion (#11836)
- Set `fill_cache=false` on KV full-scan read paths (#11831)
- Add Python GIL wait instrumentation (#11808)
- Improve Python UDF data conversion and Python consume sink performance (#11788)
- Commit historical data inline in the poll thread (#11749)
- Merge small JSON blocks before commit (#11765)
- Add request pooling (#11643)
- Introduce recyclable network buffers (#11626)
- Scatter/gather writes for client (#11629)
- Sharding inflight requests map (#11628) and sharding response channel (#11655)
- Switch blocking queue to `deque` with queue reuse (#11622)
- Optimize eloop wakeup (#11617) and net perf baseline (#11616)
- Fine tune network buffer size (#11641)
- Refactor string-concatenation INSERTs to block-based inserts (#11573)
- Lock guard in consumer (#11800)
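Several of the network-layer items (recyclable buffers, scatter/gather writes) boil down to handing the kernel a list of pre-built buffers instead of copying them into one contiguous payload first. A minimal POSIX sketch of the scatter/gather idea using Python's `os.writev` (illustrative only; Timeplus's network layer is C++, and the buffer contents here are made up):

```python
import os

# Header and body live in separate buffers; scatter/gather I/O lets a
# single syscall write both, with no intermediate concatenation copy.
header = b"LEN=11\n"
body = b"hello world"

r, w = os.pipe()
written = os.writev(w, [header, body])   # one writev(2) call, two buffers
os.close(w)
data = os.read(r, 1024)
os.close(r)

assert written == len(header) + len(body)
assert data == header + body
```

Skipping the concatenation both removes a memory copy and, combined with a buffer pool, lets the same buffers be reused across requests instead of being reallocated.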
##### Bug Fixes
- Disable checkpoints at runtime for random-source materialized views (#11827)
- Fix join changelog mode for nested aggregation (#11833)
- Fix cgroup memory accounting (#11817)
- Prevent early meta-log compaction (#11826)
- Fix zero replication client (#11813)
- Fix async commit batch and client lifecycle (#11810)
- Fix memory tracker regression causing spurious `MEMORY_LIMIT_EXCEEDED` (#11783)
- Fix TimeWheel races and shutdown ordering (#11718)
- Fix heap-use-after-free in `RocksDBColumnFamilyHandler::destroy()` (#11758)
- Clear spilled hybrid update state after emit batches (#11741)
- Fix replicated log startup epoch recovery (#11728)
- Handle Python source cancel during MV failover (#11727)
- Fix context expired MV table subquery (#11738)
- Fix streaming CTE/subquery with aggregation returning empty results (#11695)
- Update MV schema after `ALTER STREAM MODIFY QUERY` (#11681)
- Keep session watermark monotonic (#11649)
- Support more streaming resize combinations (#11671)
- Fix `synthesizeQuorumReplicationStatus` (#11668)
- SQL analyzer now returns 400 for invalid SQL (#11708)
- Fix watchdog restart semantics (#11706)
- Preserve OR semantics with empty `IN` on mutable stream (#11596)
- Handle malformed Confluent Protobuf messages gracefully (#11587)
- Fix avg time-weighted overflow (#11591)
- Fix fatal crash from stale async checkpoint epoch mismatch (#11601)
- Fix secondary index cleanup on delete for mutable stream (#11593)
- Fix server wakeup during shutdown (#11633)
- Fix meta log lagging (#11682)
- Improve `FetchHint` unset `file_position` logging (#11772)
- Backport replxx updates (#11818)
- Port Proton OSS #1124: merge small JSON block before commit (#11765)
- Add regression coverage for the self-join fix (#11678)