Papers, explorations, and artifacts from the STAR Labs team.

Why Video-Based World Models Fail for Interactive Systems
Video generation conflates appearance with state. We examine why persistence and composability require a fundamentally different substrate.
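To make the contrast concrete, here is a toy sketch in Python. This is our illustration rather than anything from the post; `frame_window`, `ObjectState`, and the occluded-mug example are all invented.

```python
from dataclasses import dataclass

# Appearance substrate: a rolling window of rendered frames. Once an
# object is occluded or leaves the frame, nothing here asserts that it
# still exists, so identity and permanence must be re-inferred per frame.
frame_window: list[bytes] = []

# State substrate: identity and pose persist whether or not anything is
# rendered, and entries compose (adding an object is a dict insert, not
# a regeneration of pixels).
@dataclass
class ObjectState:
    pose: tuple[float, float, float]  # x, y, z in meters
    visible: bool

world_state: dict[str, ObjectState] = {
    "mug_17": ObjectState(pose=(0.4, 1.1, 0.8), visible=False),  # persists while hidden
}
```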
Our approach to reading mesh geometry and semantic material labels to classify simulation type and emit solver parameters — without hand-authored physics.
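As a rough sketch of what that data flow could look like — hypothetical names, table, and thresholds throughout, not the actual STAR Labs pipeline:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SimType(Enum):
    RIGID = auto()
    SOFT = auto()
    FLUID = auto()

# Hypothetical material-label -> simulation-type table; a real pipeline
# would derive this from the semantic labels rather than hard-coding it.
LABEL_TO_SIM = {
    "metal": SimType.RIGID,
    "wood": SimType.RIGID,
    "cloth": SimType.SOFT,
    "rubber": SimType.SOFT,
    "water": SimType.FLUID,
}

@dataclass
class SolverParams:
    sim_type: SimType
    stiffness: float  # consumed by the soft-body solver
    viscosity: float  # consumed by the fluid solver, in Pa*s

def infer_solver_params(material_label: str, vertex_count: int) -> SolverParams:
    """Classify the simulation type from a semantic material label plus a
    crude geometry signal, then emit solver parameters for it."""
    sim_type = LABEL_TO_SIM.get(material_label, SimType.RIGID)
    # Placeholder heuristic: denser meshes get stiffer soft-body settings.
    stiffness = min(1.0, vertex_count / 10_000) if sim_type is SimType.SOFT else 0.0
    viscosity = 1.0e-3 if sim_type is SimType.FLUID else 0.0  # roughly water
    return SolverParams(sim_type, stiffness, viscosity)
```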
What would a game engine look like if time were a first-class dimension alongside x, y, z? Early architectural notes from our engine build.
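One way to read "time as a first-class dimension" in code: state becomes a function of t that you query, like any spatial lookup, instead of a single mutable current frame. `Event` and `Timeline` are our invention for illustration, not the engine's API.

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    t: float                              # seconds
    position: tuple[float, float, float]  # x, y, z

class Timeline:
    """An object's trajectory, queryable at any t."""

    def __init__(self, events: list[Event]):
        self.events = sorted(events, key=lambda e: e.t)
        self.times = [e.t for e in self.events]

    def position_at(self, t: float) -> tuple[float, float, float]:
        # Clamp outside the sampled range, interpolate linearly inside it.
        i = bisect_right(self.times, t)
        if i == 0:
            return self.events[0].position
        if i == len(self.events):
            return self.events[-1].position
        a, b = self.events[i - 1], self.events[i]
        w = (t - a.t) / (b.t - a.t)
        return tuple(pa + w * (pb - pa) for pa, pb in zip(a.position, b.position))
```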
Physical environments encode relationships, affordances, and causality that no language model has been trained to read. Here's how we're approaching it.
From raw geometry to semantic objects to functional regions — mapping the abstraction hierarchy that humans navigate effortlessly in physical space.
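The three tiers the post names might be typed roughly like this; the type names are ours, chosen to mirror the teaser, not the post's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    """Tier 1: raw geometry, no meaning attached."""
    vertices: list[tuple[float, float, float]]
    faces: list[tuple[int, int, int]]

@dataclass
class SemanticObject:
    """Tier 2: geometry grouped under a label ('chair', 'sink')."""
    label: str
    mesh: Mesh

@dataclass
class FunctionalRegion:
    """Tier 3: objects grouped by what a space is for ('seating',
    'food prep') — the level humans actually navigate at."""
    purpose: str
    objects: list[SemanticObject] = field(default_factory=list)
```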
Architecture and engineering don't need better CAD tools. They need systems that understand space well enough to reason about what should exist in it.
We're formally launching STAR Labs as an applied research group. Here's what we're building and why.
Early prototype footage and architecture notes from our Godot-forked physics inference pipeline.
If spatial intelligence, world modeling, or 3D-native systems are what you think about — we want to talk.
STAR Labs is early-stage and actively looking for collaborators, researchers, and builders.