Quick start workflow
Create an account with your email address, verify it with the emailed 6-digit code, then open the editor and write a grammar into the input panel. Save drafts privately while you iterate. When a piece is ready, publish it to the gallery so it becomes browsable in the live viewer.
The PHP shell does not replace the original grammar runtime. It wraps the working modular editor so the same scene engine, SmartEditor, syntax highlighting, STL export, grid, axis widget, and orbit navigation remain available inside a proper site workflow.
- Email-verified accounts with code-based confirmation
- Login-protected editor and file storage
- Public gallery for published grammars
- Read-only public viewer pages
- Copy-to-editor flow for remixing gallery work
Account and recovery
Registration now stores an email address and requires email verification before normal login is allowed. If an unverified user tries to sign in, the site resends a verification code and routes them back to the verification page.
The login screen also exposes a password reset flow. Users can request a 6-digit login code by username or email, choose a new password on the reset page, confirm it, and be logged in automatically after the reset succeeds.
- 6-digit verification codes expire after 15 minutes
- Password reset uses a separate emailed login code
- Successful publish events send an admin notification email
- New registrations also send an admin notification email
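The code lifecycle above (issue a 6-digit code, expire it after 15 minutes) can be sketched as follows. This is an illustrative sketch only: `issueCode` and `verifyCode` are hypothetical names, not the site's real helpers.

```javascript
// Hypothetical sketch of the 6-digit verification-code lifecycle.
// issueCode/verifyCode are illustrative names, not the site's real API.
const CODE_TTL_MS = 15 * 60 * 1000; // codes expire after 15 minutes

function issueCode(now = Date.now()) {
  // Zero-padded 6-digit code, e.g. "004271"
  const code = String(Math.floor(Math.random() * 1000000)).padStart(6, '0');
  return { code, expiresAt: now + CODE_TTL_MS };
}

function verifyCode(record, submitted, now = Date.now()) {
  if (now > record.expiresAt) return { ok: false, reason: 'expired' };
  if (submitted !== record.code) return { ok: false, reason: 'mismatch' };
  return { ok: true };
}
```

The same shape works for both the registration-verification code and the emailed password-reset login code, since only the delivery email differs.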
Grammar authoring tips
Keep operators readable; spacing around brackets is optional. The parser accepts forms such as `Tower(h)` and `[T(0 1 0)Part]` as long as the token boundaries stay unambiguous. Expressions may also contain spaces, tabs, newlines, and carriage returns, so multiline math is valid inside transforms and conditionals.
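The whitespace rule can be illustrated with a toy tokenizer. This is not the project's real parser, just a sketch of why `[T(0 1 0)Part]` and `[ T ( 0 1 0 ) Part ]` read identically: whitespace of any kind only separates tokens, it never creates them.

```javascript
// Illustrative sketch (not the real grammar parser): identifiers, numbers,
// and single bracket/paren characters become tokens; all spaces, tabs,
// newlines, and carriage returns are skipped.
function tokenize(src) {
  const tokens = src.match(/[A-Za-z_][A-Za-z0-9_]*|-?\d+(?:\.\d+)?|[()\[\]]/g);
  return tokens || [];
}
```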
Build from the simplest valid scene first. Start with a single instance, then add parameters, calls, transforms, grouped blocks, and conditionals one layer at a time. That makes scene debugging far easier than writing a large grammar and trying to reason about multiple changes at once.
Use grouped blocks when you want scoped deformation behavior. Local DS*/DT* state and global GDS*/GDT* subset state are both cloned on group entry and restored on exit.
- Start from one visible primitive
- Add transforms incrementally
- Use brackets to compose multi-part scenes
- Add parameters and conditional calls only after the base form renders correctly
- Use GDS*/GDT* inside groups when you want several cubes to deform together without affecting the whole scene
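The clone-on-entry, restore-on-exit scoping above can be sketched as a simple state stack. This assumes a minimal interpreter shape; the field names and `makeScope` helper are illustrative, not the engine's real internals.

```javascript
// Minimal sketch of group scoping: deformation state is cloned when a
// group opens and the pre-group state is restored when it closes, so
// edits inside the group never leak out.
function makeScope() {
  const stack = [{ local: {}, globalSubset: {} }]; // DS*/DT* and GDS*/GDT* state
  return {
    current: () => stack[stack.length - 1],
    enterGroup() {
      const top = stack[stack.length - 1];
      stack.push({
        local: { ...top.local },               // clone local DS*/DT* state
        globalSubset: { ...top.globalSubset }, // clone GDS*/GDT* subset state
      });
    },
    exitGroup() {
      stack.pop(); // restore whatever state was active before the group
    },
  };
}
```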
Publishing and storage
This build now runs against Firebase-only storage at runtime. Firebase Auth signs users in, Firestore stores account and grammar metadata, and Cloud Storage stores the grammar source blobs.
Because the site keeps authored grammar text rather than only baked geometry, published works can be previewed, opened in the public viewer, or copied back into a private editor session for further iteration.
- Firebase Auth for account sign-in and verification
- Firestore for canonical user profiles and file metadata
- Cloud Storage for grammar source blobs
- Legacy JSON files remain offline migration input only
- Runtime storage now lives outside the public web root
Materials and private texture slots
The Materials page adds a private texture workspace on top of the renderer. Each account gets 20 user texture slots, and every active slot stores both a human-readable label and a grammar-safe material name.
Use the grammar-safe material name directly inside instance calls, for example `I ( Cube moss_wall 1 )`. The older slot tokens such as `usertexture4` still work, but named materials are easier to read and maintain when a grammar grows beyond a quick test scene.
Uploads and AI-generated textures both flow through the same pipeline: the image is normalized to a 1024x1024 PNG, stored privately under your account, and then exposed to the editor and materials manager through that slot and material-name mapping.
- 20 private user texture slots per account
- Separate display label and grammar material name per slot
- Direct grammar usage such as `I ( Cube moss_wall 1 )`
- AI texture generation and manual uploads share the same normalized storage path
- Texture records stay private to the signed-in account
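Since both the legacy slot tokens and the named materials remain valid, a lookup has to accept either spelling. The sketch below assumes a hypothetical `{ slot, label, materialName }` record shape; it is not the site's actual schema.

```javascript
// Hypothetical sketch of the slot/material-name mapping described above.
// Accepts either a legacy token like "usertexture4" or a grammar-safe
// material name like "moss_wall".
function resolveMaterial(name, slots) {
  const m = /^usertexture(\d+)$/.exec(name);
  if (m) return slots.find((s) => s.slot === Number(m[1])) || null;
  return slots.find((s) => s.materialName === name) || null;
}
```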
Render settings page and WebGL2 rollout planning
The Render Settings page is the renderer-planning workbench for the WebGL2 upgrade. It exposes the live renderer capability probe, the current settings API surface, and a target draft for skybox, world-space lighting, shadows, and anti-aliasing.
Use it when you want a concrete implementation checklist and a copyable settings contract before those later pipeline passes are fully live.
- Run a live preview scene against the shared browser renderer
- Inspect capability blockers such as offscreen pass, cubemap, and shadow support
- Keep a future scene settings draft for skybox, shadows, and AA while the passes are still being implemented
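Reducing a capability probe to a blocker list can be sketched as below. The flag names are assumptions derived from the features listed above, not the page's real probe schema.

```javascript
// Sketch: turn a capability-probe result into a human-readable list of
// blockers for the WebGL2 feature rollout. Flag names are assumed.
function listBlockers(caps) {
  const required = {
    offscreenPass: 'offscreen render pass',
    cubemap: 'cubemap support (skybox)',
    shadowMaps: 'shadow map support',
  };
  return Object.entries(required)
    .filter(([flag]) => !caps[flag])
    .map(([, label]) => label);
}
```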
Video page and camera path authoring
The Video page is a camera-authoring surface built on the same browser renderer as the editor. Run grammar into the live scene, orbit or frame the view, capture that camera into timed shots, and preview the movement path before exporting it.
The page emits two outputs: a proposed cinema camera script and a JSON render-job contract. The script is intended as the future grammar surface for cinematic camera movement, while the JSON block is the cleaner machine-facing contract for a headless capture worker.
Shot timing is absolute. Each shot stores a name, a time on the timeline, an eye position, a target position, and an easing preset that drives interpolation into the next shot.
- Capture current renderer framing into named timeline shots
- Preview interpolated camera movement inside the live WebGL scene
- Export a draft `cinema path_name { ... }` script
- Export a structured headless render job for future automation
- Use explicit eye and target coordinates for deterministic replay
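Absolute shot timing makes interpolation straightforward: find the pair of shots surrounding a timeline time, apply the outgoing shot's easing to the normalized time, then interpolate eye and target. The sketch below assumes simple field names and an illustrative easing table, not the page's real data model.

```javascript
// Minimal sketch of camera interpolation over absolutely-timed shots.
// Field names (time, eye, target, easing) are assumptions.
const EASINGS = {
  linear: (t) => t,
  easeInOut: (t) => t * t * (3 - 2 * t), // smoothstep
};

function lerp3(a, b, t) {
  return a.map((v, i) => v + (b[i] - v) * t);
}

function cameraAtTime(shots, time) {
  // shots must be sorted by absolute `time`; clamp outside the timeline
  if (time <= shots[0].time) return { eye: shots[0].eye, target: shots[0].target };
  const last = shots[shots.length - 1];
  if (time >= last.time) return { eye: last.eye, target: last.target };
  let i = 0;
  while (shots[i + 1].time < time) i++;
  const a = shots[i], b = shots[i + 1];
  const t = EASINGS[a.easing || 'linear']((time - a.time) / (b.time - a.time));
  return { eye: lerp3(a.eye, b.eye, t), target: lerp3(a.target, b.target, t) };
}
```

Because each shot stores explicit eye and target coordinates, replaying the same shot list at the same times always reproduces the same framing.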
Headless video rendering path
The current renderer is browser-first, so the practical video pipeline is headless Chromium plus Puppeteer frame stepping rather than a separate Node-only OpenGL port. That keeps authoring, preview, and final capture on the same rendering stack.
The intended automation flow is: open `video.php`, load the grammar and camera path, call `window.PG3DVideoPage.applyCameraAtTime(seconds)` for each frame time, capture the canvas output as PNG frames, then assemble the sequence with ffmpeg.
This means the Video page is already useful as an authoring and planning tool today, even before a full queue-driven video worker exists on the server.
- Recommended driver: headless Chromium with Puppeteer
- Deterministic frame stepping through `window.PG3DVideoPage.applyCameraAtTime(seconds)`
- Canvas frame export first, ffmpeg assembly second
- No need to fork the renderer into a separate headless graphics stack yet
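The deterministic frame plan behind that flow can be sketched as pure frame-time math, with the Puppeteer and ffmpeg steps shown only as comments. Filenames and fps here are assumptions for illustration.

```javascript
// Sketch of the frame plan for headless capture: one timestamp and one
// output filename per frame at a fixed fps. Paths and fps are assumed.
function planFrames(durationSeconds, fps) {
  const count = Math.round(durationSeconds * fps);
  return Array.from({ length: count }, (_, i) => ({
    file: `frame_${String(i).padStart(5, '0')}.png`,
    seconds: i / fps,
  }));
}

// A Puppeteer driver would then roughly do, per planned frame:
//   await page.evaluate((s) => window.PG3DVideoPage.applyCameraAtTime(s), frame.seconds);
//   await page.screenshot({ path: frame.file });
// and assemble the sequence with something like:
//   ffmpeg -framerate 30 -i frame_%05d.png -pix_fmt yuv420p out.mp4
```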
Scene viewer controls
The integrated viewer uses the custom SVEC orbit style that was patched into the project. Drag to orbit, use the wheel to zoom, and use the fit/reset controls to reframe the current grammar output. The XZ grid and orientation axis widget are always visible inside the active scene viewer.
Published gallery pieces use the same viewer engine in read-only mode. That gives the live site a consistent visual language between authoring, browsing, and public presentation.
- Drag to orbit
- Ctrl-drag or secondary drag to pan
- Wheel to zoom
- Fit View and Reset View for framing