🌏 Project URL ↗
https://call-me.e-kezia.com
🐈 GitHub ↗
https://github.com/ekkezia/call-me
👀 Overview

A showcase of live 3D asset documentation for a Bad Times Disco party in Wong Chuk Hang, Hong Kong.

📚 Tech Stack
Next.js
React Three Fiber
Supabase
styled-components
Python
OpenCV
MiDaS
Boto3
AWS S3
πŸ–οΈ Description

This project reinterprets the act of party documentation as a computational performance in which every camera movement and every crowd motion becomes data. Using an iPhone camera and a Python-based depth-reconstruction pipeline, the project captures 2D footage of partygoers and transforms it into textured 3D forms, producing a sculptural record of ephemeral moments.

🧠 Technical System

  1. Data Capture
    Using a laptop connected to an iPhone camera, 2D RGB images of the audience are captured live during the event.
    • Python (OpenCV) is used to stream and periodically record frames.
    • Every n frames, the captured image is sent through a monocular depth estimation model (mono+stereo_640x192) to generate a depth map representing the spatial topology of the scene; a sketch of this loop follows below.

[Image: initializing the webcam and wiring up the analyze_image method that predicts RGB-D data from a 2D image]
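
A minimal sketch of what this capture loop might look like, assuming the iPhone is exposed to OpenCV as a regular webcam. The analyze_image depth step is stubbed here (the real pipeline runs the mono+stereo_640x192 model), and the frame interval and output paths are illustrative:

```python
import os
import cv2

CAPTURE_EVERY_N_FRAMES = 30  # assumption: sample roughly once per second at ~30 fps
OUT_DIR = "frames"           # hypothetical output directory

def analyze_image(frame):
    """Stand-in for the monocular depth step (mono+stereo_640x192 in the project).
    Returns a single-channel map the same size as the input frame; the real
    implementation would run the depth model instead of this grayscale placeholder."""
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def capture_loop(camera_index=0):
    os.makedirs(OUT_DIR, exist_ok=True)
    cap = cv2.VideoCapture(camera_index)  # iPhone camera streamed through the laptop
    frame_count = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame_count += 1
            if frame_count % CAPTURE_EVERY_N_FRAMES == 0:
                # Persist the RGB frame and its predicted depth map as a pair
                cv2.imwrite(f"{OUT_DIR}/rgb_{frame_count:06d}.png", frame)
                depth = analyze_image(frame)
                cv2.imwrite(f"{OUT_DIR}/depth_{frame_count:06d}.png", depth)
    finally:
        cap.release()

if __name__ == "__main__":
    capture_loop()
```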

  2. 3D Reconstruction in Blender
    A custom Blender Python script automates the creation of depth-mapped 3D planes (sketched below):
    • The RGB image is used as a texture.
    • The corresponding depth map drives a Displace Modifier, converting 2D pixels into a relief surface.
    • The scene is then exported as a glTF using Blender's Python API (bpy.ops.export_scene.gltf), producing lightweight, browser-ready 3D objects.
[Image: creating the mesh plane in Blender for the 3D object]
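
A minimal sketch of that Blender automation, assuming one RGB/depth PNG pair per capture; the file paths, subdivision levels, and displace strength are illustrative, and the material wiring uses Blender's standard Principled BSDF setup:

```python
import bpy

# Hypothetical paths for one captured frame pair
RGB_PATH = "frames/rgb_000030.png"
DEPTH_PATH = "frames/depth_000030.png"
EXPORT_PATH = "frames/scene_000030.gltf"

# 1. Create a subdivided plane so the Displace modifier has vertices to move
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object
subsurf = plane.modifiers.new("Subdivide", type="SUBSURF")
subsurf.subdivision_type = "SIMPLE"
subsurf.levels = 6
subsurf.render_levels = 6

# 2. Drive a Displace modifier with the depth map
depth_tex = bpy.data.textures.new("DepthTex", type="IMAGE")
depth_tex.image = bpy.data.images.load(DEPTH_PATH)
displace = plane.modifiers.new("Displace", type="DISPLACE")
displace.texture = depth_tex
displace.strength = 0.5  # illustrative; tuned per scene

# 3. Apply the RGB frame as the surface texture
mat = bpy.data.materials.new("FrameMat")
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex_node.image = bpy.data.images.load(RGB_PATH)
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex_node.outputs["Color"], bsdf.inputs["Base Color"])
plane.data.materials.append(mat)

# 4. Export a lightweight, browser-ready glTF
bpy.ops.export_scene.gltf(filepath=EXPORT_PATH, export_format="GLTF_SEPARATE")
```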

  3. Storage & Cloud Pipeline
    Each generated glTF object is uploaded to AWS S3, ensuring persistent access and scalability across captures (see the upload sketch below).
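
A minimal boto3 upload sketch; the bucket name, key layout, and public-URL format are assumptions for illustration:

```python
import boto3

BUCKET = "call-me-captures"  # hypothetical bucket name

def upload_gltf(local_path: str, key: str) -> str:
    """Upload one exported glTF to S3 and return a URL the frontend can fetch."""
    s3 = boto3.client("s3")
    s3.upload_file(
        local_path,
        BUCKET,
        key,
        ExtraArgs={"ContentType": "model/gltf+json"},  # MIME type for .gltf
    )
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"

if __name__ == "__main__":
    print(upload_gltf("frames/scene_000030.gltf", "scenes/scene_000030.gltf"))
```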

  4. Frontend Visualization
    On the web interface, the 3D reconstructions are rendered with React Three Fiber, enabling users to:
    • Orbit, pan, and inspect each scene.
    • Experience the “party” as a fragmented, volumetric archive, somewhere between memory, surveillance, and sculpture.

[Image: preview of the interactive photogrammetry, rotated with the cursor]



🧩 Conceptual Layer

The project treats documentation as reconstruction: each captured frame becomes a digital fossil of a collective experience. Rather than chasing photorealism, the depth distortions and mesh irregularities become aesthetic markers of noise, presence, and proximity, visualizing how memory and perception warp in social space.

By using real-time depth mapping and procedural geometry generation, the system transforms the act of filming into 3D modeling, dissolving the boundary between photographer, subject, and algorithm.

⭐️ Featured In
πŸ“ Notes

Ideally, the application would be hosted as a purely client-side app (e.g., plain React) given its interactivity, but it currently uses Next.js because I need to host it on my personal website.

Elizabeth Kezia Widjaja © 2025 🙂