Niloom AI: Redesigning a GenAI Platform for Spatial Creators
A New York startup building a generative AI platform that empowers spatial (AR/VR/XR) creators to build immersive worlds with ease.
Scope
From discovery to beta launch
Co-Lead UX Designer
Aug 2023 - Late 2024
Project Overview
Client
Niloom AI
Industry
Generative AI · Spatial Computing · Creator Tools
Niloom AI is a generative AI platform that allows users to create AR/VR experiences using text or speech prompts. The web app serves as the core creation tool, enabling ideation, collaboration, editing, prototyping, and instant publishing to the Niloom Play app across devices like Meta Quest, iPhone, Vision Pro, and Android.
My Role
As co-lead UX designer for Niloom.ai's browser-based web app, I collaborated closely with a cross-functional team, including engineers, AI specialists, creatives, and product managers, to make advanced spatial computing accessible to both casual creators and professionals. This case study details the end-to-end process, from problem discovery to post-launch impact.
How it works
The Product
Core Challenge
How do you design a GenAI platform that is both powerful and playful, without intimidating creators?
AR/VR content creation traditionally demands specialized skills, expensive tools (e.g., Unity), and hardware expertise, excluding most creators. Workflows are fragmented, iteration is slow, and cross-platform testing is cumbersome.
Core pain points identified
  • Heavy developer dependency for prototypes
  • Disjointed asset sourcing (characters, audio, effects)
  • Complex publishing across VR/AR devices
Process/Methodology
I initiated the design process through collaborative ideation workshops, working closely with the CEO, engineers, AI specialists, and the creative team to define the UX vision of our generative AI platform. Together, we mapped the end-to-end journey of a system that transforms simple text prompts into fully immersive VR/AR scenes, customizable either manually or through our AI agent, NilooAI.
I moved quickly into low-fidelity explorations, producing rapid sketches to experiment with prompt flows, asset libraries, and real-time preview integrations. These early concepts helped us clarify how users would move from imagination to spatial output with minimal friction.
From there, I developed mid-fidelity clickable prototypes to validate the core journey: prompt → generate → edit → publish. These prototypes allowed us to test interaction logic, refine the editing experience, and ensure that AI-powered generation felt intuitive rather than technical.
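To make that core journey concrete, here is a minimal sketch of how the four stages could be modeled as a linear state machine in a browser client. All of the names here (CreationStage, CreationSession, advance) are hypothetical illustrations for this case study, not Niloom's actual implementation.

```typescript
// Hypothetical sketch of the prompt → generate → edit → publish journey.
// These names do not come from Niloom's codebase; they only illustrate
// how the four stages could be modeled as an ordered state machine.

type CreationStage = "prompt" | "generate" | "edit" | "publish";

const STAGE_ORDER: CreationStage[] = ["prompt", "generate", "edit", "publish"];

interface CreationSession {
  stage: CreationStage;
  promptText: string; // the creator's text or speech-to-text prompt
  sceneId?: string;   // set once the AI has generated a scene
}

// Advance to the next stage, so the UI can gate each step
// and keep the journey linear for casual creators.
function advance(session: CreationSession): CreationSession {
  const index = STAGE_ORDER.indexOf(session.stage);
  if (index === STAGE_ORDER.length - 1) {
    return session; // already published; nothing further to do
  }
  return { ...session, stage: STAGE_ORDER[index + 1] };
}

// Example: a session moving from prompt entry to generation.
let session: CreationSession = { stage: "prompt", promptText: "a foggy forest at dawn" };
session = advance(session);
console.log(session.stage); // "generate"
```

Modeling the journey as an ordered sequence is one way to keep the prototype's interaction logic testable: each screen maps to exactly one stage, and the transitions between them are explicit.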
The process was highly iterative, with frequent cross-functional reviews and early AI feature integration, particularly around handling ambiguous prompts through contextual suggestions, smart examples, and adaptive guidance (see the sketch below). Throughout, I placed strong emphasis on responsive browser design and intuitive spatial metaphors, ensuring the experience felt both powerful and accessible across devices.
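As one way to picture that ambiguity handling, the sketch below checks a prompt before generation and, if it looks too thin to produce a good scene, returns contextual suggestions instead of failing. The heuristic and all names are assumptions made for illustration, not the platform's real logic.

```typescript
// Hypothetical sketch of ambiguous-prompt handling: guide the creator
// with smart examples rather than generating from too little context.
// The word-count heuristic and suggestion copy are illustrative only.

interface PromptFeedback {
  ambiguous: boolean;
  suggestions: string[]; // smart examples shown next to the prompt field
}

function checkPrompt(prompt: string): PromptFeedback {
  const words = prompt.trim().split(/\s+/).filter(Boolean);
  // A very short prompt rarely gives the generator enough to work with.
  if (words.length < 3) {
    return {
      ambiguous: true,
      suggestions: [
        'Try adding a setting, e.g. "a neon city street at night"',
        'Mention a mood or style, e.g. "a cozy cabin, watercolor style"',
      ],
    };
  }
  return { ambiguous: false, suggestions: [] };
}

console.log(checkPrompt("forest"));                               // ambiguous → adaptive guidance
console.log(checkPrompt("a misty forest with glowing mushrooms")); // specific enough to generate
```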
FigJam - Ideation Process
Final Design
Now available in the App Store
Shots from my office
Niloom Play Icon - App Store
The guy behind the pixels 😊