For the prototype to work properly, please allow access to your location and microphone. 1) Tap the space bar to start routing automatically. 2) Tap the voice button to activate the AI.

Overview
This project explores how far AI-assisted design-to-code workflows can be pushed in Figma Make: not just for visual design, but for building functional, API-driven prototypes. It began as a small test to see whether I could run the Mapbox API inside Figma Make. Once that integration came together quickly, I kept expanding the challenge, adding real-time navigation, open-source mapping, and conversational AI.

Concept
The goal was to create a high-fidelity CarPlay-style infotainment prototype that blends natural voice interaction with live navigation. The prototype connects multiple APIs to simulate production-ready behaviors, using:

  • MapLibre (rendering), OpenStreetMap (map data), OSRM (routing), and Nominatim (geocoding) for free, real-time navigation

  • Deepgram + OpenAI for speech recognition and conversational AI

  • iOS-style UI elements and audio feedback inspired by Apple CarPlay
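To make the navigation stack concrete, here is a minimal sketch of how the free services above can be wired together: Nominatim geocodes a place name and OSRM computes a driving route. The endpoints are the public OpenStreetMap Nominatim API and the OSRM demo server; the function names (`buildRouteUrl`, `geocode`) are illustrative, not the prototype's actual code.

```typescript
type LngLat = { lng: number; lat: number };

// Build an OSRM route request for a list of waypoints (lng,lat order).
function buildRouteUrl(waypoints: LngLat[]): string {
  const coords = waypoints.map(p => `${p.lng},${p.lat}`).join(";");
  return `https://router.project-osrm.org/route/v1/driving/${coords}` +
         `?overview=full&geometries=geojson&steps=true`;
}

// Geocode a free-text query with Nominatim (OpenStreetMap data).
async function geocode(query: string): Promise<LngLat | null> {
  const url = `https://nominatim.openstreetmap.org/search` +
              `?format=json&limit=1&q=${encodeURIComponent(query)}`;
  const res = await fetch(url);
  const [hit] = await res.json();
  return hit ? { lng: Number(hit.lon), lat: Number(hit.lat) } : null;
}
```

The `steps=true` parameter asks OSRM for per-maneuver instructions, which is what drives the turn-by-turn prompts; the GeoJSON geometry can be rendered directly as a MapLibre line layer.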

Key Features

  • Real-time turn-by-turn navigation using a 100% open-source stack

  • AI voice assistant capable of natural conversation and command recognition

  • Audio queue management to prevent overlap between navigation prompts and AI responses

  • Pixel-perfect UI fidelity modeled after Apple’s design system (SF Pro, layout, motion)

  • Full documentation of implementation, state management, and debugging steps
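The audio queue mentioned above can be sketched as a simple promise chain: each clip waits for the previous one to finish, so a navigation prompt and an AI response never talk over each other. This is an illustrative pattern, not the prototype's actual implementation.

```typescript
type AudioTask = () => Promise<void>;

class AudioQueue {
  private tail: Promise<void> = Promise.resolve();

  // Enqueue a clip; it plays only after everything queued before it,
  // and the chain keeps draining even if an earlier clip fails.
  enqueue(task: AudioTask): Promise<void> {
    this.tail = this.tail.then(task, task);
    return this.tail;
  }
}
```

In the prototype, each task would wrap actual playback, e.g. `queue.enqueue(() => playClip(navPromptUrl))`, where `playClip` resolves when the audio element fires its `ended` event.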

What I Learned
Building this prototype revealed both the promise and fragility of AI-driven development.

  • AI can dramatically accelerate integration work, completing complex tasks in minutes.

  • However, it often drifts from stated requirements, introduces subtle logic errors, and needs ongoing supervision.

  • Model upgrades can break established workflows, and you have no control over when those changes land.

Outcome
The result is a realistic prototype demonstrating how complex, multi-API applications can be prototyped in Figma Make without paid services or manual coding. It highlights both the creative potential and practical limits of current AI tooling for design and engineering research.