AI MUSINGS

Fun with Figma Make: Simulating the Solar System

I built a NASA-accurate Solar System simulation using Figma Make — no code required.

The app uses WebGL to simulate the solar system. I recreated the orbits of all 8 planets using real astrophysics, not canned animation. It solves Kepler’s Equation — the 400-year-old formula behind orbital motion — through Newton-Raphson iteration, just like NASA’s own software.
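The Newton-Raphson step described above can be sketched in a few lines. This is a minimal illustration of the general technique, not the app's actual code; the function name, tolerance, and starting guess are my own assumptions:

```python
import math

def solve_kepler(M, e, tol=1e-10, max_iter=50):
    """Solve Kepler's Equation M = E - e*sin(E) for the eccentric
    anomaly E using Newton-Raphson iteration (angles in radians)."""
    # A common starting guess: M for low eccentricity, pi otherwise
    E = M if e < 0.8 else math.pi
    for _ in range(max_iter):
        # Newton step: f(E) = E - e*sin(E) - M, f'(E) = 1 - e*cos(E)
        delta = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E
```

Because Kepler's Equation is transcendental, it has no closed-form solution; Newton-Raphson typically converges to machine precision in a handful of iterations for planetary eccentricities.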

All data comes from NASA JPL Horizons (J2000 epoch) — the same dataset used for mission planning. Planets follow true elliptical, 3D orbits that speed up near the Sun and slow down farther away. Even comets bend and sling past the Sun under its gravity, their paths curving dynamically as in real space.
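To give a feel for how elliptical 3D orbits fall out of J2000 Keplerian elements (semi-major axis a, eccentricity e, inclination i, longitude of ascending node Ω, argument of perihelion ω, mean anomaly M), here is one way to convert a set of elements to a heliocentric position. This is a sketch under my own conventions (all angles in radians, distances in AU), not the simulation's implementation:

```python
import math

def orbital_position(a, e, i, Omega, omega, M):
    """Convert Keplerian elements to a heliocentric (x, y, z) position."""
    # Solve Kepler's Equation for the eccentric anomaly E (Newton-Raphson)
    E = M if e < 0.8 else math.pi
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < 1e-10:
            break
    # Position in the orbital plane (perifocal coordinates)
    xp = a * (math.cos(E) - e)
    yp = a * math.sqrt(1.0 - e * e) * math.sin(E)
    # Rotate by argument of perihelion, inclination, and ascending node
    cO, sO = math.cos(Omega), math.sin(Omega)
    co, so = math.cos(omega), math.sin(omega)
    ci, si = math.cos(i), math.sin(i)
    x = (cO * co - sO * so * ci) * xp + (-cO * so - sO * co * ci) * yp
    y = (sO * co + cO * so * ci) * xp + (-sO * so + cO * co * ci) * yp
    z = (so * si) * xp + (co * si) * yp
    return x, y, z
```

The speeding up near perihelion and slowing near aphelion (Kepler's second law) emerges automatically from this math: as the mean anomaly M advances at a constant rate, equal steps in M produce unequal steps in true position along the ellipse.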

You can even fast-forward time to watch which planets will be engulfed when the Sun expands into a red giant at the end of its life — a glimpse of the far future, simulated through math.

The bigger idea:

We’re entering a future where students won’t just write book reports — they’ll build living, interactive simulations to explore how the universe works.

Creating music with Suno

For the last year, I’ve been exploring music with Suno, generating original tracks through AI. To shape the output, I built a custom GPT in ChatGPT that blends songwriting rules with Suno’s prompt syntax, allowing me to craft structured prompts that turn ideas into music. Creating music with AI has opened up a new form of self-expression. It has given me the chance to explore emotions and ideas I hadn’t tapped into before. By combining my own lyrical concepts with AI’s suggestions, I’ve been able to push past traditional boundaries and experiment with sounds and stories I wouldn’t have come up with on my own. It has shown me how technology can extend creativity and help uncover new possibilities.

Adding dimensionality to social media content

I’m an astrophotographer who loves capturing the night sky, but I noticed that simply posting still images on social media wasn’t holding people’s attention. What always fascinated me in the YouTube astrophotography channels I follow was the storytelling — learning more about the objects behind the images. So I began adding new layers of content to my work. I turn a single photo into a short video by weaving in the object’s backstory, narration, and music. The story is generated with ChatGPT, the narration with ElevenLabs, and the background music with Suno.

From Start to Podcast: Rapid Production with AI Tools

The goal of this project was to test how quickly a high-quality podcast could be produced using AI tools — and whether the end result would be suitable for public release.

I began by sourcing data from Department of Transportation websites and organizing the material in NotebookLM. From there, I drafted a script, which I refined and shortened with ChatGPT to fit a three-minute podcast format. To create visuals, I used Leonardo.ai to generate a realistic cover image. For narration, I turned to ElevenLabs, which also provided sound effects to enhance the production.

From start to finish, the entire process — research, scripting, visuals, audio, and editing — took only half a day, demonstrating the potential of AI-driven workflows to streamline content creation at a professional level.