Student Machine Lab
We are a design and research lab building natural interfaces that make complex AI workflows simple and affordable for everyone. Our flagship project is a friendly offline AI assistant named Modu.
Building the OS for local AI
We’re building the interface layer for an on‑device future, where people interact with local AI through intuitive design and clear, unassuming language - not parameter counts. Today, most on‑device AI interfaces are built for developers first. We design and test new UX systems that make local AI work seamlessly and feel enjoyable for anyone to use.
While open-source local models have clear advantages, most of them are headless: they lack an interface for users to interact with. That modularity works perfectly well for developers who are training models or wiring them into automated enterprise pipelines, but for the average user, who doesn’t (and shouldn’t have to) know what a pip install is, the current environment setup simply does not work.
Our thesis is that this is largely a design challenge, and that the value proposition of local models can be made clear through creative design that emphasizes practical value and instant usability. This is why we are designing an on-device copilot that understands the context of your hardware, screen, and actions; provides proactive recommendations; and embeds itself in your daily routine, all without sending data to the cloud or being tear-jerkingly expensive.
But building great local AI isn’t just about the interface - it’s about delivering a complete, ready-to-use experience from day one. On their own, local models still fall short of what people now expect from everyday AI tools. It is only when you layer on web search, multimodality, large scale file context, powerful RAG, and rich MCP connections that they become viable for real, day‑to‑day workflows.
People naturally expect immediate functionality without having to tinker with servers, config files, or API keys. Our team aims to achieve that "out-of-the-box" functionality, where web search, multimodality, and other key characteristics come pre-bundled in local AI apps.
To deliver that kind of out‑of‑the‑box experience on real‑world machines, we also have to be honest about the limits of today’s local models. We are local‑first, but realistic: not everyone owns an M4 Max or an RTX 4090. That is why we are looking beyond local deployment alone and actively developing server-side infrastructure to bridge this gap, giving users a standardized, high-performance environment for heavier workloads when their local machines reach their limits. Whether self-hosted or managed, this layer would give people the reliability of cloud-grade compute without sacrificing the principles of open-source AI.
Who are we?
We were the first Korean team to receive the Medici Grant from Danielle Strachman, GP of the 1517 Fund, co-founder of the Thiel Fellowship, and early backer of Ethereum, Figma, Loom, and others.
Our previous projects placed 8th in Y Combinator’s SUS Pitch Competition, were nominated for Site of the Year by Framer, won a global case competition hosted by UC Berkeley, were featured on the front page of read.cv, and reached #1 trending on the leaderboard of Korea's Product Hunt.
If you’re a student based in Seoul (or not) with even a passing interest in building meaningful tools for students, get in touch.
Student Computer Company is run by Joonseo Chang and Jihoon Lim. Illustrations by the great noona Hannah Lee 🌼.