Welcome to Gong Bu, an innovative iOS app designed to enhance Korean vocabulary learning through interactive, tech-driven features. As the developer behind Gong Bu, I envisioned creating an app that would make Korean practice more dynamic, engaging, and accessible for students of all levels. Here’s a breakdown of the app’s journey from initial concept to its current state.
Solo work ☀️
Getting Started: Building the MVP
Week 1: The journey began with designing the MVP: I outlined essential features and constructed a basic project framework. The first task, covering file handling and data processing, was enabling the app to accept CSV files containing Korean and English word pairs. This let users import vocabulary sets, which the app stores and uses as prompts for language practice. I wrote a method to read the CSV data, map each row into a stored word pair, and make sure every pair could be accessed easily for testing.
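A minimal sketch of that CSV import might look like the following. The `WordPair` type and `loadWordPairs` name are illustrative rather than the app's actual code, and it assumes simple `english,korean` rows with no quoted fields:

```swift
import Foundation

// A single vocabulary entry: an English prompt and its Korean translation.
struct WordPair: Equatable {
    let english: String
    let korean: String
}

// Parse CSV text of the form "english,korean" (one pair per line).
// Blank or malformed rows are skipped rather than failing the whole import.
func loadWordPairs(fromCSV csv: String) -> [WordPair] {
    csv.split(whereSeparator: \.isNewline).compactMap { line -> WordPair? in
        let fields = line.split(separator: ",", maxSplits: 1)
            .map { $0.trimmingCharacters(in: .whitespaces) }
        guard fields.count == 2, !fields[0].isEmpty, !fields[1].isEmpty else {
            return nil
        }
        return WordPair(english: fields[0], korean: fields[1])
    }
}
```

Splitting with `maxSplits: 1` keeps any commas inside the Korean field intact, and trimming whitespace makes hand-edited CSVs more forgiving.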
Week 2: To simulate a testing environment, I implemented a simple prompt system where the app displayed English words, prompting users to write their Korean translations. This phase involved setting up the user interface and coding a response mechanism to capture the user's input. Since one of the key features was handwriting recognition, I integrated a canvas where users could draw Korean characters.
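The prompt-and-response loop described above can be sketched as a small, UI-independent type. Names like `VocabQuiz` are hypothetical; the real app wires this logic into its views and the handwriting canvas:

```swift
import Foundation

// Serves English prompts in sequence and grades the user's Korean answer.
// (Illustrative sketch; the app's actual prompt flow may differ.)
struct VocabQuiz {
    private let pairs: [(english: String, korean: String)]
    private var index = 0

    init(pairs: [(english: String, korean: String)]) {
        self.pairs = pairs
    }

    // The English word to display next, or nil once the set is finished.
    var currentPrompt: String? {
        index < pairs.count ? pairs[index].english : nil
    }

    // Compare the user's answer to the expected Korean word, ignoring
    // surrounding whitespace, then advance to the next prompt.
    // Returns false if the set is already exhausted.
    mutating func submit(answer: String) -> Bool {
        guard index < pairs.count else { return false }
        let expected = pairs[index].korean
        index += 1
        return answer.trimmingCharacters(in: .whitespaces) == expected
    }
}
```

Keeping the grading logic out of the view layer like this also makes it easy to reuse for both handwritten and spoken answers later.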
Integrating Google Cloud Vision API for Handwriting Recognition
Weeks 3-4: With the groundwork in place, I tackled one of the core features: handwriting recognition. The app connects to the Google Cloud Vision API, which interprets handwritten Korean characters and returns recognized text in JSON format. Setting up this API integration was challenging, particularly with constructing the JSON payloads and handling API responses effectively. However, the Vision API now processes users' handwriting and evaluates whether the written character matches the prompt.
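For reference, the Vision REST endpoint (`images:annotate`) takes a JSON body with the image as a base64 string and a list of requested features; `DOCUMENT_TEXT_DETECTION` is the mode suited to dense or handwritten text, and a `languageHints` of `"ko"` nudges recognition toward Korean. A sketch of building that payload (authentication and the network call are omitted, and this is not necessarily how the app structures it):

```swift
import Foundation

// Build the JSON body for a Cloud Vision `images:annotate` request
// that asks for handwriting-friendly text detection in Korean.
func makeVisionRequestBody(imageData: Data) throws -> Data {
    let request: [String: Any] = [
        // Vision expects the raw image bytes base64-encoded.
        "image": ["content": imageData.base64EncodedString()],
        "features": [["type": "DOCUMENT_TEXT_DETECTION"]],
        "imageContext": ["languageHints": ["ko"]],
    ]
    let payload = ["requests": [request]]
    return try JSONSerialization.data(withJSONObject: payload)
}
```

This body would be POSTed to the `images:annotate` endpoint; the recognized string comes back in the response's `fullTextAnnotation.text` field, which can then be compared against the prompted word.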
Testing and Debugging: During this phase, I tested the handwriting recognition feature rigorously, fixing failed API calls, reducing response delays, and making sure the app’s UI updated accurately with each test result.
Different start page :) But the handwriting OCR was not the greatest, even when calling the Google Gemini API.
Adding Speech-to-Text and Enhancing the Backend
Weeks 5-6: After setting up handwriting recognition, I worked on backend processing using Firebase and Vertex AI. This integration ensures all word-pair data and user progress are managed efficiently, providing a smooth, lag-free user experience. I also introduced a speech-to-text feature where users can speak commands or responses, which the app compares against the expected Korean vocabulary.
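Because speech recognizers often return extra whitespace, punctuation, or different casing, comparing the transcript against the expected Korean word works better after normalizing both sides. A sketch of one possible comparison (the exact rules the app applies are an assumption on my part):

```swift
import Foundation

// Normalize text before comparison: Unicode-normalize to NFC (Hangul can
// arrive in decomposed form), lowercase any Latin text, and drop
// punctuation and whitespace, keeping only letters and digits.
func normalized(_ text: String) -> String {
    let allowed = CharacterSet.alphanumerics
    var result = String.UnicodeScalarView()
    for scalar in text.precomposedStringWithCanonicalMapping
        .lowercased().unicodeScalars where allowed.contains(scalar) {
        result.append(scalar)
    }
    return String(result)
}

// True when the recognized transcript matches the expected vocabulary word.
func transcriptMatches(_ transcript: String, expected: String) -> Bool {
    normalized(transcript) == normalized(expected)
}
```

The NFC step matters specifically for Korean: a recognizer may emit decomposed jamo sequences that look identical on screen but fail a naive string comparison.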
Designing for User Experience
Weeks 7-10: I shifted focus to design, refining the app's aesthetics and making it more user-friendly. I crafted an intuitive start page and implemented features like a “Start Writing” button, which opens the handwriting canvas, and a “Submit” button to send responses for grading. Additionally, I prioritized accessibility, making the interface adaptable across various screen sizes.
Current Stage and Next Steps
As I approach the launch phase, I’m optimizing the app’s performance, enhancing visual elements, and preparing for a final round of user testing. My immediate goals include adding more responsive design elements and ensuring compatibility with Apple's development standards.
Feeling: 😵🙂↕️ – It’s been a rewarding journey, with challenges and excitement at every step. Seeing the app evolve from a basic MVP into a fully featured learning tool has been incredibly fulfilling.