The Leadership Blueprint: Shipping 4 Products with a Team of 6
Leadership isn’t about doing more; it’s about making the right calls with limited resources. At Amega, I led a team of six engineers across three concurrent projects: Wanz, Yoka, and Fundix. No dedicated teams per project. No luxury of focusing on just one thing. Every sprint was a balancing act between competing priorities, tight deadlines, and real users waiting on the other side. This article is a breakdown of how I managed it all: the systems we built, the trade-offs I made, and the lessons in leadership, time management, and engineering strategy that I’d carry into any team I lead next.
Wanz – The Product That Taught Me to Prioritize Ruthlessly
My journey at Amega started with Wanz, a social platform for traders where users could browse trading tips, recommendations, day trades, market news, and more. At that point, I had one junior React Native developer on the team, and the product had a critical performance problem that needed immediate attention.
The app’s infinite scroll list was barely hitting 30 fps, half of the 60 fps that is the standard for a smooth mobile experience, and it felt unusable. Feeds are the heart of any social app, so this wasn’t something we could ship around. It had to be fixed first.
I started by auditing the codebase to understand what was causing the bottleneck. We had two options: optimize the existing FlatList implementation to handle infinite scrolling properly, or replace it entirely with Shopify’s FlashList, a drop-in alternative built specifically for high-performance lists in React Native.
We went with FlashList. Not because it was the easier choice, but because it was the smarter one, given what was ahead. I already had Fundix and Yoka in the pipeline, both needed architectural planning from scratch, and I was simultaneously hiring engineers to build out the team. Spending weeks fine-tuning FlatList internals when a proven, performant solution already existed would have been the wrong trade-off. This was one of the first leadership calls I had to make at Amega: solve the problem effectively, not perfectly, and protect bandwidth for what’s coming next.
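For illustration, the swap itself is small. Here’s a minimal sketch; the FeedPost shape, the PostCard, and the estimated row height are simplified stand-ins, not our actual code:

```tsx
import React from "react";
import { Text, View } from "react-native";
import { FlashList } from "@shopify/flash-list";

// Simplified placeholder for a feed post.
type FeedPost = { id: string; text: string };

function PostCard({ post }: { post: FeedPost }) {
  return (
    <View>
      <Text>{post.text}</Text>
    </View>
  );
}

export function Feed({ posts, loadMore }: { posts: FeedPost[]; loadMore: () => void }) {
  return (
    <FlashList
      data={posts}
      keyExtractor={(post) => post.id}
      renderItem={({ item }) => <PostCard post={item} />}
      estimatedItemSize={220} // rough average row height, used for the first layout pass
      onEndReached={loadMore} // infinite scroll: fetch the next page near the bottom
      onEndReachedThreshold={0.5}
    />
  );
}
```

FlashList recycles cells instead of mounting and unmounting them as you scroll, which is where most of the frame-rate gain comes from.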
The performance issues didn’t stop at the list component. Once we had FlashList in place, we noticed another major bottleneck: the API was returning far more data than the feed actually needed. Every post came loaded with the full list of comments, every user who liked it, and every share. Most of that data was never displayed on the feed screen. It was dead weight, slowing everything down.
We stripped it back to the essentials. Each post in the feed now received only what it needed to render: post type, image, text, and the counts for likes, comments, and shares. For comments, we requested just the top two from the backend. When a user tapped to see more, a paginated API call fetched 50 comments at a time. If the thread was longer, a ‘load more’ button let users continue scrolling through the conversation without loading everything at once. We also moved the comments section out of the post card and into a dedicated modal, which solved the layout issue where expanding comments would constantly shift the post height and disrupt the scroll experience. We applied the same pattern to shares.
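Concretely, the slimmed-down contract looked something like the sketch below. The field names and the endpoint path are hypothetical; the shape is what matters:

```ts
// Hypothetical shapes for the trimmed feed payload.
type FeedComment = { id: string; author: string; text: string };

type FeedPost = {
  id: string;
  type: "text" | "image";
  text: string;
  imageUrl?: string;
  likeCount: number;
  commentCount: number;
  shareCount: number;
  topComments: FeedComment[]; // backend embeds only the first two
};

// Full comment threads load on demand, 50 at a time, when the modal opens.
async function fetchComments(postId: string, page: number): Promise<FeedComment[]> {
  const res = await fetch(`/api/posts/${postId}/comments?page=${page}&limit=50`);
  return res.json();
}
```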
Images were another pain point. Users were uploading HD photos that went straight into S3 at full resolution, which meant the feed was loading massive images that took far longer than they should have. We worked with the backend team to enforce a maximum width and height so that every image was compressed to an optimal size on upload. On the client side, we replaced React Native’s default <Image /> component with react-native-fast-image, which gave us built-in caching, priority loading, and placeholder support so users weren’t staring at blank spaces while images loaded.
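The usage is close to a drop-in replacement. A minimal sketch, with priority and cache settings chosen for illustration rather than copied from our config:

```tsx
import React from "react";
import FastImage from "react-native-fast-image";

export function PostImage({ uri }: { uri: string }) {
  return (
    <FastImage
      style={{ width: "100%", aspectRatio: 16 / 9 }}
      source={{
        uri,
        priority: FastImage.priority.normal, // feed images don't need to jump the queue
        cache: FastImage.cacheControl.immutable, // cache aggressively once downloaded
      }}
      resizeMode={FastImage.resizeMode.cover}
    />
  );
}
```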
With performance finally in a stable place, we shifted focus to feature development. We built out push notifications, the ability to save posts, and an entirely new live trading feed where users could follow another trader’s positions in real time. If a user chose to mirror someone’s trade, the person being followed earned a small percentage, turning the app from a passive content feed into an active trading community.
Fundix: Building the Foundation That Powered Three Products
Balancing Wanz and Starting Fundix
While development on Wanz continued, I had to carve out time for what was next. Fundix is a proprietary trading application that allows users to complete a four-week internship program and earn a funded trading account for free. Before writing a single line of code, I focused on creating the initial architectural documentation defining how the app would be structured, what modules it needed, and how it would connect to the broader ecosystem we were building.
Growing the Team
With Fundix moving from planning to execution, I hired three senior React Native developers and reassigned the junior developer from Wanz to contribute to Fundix as well. That gave Fundix a two-person squad, one senior, one junior. Rather than handing them a finished architecture, I brought all of them into the decision-making process. Together, we defined the approach for state management, network requests, trading logic, the authentication module, and more. Involving the team early meant everyone understood the “why” behind every architectural choice, not just the “what.”
TurboRepo: One Architecture Across Three Products
The single best architectural decision we made was adopting TurboRepo. Fundix, Yoka, and Wanz shared a significant amount of UI components, TypeScript types, and trading logic. Instead of duplicating code across three repositories, TurboRepo allowed us to build shared packages in isolation: UI components, themes, and utilities that every project could consume from a single source of truth. We set a non-negotiable rule from day one: every piece of code had to follow DRY, SOLID, and KISS principles. We weren’t just building one app. We were building a system that needed to be scalable, reliable, and robust across multiple products.
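The package names below are invented for illustration, but the structure looked roughly like this: each app consumes the shared packages as ordinary workspace dependencies.

```tsx
// Monorepo layout (simplified, names hypothetical):
//   apps/wanz, apps/fundix, apps/yoka             – the products
//   packages/ui, packages/types, packages/trading – the shared core
import React from "react";
import { Card, PnlBadge } from "@amega/ui";   // shared component library
import type { Position } from "@amega/types"; // shared TypeScript types

// A screen in any app builds on the same primitives as every other app.
export function PositionCard({ position }: { position: Position }) {
  return (
    <Card>
      <PnlBadge value={position.unrealizedPnl} />
    </Card>
  );
}
```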
UI Architecture and Design System
For the component layer, we used a combination of Higher-Order Components, Pure Components, and Atomic Design principles. The design system was directly connected to Figma; whenever a designer updated colors, measurements, or variables in Figma, it automatically triggered a pull request in the repository. No manual syncing, no design drift. Every update went through a proper review and approval process before landing in the codebase.
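As a sketch of what such a sync produces, assuming a generated-file workflow like ours, the output might look like this; the file path and token values are invented:

```ts
// packages/ui/src/theme/tokens.generated.ts (hypothetical)
// Written by the Figma sync and opened as a pull request, never edited by hand.
export const tokens = {
  color: {
    primary: "#1B6EF3",
    surface: "#FFFFFF",
    textPrimary: "#101828",
  },
  spacing: { sm: 8, md: 16, lg: 24 },
  radius: { card: 12 },
} as const;

export type Tokens = typeof tokens;
```

Components consume tokens instead of raw values, so a design change in Figma flows through review once and lands everywhere at the same time.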
Testing and Quality Assurance
We layered testing at every level. Storybook gave us visual testing for components in isolation. On top of that, we implemented unit testing and A/B testing to validate both code correctness and user experience decisions. The goal was simple: no feature ships without being tested, and no component exists without a visual reference.
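A typical story file looked something like this. The Button component and its props are illustrative, not our actual design system:

```ts
// Button.stories.ts – every component ships with a visual reference.
import type { Meta, StoryObj } from "@storybook/react";
import { Button } from "./Button";

const meta: Meta<typeof Button> = {
  title: "Atoms/Button",
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

export const Primary: Story = {
  args: { label: "Start internship", variant: "primary" },
};

export const Disabled: Story = {
  args: { label: "Start internship", disabled: true },
};
```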
Observability and Monitoring
For production visibility, we integrated Sentry for crash reporting, Datadog for performance monitoring, and Grafana for dashboards and alerting. If something broke in production, we wanted to know before the users did, and we wanted enough context to fix it fast.
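The Sentry half of that wiring is a few lines at app startup; the DSN and sample rate below are placeholders:

```ts
import * as Sentry from "@sentry/react-native";

Sentry.init({
  dsn: "https://<key>@<org>.ingest.sentry.io/<project>", // placeholder DSN
  tracesSampleRate: 0.2, // sample performance traces instead of recording every one
});
```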
CI/CD and Deployment Pipeline
We invested early in automation. GitHub Actions handled our CI/CD pipeline, with commitlint enforcing readable commit messages and ESLint rules maintaining consistent coding style across the team. Every pull request ran through automated checks, unit tests, functional tests, and linting before anyone could approve it.
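The commitlint piece, for instance, is a tiny config. The conventional-commits preset shown here is the common choice and stands in for our exact ruleset:

```ts
// commitlint.config.ts – rejects commits whose messages don't follow the convention.
import type { UserConfig } from "@commitlint/types";

const config: UserConfig = {
  extends: ["@commitlint/config-conventional"],
};

export default config;
```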
We set up four environments: Development, QA/Testing, Staging, and Production. FastLane handled the build and publishing pipeline, pushing every build to internal testing on Google Play and TestFlight. The main branch was locked. No one, including me, could push directly to production. Every change went through the pipeline, every time.
API Contracts and Cross-Team Collaboration
To streamline collaboration with the backend team, we introduced API contracts during feature grooming sessions. Before development started on any new feature, both teams agreed on the request and response structure up front. This eliminated the back-and-forth that usually slows down mobile development and allowed frontend and backend work to happen in parallel.
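Captured as shared TypeScript types, a contract from one of those grooming sessions might look like this; the feature and field names are hypothetical:

```ts
// packages/types/src/contracts/save-post.ts (hypothetical)
// Agreed before development starts; mobile builds against mocks of this shape
// while the backend implements the real endpoint in parallel.
export type SavePostRequest = {
  postId: string;
};

export type SavePostResponse = {
  saved: boolean;
  savedAt: string; // ISO-8601 timestamp
};
```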
The Payoff
We took our time setting this foundation, and it paid off. Within three months, we shipped both Fundix and Yoka. The architecture wasn’t just built for one product. It was built for a system of products that could scale independently while sharing a common core. That’s the difference between building fast and building smart.
Yoka: One Game, Two Platforms
Yoka was unlike anything else in our pipeline. It was a trading game where users picked two assets, say Tesla vs Microsoft or Apple vs Samsung, and watched them battle based on real profit and loss data. Simple concept, surprisingly addictive.
What made Yoka technically interesting was that it wasn’t just a mobile app. We shipped it as both a React Native application and a Telegram Mini App, giving us access to Telegram’s massive user base alongside traditional mobile users. Since Telegram Mini Apps are essentially web applications running inside a specialized WebView shell, we built the Telegram version on Next.js.
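Inside Telegram, the page talks to its host through the window.Telegram.WebApp object injected by Telegram’s script. A minimal Next.js entry point, with deliberately simplified typings, looks roughly like this:

```tsx
"use client";
import React, { useEffect } from "react";

// Simplified typing for the object injected by telegram-web-app.js.
declare global {
  interface Window {
    Telegram?: { WebApp: { ready: () => void; expand: () => void } };
  }
}

export default function YokaMiniApp() {
  useEffect(() => {
    window.Telegram?.WebApp.ready();  // tell Telegram the app has rendered
    window.Telegram?.WebApp.expand(); // expand the WebView to full height
  }, []);

  return <main>{/* game UI renders here */}</main>;
}
```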
Running the same product across two different platforms could have meant maintaining two completely separate codebases. We avoided that by adopting NativeWind, which brought CSS-based styling to React Native and allowed us to create components that shared the same styling approach across both Next.js and React Native. Combined with the shared UI library we had already built inside TurboRepo, our component development time dropped dramatically. Patterns and components built once were consumed by Yoka Web, Yoka Mobile, and every other project in the monorepo.
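With NativeWind configured (including its TypeScript setup), a component written once reads the same on both targets. The card below is a simplified illustration, not our actual game UI:

```tsx
import React from "react";
import { Text, View } from "react-native";

// The same Tailwind classes style this component in React Native and on the web.
export function AssetCard({ name, pnl }: { name: string; pnl: number }) {
  return (
    <View className="flex-1 rounded-xl bg-white p-4">
      <Text className="text-lg font-semibold">{name}</Text>
      <Text className={pnl >= 0 ? "text-green-600" : "text-red-600"}>
        {pnl.toFixed(2)}%
      </Text>
    </View>
  );
}
```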
With three active products now in motion, the team needed to grow. I hired an additional mobile developer for Yoka and pulled a senior frontend developer from another team to support the Telegram Mini App. Over on Wanz, things had stabilized: the junior React Native developer who had been with me since day one was promoted to Associate Software Engineer, a move he earned by consistently delivering on a product he now owned independently.
We shipped both Yoka Web and Yoka Mobile within three months and went live. Same foundation, same principles, different platform, and the TurboRepo architecture we invested in early made it possible.
Cutting Regression Time by 80% with End-to-End Automation
As we shipped more features across multiple products, regression testing became a bottleneck. The QA team was spending an overwhelming amount of time on manual regression cycles, which grew with every new release and every new feature added to the pipeline.
Something had to change. I worked closely with our QA Lead and a Senior QA Engineer to introduce end-to-end test automation using Appium, WebDriverIO, and BrowserStack. I set up the initial foundation, the framework structure, test configurations, and integration with our existing CI/CD pipeline. From there, the QA Lead and Senior Engineer took ownership, progressively writing E2E tests that covered the critical user flows across our products.
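A spec in that framework reads like the sketch below; the login flow and the accessibility-ID selectors are illustrative:

```ts
// login.e2e.ts – WebdriverIO + Appium spec for a critical flow.
describe("login", () => {
  it("signs the user in and lands on the feed", async () => {
    // '~' targets accessibility IDs, which resolve on both Android and iOS
    await $("~email-input").setValue("qa@example.com");
    await $("~password-input").setValue("secret");
    await $("~login-button").click();

    await expect($("~feed-screen")).toBeDisplayed();
  });
});
```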
The impact was immediate. Regression time dropped from consuming 100% of the QA cycle to just 20%. What used to take days now runs in hours. The team was no longer stuck in repetitive manual testing; they had the bandwidth to focus on exploratory testing, edge cases, and validating new features. Releases became faster, more frequent, and far more confident.
From 2 Hours to 15 Minutes: Optimizing the CI/CD Pipeline
Our CI/CD pipeline had an issue: builds were taking 1 hour and 40 minutes. For a team shipping across multiple products and environments, that kind of wait time was unacceptable. Every merge meant sitting idle, and every hotfix meant watching the clock.
I took ownership of optimizing it. But before I could start, we had another priority in motion: upgrading all projects to the latest version of React Native and migrating to the New Architecture. The upgrade was necessary for long-term stability, but it made things worse before they got better. Build times jumped from one hour and forty minutes to two hours and fourteen minutes. What was already slow had become a serious blocker.
I tackled it head-on. The approach was focused on aggressive caching and smarter build strategies. I introduced ccache for compiler output caching and Ninja as the build system for faster native compilation. I configured Android to trigger only the required build variants instead of rebuilding everything from scratch. On top of that, we leveraged GitHub Actions’ built-in caching to avoid redundant dependency installs across runs.
The results were dramatic. Build times across all applications, both Android and iOS, dropped to ten to fifteen minutes. That’s roughly a 90% reduction from where we started. The only scenario where builds ran longer was when a breaking change required a full cache clear, which brought the time up to around thirty-five minutes, still significantly faster than the original pipeline.
The best part was that this wasn’t a single-environment pipeline. Every run produced builds for all four environments (Development, QA/Testing, Staging, and Production) in that same ten-to-fifteen-minute window. What used to eat hours of the team’s day became something nobody had to think about.
Leadership Isn’t a Title, It’s How You Show Up Every Day
Everything I’ve talked about so far, the architecture, the performance fixes, the CI/CD pipeline, none of it works without the right team culture behind it. And culture doesn’t build itself. It starts with how you lead.
The most important lesson I learned at Amega is simple: be the example of what you expect. Don’t ask your team to write clean code if you’re cutting corners yourself. Don’t push for documentation if you’re not documenting your own decisions. Leadership isn’t about delegating expectations; it’s about demonstrating them.
Taking Care of the Team
I held one-on-one meetings with every team member every two weeks. Not status updates, real conversations about their concerns, their contributions, and where they wanted to grow. We also set up an anonymous written feedback system where each team member could share feedback about me, and I about them. Some things are harder to say face-to-face, and giving people a safe channel to be honest made the team stronger. This loop was the single biggest reason I grew as a leader because my team told me where I needed to improve, not just where they did.
Mentorship and Growth
During my time at Amega, one junior developer I mentored was promoted to Associate Software Engineer, and one senior developer was promoted to Staff Engineer. I was being considered for promotion to Principal Software Engineer as my responsibilities had expanded across the full stack. But what mattered more than titles was watching people I worked with grow into roles they earned through their own effort, knowing I played a part in that journey.
Meetings That Respect Everyone’s Time
We were ruthless about protecting the team’s time. If something could be communicated in ten minutes on Slack with more detail and context than a thirty-minute meeting, we skipped the meeting. Slack gave people time to think before responding, ask clarifying questions without interrupting, and reference the conversation later. We didn’t eliminate meetings; we eliminated the unnecessary ones.
Pre-Grooming That Actually Worked
We had standard pre-grooming and grooming sessions for new features, but I introduced an additional step, internal team grooming, after the pre-grooming session. This gave engineers a dedicated space to discuss how a feature would be approached technically before committing to estimates and timelines in the formal grooming. It removed guesswork, surfaced edge cases early, and gave the team confidence in what they were about to build.
Knowledge Sharing as a Habit
Every Friday, the company held a presentation where someone would walk through a new tool, strategy, or technology. It wasn’t optional culture; it was embedded in how we worked. These sessions helped the team stay current, sparked ideas for automation, and created a habit of learning that made us faster over time.
Lead by Building, Not by Managing
Looking back, Amega wasn’t just a job; it was where I learned what engineering leadership actually means. Not project management. Not ticket assignment. Not sitting in meetings all day deciding what other people should build.
It means making the hard architectural calls when the team is stretched thin. It means choosing FlashList over a two-week FlatList optimization because three other projects are waiting. It means building a CI/CD pipeline yourself instead of asking someone else to figure it out. It means sitting in a one-on-one and genuinely listening when a junior developer tells you they’re struggling.
Four projects. Six engineers. One shared architecture. Two promotions. An 80% reduction in regression time. A 90% reduction in build times. Three products shipped in three months.
None of that happened because I managed people. It happened because I built alongside them and made sure they had everything they needed to do their best work. If you’d like to see what we shipped, you can check out the screenshots of Fundix and Yoka on my portfolio.
In November 2025, Amega closed its doors due to reasons that were never fully shared with the team. It was the biggest setback of this entire journey. Not because of the work we lost but because of the momentum we had built together. A team that was hitting its stride, products that were growing, and an engineering culture that was genuinely working. Watching that come to an end was a reminder that not every outcome is within your control, no matter how well you lead or how hard your team ships.
But what I took away from Amega can’t be shut down. The lessons in leadership, the systems thinking, the instinct for knowing when to build and when to delegate, all of that stays. The responsibilities and leadership traits I’ve shared here are just a fraction of what I experienced. But if there’s one thing I’d carry into any team I lead next, it’s this: a product succeeds when the leader cares enough to understand the code, the people, and the problem. Everything else follows.