Engineering
Optimizing App Performance
Shankar Salwan (Shanks)
April 26, 2025

At Found, we faced significant performance challenges with our integrated banking, bookkeeping and taxes platform as we scaled from businesses starting out to established businesses with rich transactional history. Our users, who rely on our platform for daily financial operations, began experiencing frustratingly slow load times—in some cases up to 2 minutes for the initial app load. This post details our systematic approach to identifying, prioritizing, and resolving these performance bottlenecks, resulting in a 90% reduction in initial app load times and significantly improved user satisfaction.

Understanding the Problem

User Impact

Our performance issues manifested in several ways that directly impacted our users. The most significant was slow initial load time, with some users waiting up to 2 minutes to load their application. This created a frustrating experience, especially for users who needed quick access to their financial information throughout the day.

Customer feedback consistently highlighted these delays as a major pain point.

Breaking Down Our App Infrastructure

To understand the root causes of poor app load times, we needed to map out our application's dependencies and initialization flow. Our mobile app infrastructure consisted of a Cordova framework with an embedded React web application. Rendering the home page when the application was opened for the first time involved three phases:

  1. Cordova framework initialization - Loading the native container and bridge

  2. React app bootstrapping - Loading JavaScript bundles and initializing the application

  3. Serialized user data retrieval - Fetching all user data

When we first built the Found application, it was relatively lightweight. Users interacted with a handful of pages, components, and static images. The serialized user data endpoint joined across different tables to return all data required for the application. We made the initial decision to greedily load and cache all data upfront to ensure crisp interactions after the first load, with no additional loading indicators. This approach worked well when we started with a few thousand active users!

As the product added more features, we needed to load more pages, more components, more static files, and more fields in the single serialized endpoint. Eventually, everything bloated to the point where our longest-serving and most active users started experiencing significant performance degradation. The initial app load for every user required serializing thousands of transactions, business events, and deposit account balances, joining across several massive tables.

The Cordova framework got app development off the ground quickly, but because it is no longer actively maintained, we had to craft workarounds to implement plugins and observability. Over time, those workarounds became less effective, and adding any new observability into the framework became nearly impossible.

After thorough analysis, we classified our performance issues into two categories:

Known Issues:

  1. A single serialized endpoint for all application data

  2. Greedy loading of the entire application on startup

Unknown Issues:

  1. Cordova embedded app performance characteristics

  2. External SDK loading performance

  3. Service worker caching behavior

Our Approach to Improvement

With an application serving thousands of users daily, we couldn't simply rebuild everything from scratch. We needed an approach that would deliver incremental improvements while maintaining application stability and continuing to ship new features.

We developed a multi-phased approach based on these principles:

  • Add comprehensive observability before making changes

  • Target the highest-impact issues first

  • Create new patterns that could be applied incrementally

  • Gather data to inform longer-term architectural decisions

There was no single "silver bullet" that would solve all our performance issues, so we needed a methodical, data-driven strategy.

Phase 1: Performance Monitoring Infrastructure

Before making any changes, we needed to understand exactly where time was being spent during app initialization. The clear bottlenecks appeared to be in the React application load and the single serialized user data endpoint, but we needed specifics.

We added observability in two key areas:

  1. Backend API Tracing: We added rich tracing using Datadog spans around each field in the serialized endpoint (sketched just after this list), allowing us to see exactly which joins and queries were consuming the most time. This revealed that some transaction-related queries were taking up to 1 minute to complete for users with extensive transaction history.

  2. Frontend Timing Metrics: We added simple timestamp logging captured through the React application logs. These timestamps helped identify which sections of the initial load process were contributing most to latency.
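To make the instrumentation concrete, here's a minimal sketch of the field-level tracing, assuming a Node backend instrumented with dd-trace; the span names and data-access helpers are hypothetical stand-ins for our real serializer:

```ts
import tracer from 'dd-trace';

tracer.init(); // picks up configuration from the DD_* environment variables

// Hypothetical data-access helpers standing in for the real queries.
declare function fetchTransactions(userId: string): Promise<unknown>;
declare function fetchBalances(userId: string): Promise<unknown>;

// Wrapping each expensive field in its own span makes the Datadog flame
// graph show exactly which joins dominate the endpoint's latency.
async function serializeUser(userId: string) {
  return {
    transactions: await tracer.trace('serialize.transactions', () =>
      fetchTransactions(userId)
    ),
    balances: await tracer.trace('serialize.balances', () =>
      fetchBalances(userId)
    ),
  };
}
```

The frontend side was even simpler; something along these lines is enough to bucket the initial load into phases (the log format here is illustrative):

```ts
// Coarse phase timing for the initial load; the logged lines are later
// aggregated from the application logs.
const appStart = performance.now();

export function logPhase(phase: string): void {
  console.log(
    `load_phase=${phase} elapsed_ms=${Math.round(performance.now() - appStart)}`
  );
}
```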

This gave us our baseline metrics for app launch:

  • average time: 8s

  • p95 time: 14s

Phase 2: Addressing Known Issues

Breaking Up the Monolithic Endpoint

Our single serialized endpoint was returning the entire application state in one massive JSON payload. For users with substantial transaction history, this endpoint could take up to 2 minutes to complete.

The endpoint was problematic because:

  1. It was polled every minute, creating unnecessary server load

  2. It contained all user data, even for sections not currently viewed

  3. It consisted of database queries with expensive table joins

Our solution was to:

  1. Identify data not needed on initial page load

  2. Split these components out of the monolith endpoint into individual API calls

  3. Implement a modern asynchronous data fetching and caching strategy

We selected react-query as our data fetching library because it provided built-in functionality for handling asynchronous API calls, including caching, background refreshing, and request deduplication. This choice delivered immediate benefits by offloading data fetching lifecycle logic and enabling automatic data refresh schedules based on cache staleness.
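As an illustration, one slice carved out of the monolith might look like the following hook. This is a sketch using the v3 react-query API (newer releases ship as @tanstack/react-query), and the endpoint and types are hypothetical:

```tsx
import { useQuery } from 'react-query';

interface Balance {
  accountId: string;
  availableCents: number;
}

// Hypothetical fetcher for one slice split out of the monolith endpoint.
async function fetchBalances(): Promise<Balance[]> {
  const res = await fetch('/api/balances');
  if (!res.ok) throw new Error(`Failed to fetch balances: ${res.status}`);
  return res.json();
}

// Each slice gets its own cache key, staleness window, and refresh schedule.
export function useBalances() {
  return useQuery('balances', fetchBalances, {
    staleTime: 60_000,          // serve cached data for a minute before refetching
    refetchOnWindowFocus: true, // refresh in the background when the user returns
  });
}
```

Components that render balances simply call useBalances() and get caching, request deduplication, and background refresh without any bespoke state management code.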

We didn't attempt to split the entire endpoint at once. Instead, we made incremental updates, starting with the queries that showed the highest latency in our tracing data. As we removed each expensive query from the main endpoint, new bottlenecks would emerge, giving us clear targets for the next round of improvements.

The results were dramatic: the p99 latency for the initial index query dropped from 8 seconds to approximately 800ms.

Related Technical Wins

The architectural changes we made had benefits beyond just performance. By adopting react-query throughout our application, we established a new pattern for data management that provided several additional advantages:

  1. Simplified State Management: Each data store is managed by its own query, with its own cache key, eliminating complex state management code.

  2. More Efficient Updates: Instead of returning full object JSON payloads after mutations, we could simply invalidate the relevant queries and let them re-fetch in the background.

  3. Better Developer Experience: New features could be implemented faster since developers didn't need to modify the monolithic endpoint.

For example, when a user initiates a bank transfer, this action impacts both the user's balance and their activity log. Previously, the API endpoint would need to return updated balances and activities in its response, requiring the frontend to carefully update multiple parts of the state.

With our new approach, the transfer API simply returns a success response, and we invalidate the relevant queries (balances and activities). React-query automatically refreshes this data in the background, creating a more responsive user experience and simpler code.
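A sketch of that invalidation pattern, again using the v3 react-query API with hypothetical endpoints and cache keys:

```tsx
import { useMutation, useQueryClient } from 'react-query';

// Hypothetical transfer call; the endpoint only returns success or failure.
async function initiateTransfer(amountCents: number): Promise<void> {
  const res = await fetch('/api/transfers', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amountCents }),
  });
  if (!res.ok) throw new Error(`Transfer failed: ${res.status}`);
}

export function useTransfer() {
  const queryClient = useQueryClient();
  return useMutation(initiateTransfer, {
    onSuccess: () => {
      // Mark the affected slices stale; react-query re-fetches them in the
      // background, so the mutation response never has to carry fresh state.
      queryClient.invalidateQueries('balances');
      queryClient.invalidateQueries('activities');
    },
  });
}
```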

This pattern has been adopted across our frontend application, allowing the entire engineering organization to build features faster and with fewer bugs.

Optimizing Chunk Loading

Our network-request logs and webpack-bundle-analyzer output exposed two bottlenecks: thousands of micro-chunks and an all-or-nothing bootstrap. Every page, component, and image (desktop and mobile) went over the wire in tiny requests before the first pixel appeared, hammering slower networks.

We responded with a two-tier plan:

  • Quick win: Collapse the micro-chunks into a handful of grouped bundles, cutting the request count by ~99% without bloating any single file past a few hundred kB (see the config sketch after this list).

  • Ongoing hygiene: Prune dead routes, components, and assets to keep every new build lean.
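The quick win boils down to webpack's splitChunks configuration. A minimal sketch follows; the group names and size thresholds are illustrative, not our exact values:

```ts
// webpack.config.ts (illustrative excerpt)
import type { Configuration } from 'webpack';

const config: Configuration = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      minSize: 50_000,  // stop emitting tiny micro-chunks
      maxSize: 300_000, // keep any single bundle to a few hundred kB
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: -10,
        },
        common: {
          minChunks: 2, // shared across at least two entry points
          name: 'common',
          priority: -20,
          reuseExistingChunk: true,
        },
      },
    },
  },
};

export default config;
```

Grouping by cacheGroups keeps vendor code cacheable across releases, while the size bounds prevent both micro-chunks and megabundles.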

The request reduction alone slashed first-paint times on mobile. Additionally, we identified a longer-term fix: switching from greedy to route-level lazy loading so only the code for the current screen ships up front.
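That longer-term fix looks roughly like the following, sketched with React.lazy and a react-router v5-style layout; the page modules here are hypothetical:

```tsx
import React, { lazy, Suspense } from 'react';
import { Route, Switch } from 'react-router-dom';

// Each dynamic import() becomes its own chunk, fetched only when the user
// first navigates to that route.
const Home = lazy(() => import('./pages/Home'));
const Taxes = lazy(() => import('./pages/Taxes'));
const Bookkeeping = lazy(() => import('./pages/Bookkeeping'));

export function AppRoutes() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <Switch>
        <Route exact path="/" component={Home} />
        <Route path="/taxes" component={Taxes} />
        <Route path="/bookkeeping" component={Bookkeeping} />
      </Switch>
    </Suspense>
  );
}
```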

Phase 3: Addressing Unknown Issues

The unknown issues around Cordova performance, service workers, and external SDK load times were a complete black box to us. Before deciding whether to replace any of these frameworks, we needed to gain visibility into their performance characteristics.

We developed custom instrumentation to measure the initialization time of our Cordova container and the activation time of our service worker. This revealed that our service worker's cache strategy was causing significant delays during application startup, especially for users who hadn't opened the app recently.
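A minimal sketch of that instrumentation, assuming the standard Cordova deviceready event and the browser service worker API (the log keys are illustrative):

```ts
// Time from script start to Cordova's native bridge being ready, and to
// the service worker taking control of the page.
const t0 = performance.now();

document.addEventListener('deviceready', () => {
  console.log(`cordova_ready_ms=${Math.round(performance.now() - t0)}`);
});

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.ready.then(() => {
    console.log(`sw_active_ms=${Math.round(performance.now() - t0)}`);
  });
}
```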

We also identified several third-party SDKs that were loading synchronously during application startup, blocking the main thread and delaying interactive rendering.
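A common mitigation for this class of problem (our specific fixes are covered in the follow-up post mentioned below) is to defer such SDKs until after first paint rather than referencing them synchronously in index.html; a sketch, with a placeholder URL:

```ts
// Load a third-party SDK without blocking the main thread during startup.
function loadSdkDeferred(src: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    script.onload = () => resolve();
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(script);
  });
}

// Wait for idle time (with a fallback) so the SDK never competes with
// the initial render.
const start = () => loadSdkDeferred('https://cdn.example.com/analytics.js');
if ('requestIdleCallback' in window) {
  requestIdleCallback(() => { void start(); });
} else {
  setTimeout(() => { void start(); }, 0);
}
```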

The full details of these investigations and solutions deserve their own blog post, which will be shared soon.

Results and Lessons Learned

After implementing these improvements over several release cycles, we've seen dramatic performance gains. In the process, we've learned several valuable lessons about building and maintaining high-performance applications:

  1. Measure Before Optimizing: Our initial assumptions about performance bottlenecks were often wrong. Data-driven optimization was far more effective.

  2. Incremental Improvement Works: By breaking the problem into manageable chunks, we made steady progress without disrupting users or other development work.

  3. Performance Patterns Matter: The architectural patterns we established (like our react-query implementation) created a foundation for continued performance improvements.

  4. Performance is a Feature: The positive user feedback from our performance improvements has reminded us that speed is a critical part of the user experience, not just a technical concern.

Conclusion

Performance optimization is never a "one and done" task. It requires continuous attention, measurement, and improvement. By systematically identifying and addressing both known and unknown performance issues, we've significantly improved the experience for our users while establishing patterns that will prevent similar issues in the future.

The key to our success was balancing quick wins with long-term architectural improvements, all while maintaining product velocity. We made sure new features followed our improved patterns, preventing regression while continuing to deliver value to our users.

As we continue to scale Found's platform, the lessons and patterns established during this optimization journey will guide our development practices, ensuring that performance remains a first-class concern in everything we build.
