How We Achieved 75% Faster Builds by Removing Barrel Files
Removing JavaScript barrel files from our Jira frontend codebase led to a 75% reduction in build times, with significantly faster TypeScript highlighting and unit testing. This large-scale automated change also improved CI efficiency and made code navigation much clearer for developers.
Sometimes the biggest performance wins come from questioning widely-accepted practices. Here’s how removing barrel files led to 75% faster builds and dramatically happier developers.
When “Best Practices” Become Performance Bottlenecks
At Atlassian, our Jira frontend codebase has grown to encompass thousands of internal packages—a scale where architectural decisions that work beautifully in smaller projects can become significant performance bottlenecks. We learned this lesson the hard way with barrel files.
Barrel files are a widely used JavaScript pattern that acts as a single entry point for modules. The name comes from the idea of “putting all your exports into one barrel” that other parts of your code can draw from.
Here’s how they work. Instead of importing directly from individual source files, you create an index file (the “barrel”) that re-exports everything, then import from that single file:
```js
// Individual component files

// components/Button/Button.js
export const Button = () => { /* component logic */ };

// components/Modal/Modal.js
export const Modal = () => { /* component logic */ };

// components/TextField/TextField.js
export const TextField = () => { /* component logic */ };

// The barrel file: components/index.js
export { Button } from './Button/Button';
export { Modal } from './Modal/Modal';
export { TextField } from './TextField/TextField';
```

```js
// Traditional approach - direct imports
import { Button } from './components/Button/Button';
import { Modal } from './components/Modal/Modal';

// Barrel file approach - cleaner imports via the barrel
import { Button, Modal } from './components'; // imports from components/index.js
```
The JavaScript community embraces barrel files because they promise cleaner imports, better encapsulation, and a clear “public API” for your modules. On paper, they’re a textbook best practice.
In our codebase, the pattern was at times compounded by cascading barrel files throughout our directory hierarchy. We didn’t just have one barrel file—we had barrels feeding into other barrels:
```js
// Deep component: features/UserManagement/components/UserCard/UserCard.js
export const UserCard = () => { /* component logic */ };

// Level 1 barrel: features/UserManagement/components/index.js
export { UserCard } from './UserCard/UserCard';
export { UserList } from './UserList/UserList';

// Level 2 barrel: features/UserManagement/index.js
export { UserCard, UserList } from './components';
export { userManagementUtils } from './utils';

// Top-level barrel: features/index.js
export { UserCard, UserList, userManagementUtils } from './UserManagement';

// Final import - looks clean but creates a massive dependency chain
import { UserCard } from './features';
```
This created dependency chains where importing a single component forced our tools to process dozens of intermediate barrel files and hundreds of unnecessary modules, amplifying the performance impact exponentially.
As our codebase grew, we noticed concerning trends in our development experience:
- Local TypeScript highlighting (the time from hovering over a symbol until its type information appears) could take upwards of 2 minutes, leading many developers to conclude the entire system simply didn’t work
- A single unit test run could take minutes locally
- CI builds were running many more tests than necessary, as our dependency-graph-based test selection pulled in far more than actually needed to be tested
The symptoms were clear: something was causing our tools to do far more work than necessary. But identifying the root cause in a codebase of this scale required systematic investigation.
Investigating the Root Cause: Following the Data
We suspected that barrel files might be creating dependency graph bloat. When you import even a small component through a barrel file, bundlers and tools like TypeScript or Jest need to process the entire module — including dependencies you’re not actually using. To understand why, consider this example:
```js
// components/index.js (barrel file)
export { Button } from './Button/Button';
export { Modal } from './Modal/Modal';
export { DataTable } from './DataTable/DataTable'; // Heavy component with many dependencies
export { Chart } from './Chart/Chart'; // Another heavy component
export { Form } from './Form/Form';
// ... 50 more component exports

// Your actual code - you only want a simple Button
import { Button } from './components';
```
Even though you’re only importing `Button`, tools like TypeScript and Jest need to process the entire barrel file. This means they must read from disk, parse, and analyze `DataTable`, `Chart`, `Form`, and all 50+ other components—plus all of their dependencies.
In TypeScript’s case, it needs to type-check every imported file and resolve all type dependencies. Jest has to transform and potentially execute code from each module to understand the test environment. What should be a simple import of one component becomes a cascade of file system reads, parsing operations, and dependency resolution across up to tens of thousands of unrelated modules.
This processing overhead compounds across thousands of files in a large codebase, and every additional barrel layer multiplies the number of unnecessary operations.
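To get a feel for the scale of this effect, you can measure how many modules a single import transitively pulls in. The sketch below is a deliberately simplified Node script, not a production tool: it follows only relative ES-module imports via a regex, assumes `.js` files, and ignores aliases and node_modules.

```js
// count-fanout.js - rough estimate of the transitive module fan-out of an entry file.
// Usage: node count-fanout.js src/app.js
const fs = require('fs');
const path = require('path');

// Matches `import ... from './x'` and `export ... from './x'` (relative specifiers only).
const IMPORT_RE = /(?:import|export)[^'"]*from\s+['"](\.[^'"]+)['"]/g;

function resolveFile(fromDir, specifier) {
  const base = path.resolve(fromDir, specifier);
  // Try the exact path, a .js extension, or a directory index (a barrel!).
  for (const candidate of [base, `${base}.js`, path.join(base, 'index.js')]) {
    if (fs.existsSync(candidate) && fs.statSync(candidate).isFile()) return candidate;
  }
  return null;
}

function countReachableModules(entry) {
  const seen = new Set();
  const queue = [path.resolve(entry)];
  while (queue.length > 0) {
    const file = queue.pop();
    if (seen.has(file)) continue;
    seen.add(file);
    const source = fs.readFileSync(file, 'utf8');
    for (const match of source.matchAll(IMPORT_RE)) {
      const resolved = resolveFile(path.dirname(file), match[1]);
      if (resolved) queue.push(resolved);
    }
  }
  return seen.size;
}

console.log(`${countReachableModules(process.argv[2])} modules reachable`);
```

Running it against a file that imports one component through a barrel, and again after rewriting to a direct import, makes the difference in reachable module count immediately visible.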
The main problem was proving the impact at scale. We had two options: carefully isolate a small subset of our codebase to test barrel file removal, or attempt a broad transformation across the entire codebase.
The isolation approach had clear appeal—it would be more controlled and less risky. However, we felt that focusing on a small subset might not give us representative results of the true performance impact. Additionally, we knew that if this optimization proved worthwhile, we’d ultimately need to transform the entire codebase anyway. We wanted to understand not just the potential benefits, but also how challenging the actual implementation would be at scale.
So we chose to attempt a “good enough” codemod for the entire codebase.
A codemod (short for “code modification”) is an automated script that transforms source code—think of it as a sophisticated find-and-replace tool that understands code structure and can make systematic changes across thousands of files.
The codemod didn’t need to handle every edge case perfectly—it just needed to be good enough to demonstrate whether our hypothesis was worth pursuing with a proper implementation. This broad approach would capture the true scale effects while giving us confidence that the results would translate to a full migration.
To our surprise, the results were clearer than expected:
- Local testing performance: We saw much faster single test runs in certain scenarios
- TypeScript processing: Highlighting speed improved significantly
- CI test selection: Test selection* appeared significantly better, showing upwards of 70% improvements
- Bundle generation: We saw both a slight drop in bundle size (though we were hoping for more) and reduced bundling time
*Test Selection
We use a dependency graph-based test selection system that identifies which parts of the code are potentially impacted by a change. The system checks if changes appear in the dependency graph and only runs tests for affected areas. By reducing the effective dependency graph through removing barrel files, the number of “affected tests” dropped significantly.
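In simplified form, that selection is a reverse walk over the dependency graph. The sketch below is a minimal illustration rather than our actual system; the `reverseDeps` map and the `.test.js` naming convention are assumptions made for the example.

```js
// Minimal sketch of dependency-graph-based test selection.
// reverseDeps: Map<file, Set<file>> - for each file, the set of files that import it.
function selectAffectedTests(changedFiles, reverseDeps) {
  const affected = new Set();
  const queue = [...changedFiles];
  while (queue.length > 0) {
    const file = queue.pop();
    if (affected.has(file)) continue;
    affected.add(file);
    // Walk upwards: everything that imports an affected file is affected too.
    for (const dependent of reverseDeps.get(file) ?? []) {
      queue.push(dependent);
    }
  }
  // Only run test files that ended up in the affected set.
  return [...affected].filter((file) => file.endsWith('.test.js'));
}
```

Barrel files poison this walk: changing any module marks its barrel as affected, and the barrel in turn marks every file that imports anything through it, so the affected set balloons far beyond the code that actually changed.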
It was clear we were on to something.
Making the Case for Large-Scale Change
Armed with compelling proof-of-concept data, we now needed to convince the owners of our largest frontend codebase of a rather massive undertaking: touching almost every import statement in our codebase. The business case was strong — developer productivity could see massive benefits and the improvement in test selection also showed the potential for massive savings on CI.
However, this wasn’t just a technical challenge; there was no way we could stop development on the product to land these changes, and given the surface area of the changes, these couldn’t be done manually.
So we needed an automated transformation that could change almost 100,000 files while going mostly unnoticed in the day-to-day work of a repository where more than a thousand developers contributed thousands of changes every single day.
Engineering the Solution: Automating Large-Scale Refactoring
Our strategy was methodical: first stop the introduction of new barrel files, then gradually transform existing barrel-file imports to direct imports, and finally delete the unused barrel files once they were no longer referenced anywhere.
Building the Technical Foundation
The data collection phase required understanding our entire dependency graph. We used an internal tool called factsmap that collects metadata about files throughout our codebase—tracking, amongst other things, what they import and export.
Side note: We use this facts system as the foundation for other tools like our test selection system.
This metadata allowed us to resolve barrel-file imports to their actual targets, cutting out any intermediary import chains.
Where we previously had chains like `a → b → c → d`, we could now create direct imports from `a → d`:
```js
// Before: Import chain through barrel files

// a.js
import { Button } from './ui';

// ui/index.js (barrel file 'b')
export { Button } from './components';

// ui/components/index.js (barrel file 'c')
export { Button } from './Button/Button';

// ui/components/Button/Button.js (actual target 'd')
export const Button = () => { /* component logic */ };

// After: Direct import

// a.js
import { Button } from './ui/components/Button/Button';
```
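With per-file import/export metadata in hand, flattening such a chain is essentially a graph walk: keep following re-exports until you reach the file that actually declares the symbol. Here is a minimal sketch; the `facts` shape and the injected `resolvePath` helper are hypothetical stand-ins for illustration, not factsmap’s real format.

```js
// facts: { [file]: { [exportedName]: 'local' | { from: './relative/path', name: 'originalName' } } }
// resolvePath(fromFile, specifier): resolves a relative specifier to an absolute file path.
function resolveToSource(facts, file, exportedName, resolvePath) {
  let currentFile = file;
  let currentName = exportedName;
  const seen = new Set();
  for (;;) {
    const key = `${currentFile}:${currentName}`;
    if (seen.has(key)) throw new Error(`Circular re-export involving ${key}`);
    seen.add(key);
    const entry = facts[currentFile] && facts[currentFile][currentName];
    if (!entry || entry === 'local') {
      // No further re-export: this file declares the symbol, so import from here.
      return { file: currentFile, name: currentName };
    }
    // Re-export: hop one level closer to the real source.
    currentFile = resolvePath(currentFile, entry.from);
    currentName = entry.name;
  }
}
```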
Interestingly, some files were hybrids—while they exported their own functionality, they also re-exported other modules under their own namespace, effectively acting as both source files and barrel files, making the dependency chains even more complex than we initially realised.
For the transformation itself, we leveraged ESLint’s code modification capabilities in a clever way. ESLint has an “auto-fixing” feature where rules can not only detect problems but also automatically correct them. When you run `eslint --fix`, rules marked as “fixable” can modify the source code directly. We created an ESLint rule that functioned as both a linter and a code transformer through its fixer functionality. Here’s a simplified example of how fixable rules work:
```js
// Simplified ESLint rule: flags barrel-file imports and rewrites them.
// isBarrelFileImport and resolveToDirectImport are project-specific helpers (not shown).
module.exports = {
  meta: {
    fixable: "code"
  },
  create(context) {
    return {
      ImportDeclaration(node) {
        if (isBarrelFileImport(node.source.value)) {
          context.report({
            node,
            message: "Avoid barrel file imports",
            fix(fixer) {
              // Replace the whole import statement with its resolved direct import.
              const directImport = resolveToDirectImport(node);
              return fixer.replaceText(node, directImport);
            }
          });
        }
      }
    };
  }
};
```
This dual approach meant we could simultaneously mark certain code paths as “no longer able to introduce barrel files” while using the same rule to rewrite existing imports. We then used this fixable ESLint rule with a specialised parallelised ESLint runner to execute the codemod against our entire codebase efficiently. More details about ESLint’s fixable functionality can be found in the “fixable” section of the ESLint custom rules documentation.
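For illustration, the “apply fixes across many files” part can be reproduced with ESLint’s public Node API; the sketch below is a single-process version, whereas our internal runner shards the file list across parallel workers.

```js
const { ESLint } = require('eslint');

async function runCodemod(patterns) {
  // fix: true makes ESLint apply the fixers of fixable rules, including custom ones.
  const eslint = new ESLint({ fix: true });
  const results = await eslint.lintFiles(patterns);
  // Persist the fixed source back to disk.
  await ESLint.outputFixes(results);
  const unfixed = results.filter((result) => result.errorCount > 0);
  console.log(`Processed ${results.length} files; ${unfixed.length} still report errors.`);
}

runCodemod(['src/**/*.js']).catch((error) => {
  console.error(error);
  process.exit(1);
});
```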
The Three-Wave Landing Strategy
Perhaps our biggest innovation was solving the coordination challenge of landing changes during active development. We needed to avoid blocking over a thousand developers in their day-to-day work during the migration process, which we knew wouldn’t be completed in a single day. We also needed to be mindful of our changes—even if we worked during off-hours, we risked hammering all open pull requests and branches with conflicts, causing havoc among developers and forcing them into an endless loop of rebasing and fixing conflicts rather than staying productive. There had to be a smarter way.
Our research revealed that up to 80% of our codebase remains dormant at any given time, meaning no one is actively working on those files. This insight shaped our “three waves” approach.
We created a script that pulled data from our VCS on active branches and the files changed on them. After running the codemod against a subset of our codebase, we would automatically reject changes to the parts of the codebase deemed “hot” because they had active changes against them, thereby avoiding conflicts.
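A bare-bones version of that hot-set computation needs nothing more than git: list the active remote branches, diff each against the mainline, and take the union of touched files. The sketch below assumes an `origin/main` mainline and omits the branch-activity filtering our real script applied.

```js
const { execSync } = require('child_process');

function git(args) {
  return execSync(`git ${args}`, { encoding: 'utf8' }).trim();
}

// Files touched by any active remote branch relative to main: the "hot" set.
function collectHotFiles() {
  const hot = new Set();
  const branches = git("branch -r --format='%(refname:short)'")
    .split('\n')
    .filter((branch) => branch && !branch.includes('HEAD') && branch !== 'origin/main');
  for (const branch of branches) {
    // Triple-dot diff: changes on the branch since it diverged from main.
    const changed = git(`diff --name-only origin/main...${branch}`);
    for (const file of changed.split('\n')) {
      if (file) hot.add(file);
    }
  }
  return hot;
}

// Keep only codemod changes that don't collide with active work.
function filterSafeFiles(codemoddedFiles, hot) {
  return codemoddedFiles.filter((file) => !hot.has(file));
}
```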
In the first wave, we used this system to avoid conflicts on a package basis. This allowed us to safely transform the majority of dormant code without interfering with day-to-day development work.
The second wave refined this approach further, targeting individual files that weren’t being changed by any pull requests rather than entire packages. While we could have skipped the first wave and immediately started with just the file-based approach, we knew we would need multiple waves and understood that our data on “hot” areas was always at risk of becoming stale. Avoiding entire packages felt like the safer approach and turned out to be correct, as we did not encounter any conflicts.
By the third wave, we were down to just a few hundred files where we competed directly with developers for landing changes, accepting that we would both encounter and create conflicts in this final “first come, first served” situation.
This approach proved remarkably successful. We landed changes to over 90,000 files within a few days while experiencing minimal conflicts for both our automation and the developers working in the codebase.
The Cleanup Phase
After each wave, we ran additional tooling to identify files that no longer appeared in any dependency graph and deleted them automatically. This cleanup process became a nice side effect of the project—we ended up deleting several thousand obsolete files that were no longer serving any purpose.
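Conceptually, that cleanup is the inverse of the fan-out measurement shown earlier: compute everything reachable from the application’s entry points, and whatever remains is a deletion candidate. A minimal sketch, assuming a `dependenciesOf(file)` lookup built from the same facts metadata:

```js
// Files reachable from the entry points via the dependency graph.
function reachableFrom(entryPoints, dependenciesOf) {
  const reachable = new Set();
  const queue = [...entryPoints];
  while (queue.length > 0) {
    const file = queue.pop();
    if (reachable.has(file)) continue;
    reachable.add(file);
    queue.push(...dependenciesOf(file));
  }
  return reachable;
}

// Everything on disk that no entry point can reach is a candidate for deletion.
function findOrphans(allFiles, entryPoints, dependenciesOf) {
  const reachable = reachableFrom(entryPoints, dependenciesOf);
  return allFiles.filter((file) => !reachable.has(file));
}
```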
The technical automation was only half the battle. We also needed clear communication about the timeline and impact of changes, rollback strategies in case we introduced regressions, and continuous monitoring to catch issues early. While we did create friction, especially in the last wave, our preparation and wave-based approach minimised the disruption to ongoing development work.
Measuring the Impact: The Results Speak for Themselves
The performance improvements in day-to-day development were immediately noticeable.
- TypeScript highlighting speed improved by more than 30%.
- Local unit testing became around 50% faster on average, with certain packages seeing up to 10x improvements.
These improvements meant faster feedback loops and less time waiting for tools to catch up with code changes. But the CI improvements were even more dramatic.
- Unit test execution saw 88% fewer tests run in a typical build, dropping from 1,600 to 200 tests, with a 73% reduction in average runtime.
- Integration testing triggered 85% fewer tests per build, dropping from 130 to 20.
- VR testing saw a 50% reduction in visual regression tests run, dropping from 50 to 25.
Overall build time per commit saw a 75% reduction in build minutes consumed, delivering a significant drop in costs.
This went hand in hand with improved reliability. For developers, it meant that faulty tests landing on our main branch were very unlikely to even run, and show unrelated failures, on their branch builds. On the infrastructure side, reliability improved as pressure came off resource bottlenecks, which boosted overall developer satisfaction thanks to more stable, faster feedback loops and fewer frustrating waits.
These weren’t just nice-to-have improvements — they represented a fundamental change in our development velocity and infrastructure efficiency.
Beyond the Numbers: Unexpected Benefits
Beyond the raw performance metrics, we discovered additional advantages that we hadn’t anticipated. Clicking on imports now takes you directly to source files rather than barrel files that point to other barrel files, making IDE navigation significantly better. Direct imports make it obvious what code actually depends on what, creating clearer dependency relationships. Our build tooling became simpler with less complexity in bundling and analysis tools.
This work didn’t just solve our immediate performance problems—it enabled additional improvements. With better test selection, we could implement smarter CI strategies through dynamic pipeline orchestration, further reducing the amount of resources we needed.
The barrel file removal became a force multiplier for other performance initiatives across our development infrastructure.
Weighing the Trade-offs: What We Gained and What We Lost
This wasn’t a pure win—we made deliberate trade-offs that we knew we’d have to live with. Packages can no longer easily control their “public API” through barrel files, losing a layer of encapsulation. Moving source files now requires updating every direct import, making refactoring more fragile. Direct imports might encourage deeper dependencies between modules, potentially increasing coupling.
This change forced us to reconsider some fundamental assumptions about code organisation at scale. Do we need “private” code boundaries in a codebase that never gets published to npm? Is traditional encapsulation the right model for internal development at this scale? How should we handle code discoverability when we have thousands of internal packages?
These are ongoing architectural discussions, but the performance benefits clearly justified the trade-offs in our context. We’ve moved toward prioritising performance over abstraction when the two conflict, favouring direct relationships over encapsulated ones for internal code, and making measurement-driven decisions rather than following conventional wisdom.
This doesn’t mean we’ve abandoned good engineering practices—we’ve just become more intentional about when and how we apply them.
Key Takeaways: Lessons for Engineering Teams
This experience reinforced several important principles for teams managing large codebases: question “best practices” when they don’t serve your specific scale, measure everything before and after changes, and consider compound effects of seemingly small decisions.
Don’t be afraid to challenge conventional wisdom when the data points in a different direction—sometimes the biggest wins come from removing complexity rather than adding optimisations, and the best optimisation is the complexity you choose not to maintain.
Interested in learning more about large-scale JavaScript performance optimisation? We’re always happy to share insights with teams facing similar challenges.