I. Overview
React toolchain tag cloud:
Rollup Prettier Closure Compiler
Yarn workspace [x]Haste [x]Gulp/Grunt+Browserify
ES Module [x]CommonJS Module
Flow Jest ESLint React DevTools
Error Code System HUBOT(GitHub Bot) npm
P.S. Items marked with [x] were used previously but dropped as of React 16
Simple classification as follows:
Development: ES Module, Flow, ESLint, Prettier, Yarn workspace, HUBOT
Build: Rollup, Closure Compiler, Error Code System, React DevTools
Test: Jest, Prettier
Release: npm
The source code is organized as ES modules, supplemented by type checking and lint/formatting tools; Yarn handles module dependencies and HUBOT checks PRs. Rollup plus Closure Compiler performs the build, an error-code mechanism enables production error tracking, and DevTools assists with bundle checks from the side. Jest drives the unit tests, and formatting the bundles helps confirm the build output is clean enough; finally, new packages are published through npm
The process as a whole is not very complex, but some details were thought through in real depth, such as the Error Code System, the double-insurance envification (dev/prod environment distinction), and the tooled release process
II. Development Tools
CommonJS Module + Haste -> ES Module
React 15 and earlier used CommonJS module definitions, for example:
var ReactChildren = require('ReactChildren');
module.exports = React;
React 16 switched to ES modules, for several reasons:
- Import/export problems surface earlier. With CommonJS it is easy to require a method that does not exist, and the problem is only discovered when the call throws at runtime. ES modules are static: import and export must match by name, otherwise the build errors out
- Smaller bundles. ES modules enable tree shaking, fundamentally because module.exports is an object-level export while export supports finer-grained, per-binding exports. Importing by name also lets tools like Rollup flatten and splice modules together, and minifiers can then apply more aggressive variable-name mangling on top, further reducing bundle size
Only the source code was switched to ES modules; the unit test cases were not, because CommonJS is friendlier to some Jest features (such as resetModules). Even after switching to ES modules, scenarios needing module-state isolation would still have to fall back to require, so converting the tests would gain little
As for Haste, it is a custom module-handling tool built by the React team to avoid long relative paths, for example:
// ref: react-15.5.4
var ReactCurrentOwner = require('ReactCurrentOwner');
var warning = require('warning');
var canDefineProperty = require('canDefineProperty');
var hasOwnProperty = Object.prototype.hasOwnProperty;
var REACT_ELEMENT_TYPE = require('ReactElementSymbol');
Under the Haste mechanism, module references don't need explicit relative paths; modules are looked up automatically by a project-unique module name, for example:
// Declaration
/**
* @providesModule ReactClass
*/
// Reference
var ReactClass = require('ReactClass');
This superficially solves the long-path problem (without addressing the root cause, a deeply nested project structure), and a non-standard module mechanism has several typical disadvantages:
- It departs from the standard, so integrating tools from the standard ecosystem requires adaptation work
- The source is harder to read, and module dependency relationships are harder to trace
React 16 removed most of the custom module mechanism (a small part remains for React Native) and adopted Node's standard relative-path references. The long-path problem was solved outright by restructuring the project into a flat directory layout: references within a package go at most 2 levels deep, and cross-package references are handled by Yarn as top-level absolute references
Flow + ESLint
Flow is responsible for checking type errors, discovering potential type mismatch problems as early as possible, for example:
export type ReactElement = {
$$typeof: any,
type: any,
key: any,
ref: any,
props: any,
_owner: any, // ReactInstance or ReactFiber
// __DEV__
_store: {
validated: boolean,
},
_self: React$Element<any>,
_shadowChildren: any,
_source: Source,
};
Besides static type declarations and checking, Flow's biggest feature is deep support for React components and JSX:
type Props = {
foo: number,
};
type State = {
bar: number,
};
class MyComponent extends React.Component<Props, State> {
state = {
bar: 42,
};
render() {
return this.props.foo + this.state.bar;
}
}
P.S. For more information about Flow's React support, please see Even Better Support for React in Flow
There is also a bit of Flow "magic" for checking export types, used to verify that a mock module's export types match the source module's:
type Check<_X, Y: _X, X: Y = _X> = null;
(null: Check<FeatureFlagsShimType, FeatureFlagsType>);
(The trick works because the declaration only type-checks when each type is a subtype of the other, i.e. the two types are mutually assignable.)
ESLint is responsible for catching syntax errors and violations of the agreed coding style, for example:
rules: {
'no-unused-expressions': ERROR,
'no-unused-vars': [ERROR, {args: 'none'}],
// React & JSX
// Our transforms set this automatically
'react/jsx-boolean-value': [ERROR, 'always'],
'react/jsx-no-undef': ERROR,
}
Prettier
Prettier is used to format code automatically, in several places:
- Reformatting old code into the unified style
- Formatting modified files before committing
- Working with continuous integration to ensure PR code style is fully consistent (otherwise the build fails and the style differences are printed)
- Integrating with the IDE, for occasional formatting during daily work
- Formatting build output, which improves dev-bundle readability and also helps spot redundant code in the prod bundle
A unified code style obviously benefits collaboration. Open source projects in particular face PRs in all kinds of styles, so making a strict formatting check a mandatory part of continuous integration eliminates code-style differences for good and simplifies maintenance
P.S. Forcing a single format across the entire project may seem somewhat extreme, and it was a bold experiment, but reportedly the results are quite good:
Our experience with Prettier has been fantastic, and we recommend it to any team that writes JavaScript.
Yarn workspace
Yarn's workspaces feature is used to solve monorepo package dependencies (similar in function to lerna bootstrap), by creating symlinks under node_modules to "trick" Node's module resolution
Yarn Workspaces is a feature that allows users to install dependencies from multiple package.json files in subfolders of a single root package.json file, all in one go.
Configure Yarn workspaces through package.json/workspaces:
// ref: react-16.2.0/package.json
"workspaces": [
"packages/*"
],
Note: Yarn's actual processing is similar to Lerna's; both are implemented with symlinks. Providing monorepo support at the package-manager layer is simply the more reasonable place for it; for the specific reasons, see Workspaces in Yarn | Yarn Blog
Then, after yarn install, packages can happily reference each other:
import {enableUserTimingAPI} from 'shared/ReactFeatureFlags';
import getComponentName from 'shared/getComponentName';
import invariant from 'fbjs/lib/invariant';
import warning from 'fbjs/lib/warning';
P.S. Additionally, Yarn and Lerna can seamlessly combine, hand dependency processing part to Yarn through useWorkspaces option, details see Integrating with Lerna
HUBOT
HUBOT refers to GitHub bots, usually used to:
- Connect to continuous integration, triggering builds/checks on PRs
- Manage issues, closing inactive discussion threads
Mainly automation around PRs and issues. For example, the React team plans (not yet implemented) to have a bot reply on each PR with its bundle-size impact, pushing for continuous bundle-size optimization
Currently, each build writes the bundle-size changes to a file, which Git then tracks (via commits), for example:
// ref: react-16.2.0/scripts/rollup/results.json
{
"bundleSizes": {
"react.development.js (UMD_DEV)": {
"size": 54742,
"gzip": 14879
},
"react.production.min.js (UMD_PROD)": {
"size": 6617,
"gzip": 2819
}
}
}
The drawback is predictable: this JSON file conflicts constantly, so people either waste energy resolving merges or stop committing the auto-generated file altogether, letting it lag behind. Hence the plan to hand this chore off to a GitHub bot
III. Build Tools
Bundle Form
Previously, two bundle forms were provided:
- A UMD single file, used as an external dependency
- CJS scattered files, used to support building your own bundle (taking React as a source dependency)
This had some problems:
- Inconsistent self-built versions: bundles built with different build environments/configurations all differ
- Bundle performance left room for optimization: building a library the way an app is packaged is not quite suitable
- Not conducive to experimental optimization: packaging, minification and other optimizations cannot be applied to scattered module files
React 16 adjusted the bundle forms:
- CJS scattered files are no longer provided; what npm delivers are pre-built, uniformly optimized bundles
- A UMD single file and a CJS single file are provided, for the Web environment and the Node environment (SSR) respectively
As an indivisible library artifact, the bundle can take in every optimization step, free of the limitations the old forms imposed
Gulp/Grunt+Browserify -> Rollup
The previous build system was a hand-crafted tool based on Gulp/Grunt + Browserify, which later became limiting, for example:
- Poor performance in the Node environment: frequent process.env.NODE_ENV access slowed down SSR, but this couldn't be fixed on the library side, because Uglify relies on that expression to strip dead code
So React SSR best practices generally included a line like "re-bundle React and strip process.env.NODE_ENV at build time" (React 16 no longer needs this; see the bundle-form changes above)
The overly complicated custom build tooling was discarded in favor of the better-suited Rollup:
It solves one problem well: how to combine multiple modules into a flat file with minimal junk code in between.
P.S. Both switches, Haste -> ES Module and Gulp/Grunt+Browserify -> Rollup, moved from non-standard custom solutions to standard open ones. There is a lesson in "hand-crafting": when an industry standard seems not to fit our scenario, must we really build our own?
mock module
The build may face dynamic-dependency scenarios: different bundles depend on modules with similar functionality but different implementations. For example, React Native's error reminder mechanism displays a red box, while the Web environment logs to the console
There are 2 general solutions:
- Runtime dynamic dependency (injection): put both implementations into the bundle and select by configuration or environment at runtime
- Build-time dependency handling: build multiple versions, each bundle containing only the dependency modules it needs
Build-time handling is obviously cleaner; this is the mock module approach. Developers don't need to care about the differences, and the build automatically selects the concrete dependency per environment, implemented with simple hand-written Rollup plugins: dynamic dependency configuration plus build-time dependency replacement
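The replacement idea can be sketched as a tiny Rollup-style plugin (names and fork map hypothetical, not React's actual plugin): a resolveId hook redirects a shared module to a bundle-specific implementation.

```javascript
// A Rollup-style plugin whose resolveId hook swaps a shared module for a
// bundle-specific fork at build time; returning null defers to default
// resolution. Plugin name and module paths are made up for illustration.
function forksPlugin(forkMap) {
  return {
    name: 'forks',
    resolveId(importee) {
      return forkMap[importee] || null;
    },
  };
}

// For a hypothetical React Native bundle, the error-reporting module is forked:
const plugin = forksPlugin({
  'shared/showErrorDialog': './forks/showErrorDialog.native.js',
});
```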
Closure Compiler
google/closure-compiler is a very powerful minifier with 3 optimization modes (compilation_level):
- WHITESPACE_ONLY: removes comments, redundant punctuation and whitespace; logically fully equivalent to the source
- SIMPLE_OPTIMIZATIONS: the default mode; on top of WHITESPACE_ONLY it also shortens variable names (local variables and function parameters); logically equivalent except in special cases (such as eval('localVar') accessing a local variable by name, or parsing fn.toString())
- ADVANCED_OPTIMIZATIONS: on top of SIMPLE_OPTIMIZATIONS it renames more aggressively (global variable names, function names and properties), removes dead code (unreachable or unused), and inlines function calls and constants when worthwhile (replacing a call with the function body, and a constant with its value)
P.S. For detailed information about compilation_level see Closure Compiler Compilation Levels
ADVANCED mode is strikingly powerful:
// Input
function hello(name) {
alert('Hello, ' + name);
}
hello('New user');
// Output
alert("Hello, New user");
P.S. Can try online at Closure Compiler Service
Switching modes carries some risk, so React still uses SIMPLE mode, but there are plans to enable ADVANCED mode later and fully exploit Closure Compiler for bundle-size optimization
Error Code System
In order to make debugging in production easier, we're introducing an Error Code System in 15.2.0. We developed a gulp script that collects all of our invariant error messages and folds them to a JSON file, and at build-time Babel uses the JSON to rewrite our invariant calls in production to reference the corresponding error IDs.
In short, detailed error messages are replaced with error codes in the prod bundle. When a runtime error is caught in production, the error code is thrown together with its context information, and an error-code decoding service restores the complete message. This keeps the prod bundle as lean as possible while retaining the same detailed error reporting as the development environment
For example illegal React Element error in production environment:
Minified React error #109; visit https://reactjs.org/docs/error-decoder.html?invariant=109&args[]=Foo for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
A very interesting trick; considerable effort clearly went into the debugging experience
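The prod-side of this scheme can be sketched in a few lines (the helper name and exact shape are hypothetical; the real rewrite is done by a Babel transform over invariant calls): only a numeric code and its arguments survive in the bundle, encoded into a decoder URL like the one above.

```javascript
// Build a minified-error message: code + args instead of the full text.
// minifiedError is an illustrative helper, not React's actual function.
function minifiedError(code, ...args) {
  const query = args
    .map((a) => '&args[]=' + encodeURIComponent(a))
    .join('');
  return new Error(
    'Minified React error #' + code + '; visit ' +
      'https://reactjs.org/docs/error-decoder.html?invariant=' + code + query +
      ' for the full message or use the non-minified dev environment' +
      ' for full errors and additional helpful warnings.'
  );
}
```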
envification
So-called envification means building per environment, for example:
// ref: react-16.2.0/build/packages/react/index.js
if (process.env.NODE_ENV === 'production') {
module.exports = require('./cjs/react.production.min.js');
} else {
module.exports = require('./cjs/react.development.js');
}
A common technique: process.env.NODE_ENV is replaced with the string constant for the target environment at build time, and subsequent build steps (bundler/minifier) then strip the now-dead branch
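The replacement step can be sketched naively (real builds use tools like a Rollup/Babel replace plugin rather than a regex, so treat this purely as an illustration): after substitution, the condition is constant, and a minifier folds it away together with the dead branch.

```javascript
// Naive envification sketch: substitute the expression with a string
// literal, leaving a constant condition for the minifier to eliminate.
function envify(source, env) {
  return source.replace(/process\.env\.NODE_ENV/g, JSON.stringify(env));
}

const out = envify(
  "if (process.env.NODE_ENV !== 'production') { warn(); }",
  'production'
);
// out now contains a constant '"production" !== \'production\'' condition,
// i.e. dead code a minifier will remove
```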
Besides the package entry file, the same check is made inside the bundle as double insurance:
// ref: react-16.2.0/build/packages/react/cjs/react.development.js
if (process.env.NODE_ENV !== "production") {
(function() {
module.exports = react;
})();
}
Additionally, worried that developers might mistakenly ship the dev bundle to production, some reminders were added in React DevTools:
This page is using the development build of React. 🚧
DCE check
A DCE (dead code elimination) check verifies that dead code was actually removed
One special case was considered: setting process.env.NODE_ENV at runtime is also unreasonable (extra code for the other environment may remain), so a bundle environment check is done through React DevTools as well:
// ref: react-16.2.0/packages/react-dom/npm/index.js
function checkDCE() {
if (process.env.NODE_ENV !== 'production') {
throw new Error('^_^');
}
try {
__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(checkDCE);
} catch (err) {
console.error(err);
}
}
if (process.env.NODE_ENV === 'production') {
checkDCE();
}
// DevTools i.e. __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE declaration
checkDCE: function(fn) {
try {
var toString = Function.prototype.toString;
var code = toString.call(fn);
if (code.indexOf('^_^') > -1) {
hasDetectedBadDCE = true;
setTimeout(function() {
throw new Error(
'React is running in production mode, but dead code ' +
'elimination has not been applied. Read how to correctly ' +
'configure React for production: ' +
'https://fb.me/react-perf-use-the-production-build'
);
});
}
} catch (err) { }
}
The principle is similar to Redux's [minified detection](/articles/redux 源码解读/#articleHeader5): declare a method containing a dev-environment check that embeds a marker string (in the example above, ^_^), then at runtime (via DevTools) inspect the fn.toString() source. If it still contains the marker, DCE failed (dead code was not removed during the build), and the error is thrown asynchronously
P.S. For detailed information about DCE check, please see Detecting Misconfigured Dead Code Elimination
IV. Test Tools
Jest
Jest is a testing tool from Facebook, with these highlights:
- Snapshot testing: UI-test React/React Native components via DOM-tree snapshots, comparing a component's render result with the previous snapshot; no difference means pass
- Zero configuration: unlike Mocha, which is powerful and flexible but tedious to configure, Jest works out of the box, bundling a test runner, assertion library, mock mechanism, coverage reporting, and so on
Snapshot testing is similar to the general approach of automated UI testing: capture the correct result as a baseline (the baseline needs continual updating, so snapshot files are generally committed with the source), then compare each subsequent change against the previous capture; any difference signals a problem
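The principle can be reduced to a small sketch (not Jest's implementation, which serializes to .snap files and offers interactive updating): serialize the render result, record it on the first run, and require an exact match on every later run.

```javascript
// Core of snapshot testing: first run writes the baseline, later runs
// must reproduce it exactly. matchSnapshot is an illustrative helper.
function matchSnapshot(rendered, store, key) {
  const serialized = JSON.stringify(rendered, null, 2);
  if (!(key in store)) {
    store[key] = serialized; // first run: record the baseline
    return true;
  }
  return store[key] === serialized; // later runs: must match the baseline
}
```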
Speaking of testing React apps, there is an even more aggressive option: Enzyme. Jest + Enzyme can be adopted for deep testing of React components; for more, see Unit Testing React Components: Jest or Enzyme?
P.S. For general methods of frontend UI automated testing, see How to do frontend automated testing? - Zhang Yunlong's answer - Zhihu
P.S. Can try online at repl.it - try-jest by @amasad
preventing Infinite Loops
That is, infinite-loop checking: the team doesn't want the test run blocked by an infinite loop (React 16 converted recursion into loops, leaving many while (true) constructs they are not entirely comfortable with). The handling resembles infinite-recursion checking: cap the maximum iteration count (a TTL). It is done with a Babel plugin that injects the check when building for the test environment:
// ref: https://github.com/facebook/react/blob/master/scripts/jest/preprocessor.js#L38
require.resolve('../babel/transform-prevent-infinite-loops'),
// ref: https://github.com/facebook/react/blob/master/scripts/babel/transform-prevent-infinite-loops.js#L37
'WhileStatement|ForStatement|DoWhileStatement': (path, file) => {
const guard = buildGuard({
ITERATOR: iterator,
MAX_ITERATIONS: t.numericLiteral(MAX_ITERATIONS),
});
if (!path.get('body').isBlockStatement()) {
const statement = path.get('body').node;
path.get('body').replaceWith(t.blockStatement([guard, statement]));
} else {
path.get('body').unshiftContainer('body', guard);
}
}
The buildGuard used for protection is as follows:
const buildGuard = template(`
if (ITERATOR++ > MAX_ITERATIONS) {
global.infiniteLoopError = new RangeError(
'Potential infinite loop: exceeded ' +
MAX_ITERATIONS +
' iterations.'
);
throw global.infiniteLoopError;
}
`);
Note the use of a global error variable, global.infiniteLoopError, which interrupts the rest of the test run:
// ref: https://github.com/facebook/react/blob/master/scripts/jest/setupTests.js#L56
env.afterEach(() => {
const error = global.infiniteLoopError;
global.infiniteLoopError = null;
if (error) {
throw error;
}
});
At the end of each case, check whether an infinite loop occurred; this prevents the test run from continuing normally when the guard's thrown error was swallowed by an outer catch
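Written out by hand, a loop after the transform above behaves like this runnable sketch: the would-be infinite loop trips the guard instead of hanging the test run.

```javascript
// Hand-written equivalent of a guarded loop: the injected counter check
// turns a hang into a thrown RangeError recorded on a global.
const MAX_ITERATIONS = 5000;
let iterator = 0;

function guardedLoop() {
  while (true) {
    if (iterator++ > MAX_ITERATIONS) {
      global.infiniteLoopError = new RangeError(
        'Potential infinite loop: exceeded ' + MAX_ITERATIONS + ' iterations.'
      );
      throw global.infiniteLoopError;
    }
    // ...loop body that never reaches a break...
  }
}
```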
manual test fixture
Besides the engineered unit tests in the Node environment, test case sets for manual testing in the browser were also created, including:
- Application testing based on WebDriver (at Facebook, "the application" means the main site)
- Manual test cases, for manually verifying DOM-related changes when needed
There are 3 main reasons for not automating browser-environment tests:
- Browser testing tools are not that reliable (flaky); past experience shows they fail to catch many problems
- They would slow down continuous integration, hurt development workflow efficiency, and make CI comparatively fragile
- Automated tests don't always catch DOM problems; for example, the value a browser displays in an input may differ from what DOM properties report
So, unwilling to automate browser tests but wanting to ensure that edge-case handling accumulated during maintenance isn't broken by changes, the team settled on the most effective method: write test cases for the edge cases and verify them manually
Concretely, the fixture app is used to switch React versions manually and observe whether behavior stays consistent across versions and browsers:
The fixture app lets you choose a version of React (local or one of the published versions) which is handy for comparing the behavior before and after the changes.
It looks primitive, but for surfacing DOM-related problems it is the most direct and effective approach, and once these cases accumulate they play a considerable role in quality assurance (DOM-related changes can be made with confidence, instead of reaching the point where nobody dares touch the code), for example:
the DOM attribute handling in React 16 was very hard to pull off with confidence at first. We kept discovering different edge cases, and almost gave up on doing it in time for the React 16 release.
Accumulating valuable manual test cases takes a lot of effort. Besides automating as much as possible through engineering, the plan is to let community members participate easily via a GitHub bot. So such "primitive" practices are not unworkable, and the foreseeable benefit is clear: no fear of big changes
V. Release Tools
npm publish
To standardize and simplify the release process, several things were done:
- Adopt a master + feature-flag branch strategy
- Tool the release process
Previously a stable-branch strategy was used, with manual cherry-picking at release time; cutting a release could take a whole day. It was later changed to publishing directly from master, with unwanted breaking changes removed at build time via feature flags, avoiding the tedious manual cherry-picking
A full set of tooling was built for the release process: steps that can be automated run in sequence; steps that depend on manual work prompt the operator, save progress and exit; once the manual step is done, the tool resumes where it left off. For example:
Automatic: test, build
Manual: changelog, smoke test
Automatic: commit changelog, publish npm package
Manual: GitHub release, update site version, test new release, notify involved teams
In this way, a tooled checklist reduces human error and keeps the release process standardized and consistent
P.S. To make it easy to check the release tool itself, a simulated-release option is provided that skips the actual publish operations and dry-runs the process
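The resumable checklist described above can be sketched as follows (a hypothetical illustration, not React's actual release script): automatic steps execute in order, a pending manual step pauses the run, and a later invocation resumes once that step is marked done.

```javascript
// Run a release checklist: automatic steps run, the first unfinished
// manual step pauses the run, and progress persists in `state` so a later
// call resumes after the human marks the step done.
function runChecklist(steps, state) {
  while (state.next < steps.length) {
    const step = steps[state.next];
    if (step.manual && !state.done.includes(step.name)) {
      return { paused: step.name }; // wait for a human; state keeps progress
    }
    if (!step.manual) step.run();
    if (!state.done.includes(step.name)) state.done.push(step.name);
    state.next++;
  }
  return { paused: null }; // every step completed
}
```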