I. What is Graceful Degradation?
If a web page that contains JS code can still be used normally when the user's browser does not support JS (or has JS disabled), so that the site's core functions all work and only the visual effects suffer, then the page is said to degrade gracefully
Graceful degradation is well worth the effort. JS has long had a bad reputation (intrusive ads, pop-ups, and even dark things like XSS), so there is a group of users who habitually disable JS in their browsers. That group may not be large, but as developers we should make our code as complete as we can (much like raising our own children): we should account for this situation and give every user a good experience
If that reason is not convincing enough, one point deserves even more attention: SEO, that is, search engine optimization. If you want your site to rank higher in search results, doing SEO well is essential. Search engine crawlers traditionally do not execute JS code, so a crawler is effectively a user stuck on an old browser with no JS support. That "user" is obviously very important
II. How to Achieve Graceful Degradation?
To degrade gracefully, you only need to follow one principle: progressive enhancement
So-called "progressive enhancement" means first implementing the most basic and important functions with plain, reliable techniques, guaranteeing that the functionality is complete, and only then wrapping the original page in additional presentation layers (better visual effects and user experience). Even if that outer "clothing" is stripped away, the functions remain intact; the page just looks plainer
Completely separating JS code from HTML code is what makes "progressive enhancement" possible: the HTML is the fully functional base layer, and the external JS code is the fancy coat on top (that sounds a lot like CSS, and it is indeed similar; JS is very powerful, but if you rely on it too heavily you invert what is primary and what is secondary)
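A minimal sketch of the idea (the file name enhance.js is hypothetical): the HTML form below is fully usable on its own, and the external script is only an optional enhancement layer:

```html
<!-- The form works with plain HTML: submitting it sends a normal GET request -->
<form action="/search" method="get">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>

<!-- enhance.js might add live suggestions to the input box;
     with JS disabled, the form above still submits normally -->
<script src="./scripts/enhance.js" type="text/javascript"></script>
```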
III. What is Backward Compatibility?
Backward compatibility means that JS code should work with older DOM implementations (some browsers may not support the latest DOM standard, so certain DOM APIs will be unavailable). For example:
The most commonly used DOM APIs may be these:
document.getElementById();
document.getElementsByTagName();
document.getElementsByClassName(); //new in the HTML5 DOM
But some browsers may not support these methods at all, or only support some of them; the page then either becomes inaccessible because the JS code throws errors, or its functionality is no longer complete
An older technique for ensuring backward compatibility was "browser sniffing": asking the browser, via the BOM (typically the navigator.userAgent string), which browser it is, and inferring from that which DOM APIs it supports. Because there are so many browsers, even a single line of JS might need to be wrapped in many layers of sniffing code, which made the code very bloated
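For contrast, a minimal sketch of what user-agent sniffing looked like (the helper name and regex are made up for illustration):

```javascript
// Hypothetical sniffing helper: guess the browser from the userAgent
// string and infer capabilities from that guess. Brittle, because every
// new browser (or a spoofed UA string) breaks the assumptions.
function isOldIE(userAgent) {
  // matches "MSIE 1." through "MSIE 8."
  return /MSIE [1-8]\./.test(userAgent);
}

// Typical usage: branch the whole script on the guess, e.g.
// if (isOldIE(navigator.userAgent)) { /* take the IE-only code path */ }
```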
Browser sniffing of a sort also survives in CSS. Of course, CSS cannot query the BOM, so it takes a more passive approach:
/* set opacity to 0.25 */
filter: alpha(opacity=25); /* IE */
-moz-opacity: 0.25; /* older Firefox */
opacity: 0.25; /* Chrome, Opera, Safari, and other standards-compliant browsers */
By now, DOM code no longer needs browser sniffing to stay backward compatible. We can do this instead:
if (document.getElementById) {
    document.getElementById("someId"); // the method exists, safe to call ("someId" is a placeholder)
}
if (document.getElementsByTagName) {
    document.getElementsByTagName("p");
}
if (document.getElementsByClassName) {
    document.getElementsByClassName("main"); // new in the HTML5 DOM
}
This better approach is called "object detection". It is still not perfect (it cannot be, unless the browser market unifies around one engine), but it is a big improvement over browser sniffing: at the very least, the JS code does not have to change every time a new browser appears on the market. Object detection no longer relies on the BOM; it asks the DOM itself whether the browser supports a given DOM API
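Object detection also lets us supply a fallback when an API is missing. A minimal sketch (the helper name getByClass is invented, and the doc parameter stands in for document so the idea can be exercised anywhere):

```javascript
// If the browser provides getElementsByClassName, use it; otherwise
// emulate it with getElementsByTagName("*") plus a manual class check.
function getByClass(doc, className) {
  if (doc.getElementsByClassName) {
    return doc.getElementsByClassName(className);
  }
  var result = [];
  var all = doc.getElementsByTagName("*"); // every element in the document
  for (var i = 0; i < all.length; i++) {
    // pad with spaces so "main" does not match "mainframe"
    var cls = " " + (all[i].className || "") + " ";
    if (cls.indexOf(" " + className + " ") !== -1) {
      result.push(all[i]);
    }
  }
  return result;
}
```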
IV. JS Performance Optimization Techniques
- Minimize JS access to DOM
- Minimize HTML tags
- Merge JS scripts
- Position of script tags
- Compress JS code
1. Before optimization:
for(var i = 0;i < document.getElementsByTagName("a").length;i++){
    if(document.getElementsByTagName("a")[i].getAttribute("title") == "main"){
        //do something
    }
}
After optimization:
var elems = document.getElementsByTagName("a");
for(var i = 0;i < elems.length;i++){
    if(elems[i].getAttribute("title") == "main"){
        //do something
    }
}
Every call to a DOM method that fetches element objects triggers a search of the DOM tree, which is a comparatively expensive operation. Caching the result in a variable and minimizing DOM access improves performance
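The cost difference can be made visible with a stub document that counts lookups (the counting stub and helper names are invented for this demo; real costs depend on the browser):

```javascript
// A fake document whose getElementsByTagName counts its own calls.
function makeCountingDoc(elements) {
  var doc = {
    calls: 0,
    getElementsByTagName: function (tag) {
      doc.calls++;
      return elements;
    }
  };
  return doc;
}

// Unoptimized: a fresh DOM lookup in the loop condition AND the loop body.
function countMainLinksSlow(doc) {
  var n = 0;
  for (var i = 0; i < doc.getElementsByTagName("a").length; i++) {
    if (doc.getElementsByTagName("a")[i].getAttribute("title") === "main") n++;
  }
  return n;
}

// Optimized: one lookup, cached in a local variable.
function countMainLinksFast(doc) {
  var elems = doc.getElementsByTagName("a");
  var n = 0;
  for (var i = 0; i < elems.length; i++) {
    if (elems[i].getAttribute("title") === "main") n++;
  }
  return n;
}
```

Both versions return the same answer, but the slow one hits the DOM on every loop iteration while the fast one hits it exactly once.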
2. Before optimization:
<div>
    <div>
        <div>
            <div>
                <div>
                    <p>Text</p>
                </div>
            </div>
        </div>
    </div>
</div>
After optimization:
<p> Text </p>
This example may be a bit extreme, but it is enough to illustrate the point: keep the HTML as lean as the intended presentation allows, since every extra element makes the DOM tree larger to build and search
3. Before optimization:
<script src="./scripts/A.js" type="text/javascript"></script>
<script src="./scripts/B.js" type="text/javascript"></script>
<script src="./scripts/C.js" type="text/javascript"></script>
After optimization:
<script src="./scripts/All.js" type="text/javascript"></script>
Each time the browser encounters a script tag that points to an external file while loading a page, it must issue a request to fetch that file. With too many script tags this cost becomes impossible to ignore, so the JS code should be merged into one external file and loaded with a single script tag
4. Before optimization:
<head>
<script></script>
</head>
After optimization:
<body>
    <!-- html code -->
    <script></script>
</body>
In other words, a script tag placed at the end of body loads fastest, and putting it there does not interfere with events such as window.onload firing. The reason lies in the order in which the browser parses HTML: the document is processed top to bottom, so a large script in head makes users wait a long time before any body content appears
5. Before optimization:
var elems = document.getElementsByTagName("p");
for(var i = 0;i < elems.length;i++){
//do something
}
After optimization:
var elems=document.getElementsByTagName("p");for(var i=0;i<elems.length;i++){}
That's right: the optimized code is not meant for people to read. Hard to read as it is, it is much smaller, and a smaller external file downloads faster, which of course improves performance
P.S. There are specialized tools to help us do this work, such as JSMin, etc.