
Discovering Potential SEO Problems: SEO Notes 4

2015-09-12 #SEO #SEO-Audit #Canonical-Redirect #Deep-Links #Keyword-Research #Keyword-Mapping #Keyword-Self-Competition

SEO work is more often carried out on existing sites, whether doing SEO during a redesign or redesigning for SEO. Discovering the potential SEO problems of an existing site is therefore the first step, known in the industry as an "SEO audit".

1. Factors to Focus on in an SEO Audit

1. Usability

Usability does not directly affect SEO, but it affects many other factors, such as conversion rate and whether people will link to the site.

2. Accessibility/Crawlability

Ensure the website is friendly to search engine spiders, and accessibility also directly affects usability.

2. Search Engine Health Check

You can do quick checks through these simple methods:

  • Search site:domain in a search engine (e.g., search site:ayqy.net on Baidu) to see how many pages are in the index, then compare that number with the site's actual page count to see whether they match.

  • Try searching the brand name to make sure it ranks well. If the ranking is low, the site has likely been penalized.

  • Check snapshots to ensure page snapshot versions match the real pages.

Of course, there are more reliable check methods:

  • Keyword Health Check

    Is keyword positioning accurate? Does the website's structure follow the same logic as the way users search for the related keywords? Do multiple pages target the same keywords (i.e., is there self-competition)?

  • Duplicate Content Check

    The first thing to do is ensure that pages without www (e.g., ayqy.net) 301 redirect to the pages with www (www.ayqy.net), or vice versa (this is called a canonical redirect). Generally, the former is more common.

    Then ensure there is no https version duplicating the http pages. Beyond that, duplicate content checks are needed. The method is simple: copy a unique piece of content, wrap it in double quotes and search for it (e.g., search "through the thick heart wall"), and see whether the results contain duplicate pages within the site. It's impossible and unnecessary to check all content; just ensure the most important pages have no duplicates. You can also use operators like inurl: and intitle: to check for duplicate content.

    Also ensure each piece of content corresponds to a unique URL. If that is really hard to guarantee, you should use internal links to indicate which one is the canonical version, but this is definitely not a long-term solution. The situation must ultimately be avoided altogether (e.g., use cookies instead) to prevent self-competition.
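
    As a rough illustration, the canonical-redirect part of this check can be automated. The sketch below uses the Python requests library and the example domain from above; the http scheme and the preferred host are assumptions to adjust for your own site.

```python
import requests

def check_canonical_redirect(bare_host, preferred_host):
    """Verify that the bare host 301-redirects to the preferred (www) host."""
    resp = requests.get(f"http://{bare_host}/", allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and preferred_host in location:
        print(f"OK: {bare_host} -> {location} (301)")
    else:
        print(f"Check this: {bare_host} returned {resp.status_code}, Location: {location or 'none'}")

# Example from the text: the non-www host should 301 to the www version
check_canonical_redirect("ayqy.net", "www.ayqy.net")
```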

  • URL Check

    Ensure clear, short, descriptive URLs are used. Descriptive means containing keywords, but avoid keyword stuffing. Don't add parameters (if you must, keep them to a minimum). Keep URLs as simple as possible so that they're easy for users (and spiders) to understand.

  • Title Tag Check

    Ensure that every page's title tag on the website is unique and descriptive. Ideally, don't waste title space on the brand name. If you must include it, the brand name should be placed at the end of the title, not the beginning, because putting keywords at the very front of the title benefits ranking.

    Additionally, check that title tags are under 70 characters; overly long titles may be treated as keyword stuffing.
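
    A minimal sketch of this check, assuming requests and BeautifulSoup are available; the commented-out URL list is a placeholder you would replace with your own crawl results.

```python
import requests
from bs4 import BeautifulSoup
from collections import defaultdict

def audit_titles(urls, max_length=70):
    """Collect each page's <title>, then report duplicates and over-long titles."""
    pages_by_title = defaultdict(list)
    for url in urls:
        html = requests.get(url, timeout=10).text
        tag = BeautifulSoup(html, "html.parser").title
        title = tag.get_text(strip=True) if tag else ""
        pages_by_title[title].append(url)
        if not title:
            print(f"Missing title: {url}")
        elif len(title) > max_length:
            print(f"Too long ({len(title)} chars): {url} -> {title}")
    for title, pages in pages_by_title.items():
        if title and len(pages) > 1:
            print(f"Duplicate title {title!r} shared by: {pages}")

# audit_titles(["http://www.ayqy.net/", "http://www.ayqy.net/about"])  # placeholder URLs
```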

  • Content Check

    Do the website's main pages have enough content? Do these pages use h tags? From another angle, ensure that pages with little content don't make up a high proportion of the total.
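
    A rough sketch of this check; the 250-word minimum is an arbitrary threshold (not a figure from the text), and whitespace-based word counting is only an approximation for non-English pages.

```python
import requests
from bs4 import BeautifulSoup

def check_content(url, min_words=250):
    """Estimate a page's visible text volume and list its heading tags."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()                      # drop scripts/styles before counting text
    words = len(soup.get_text(" ", strip=True).split())
    headings = [h.name for h in soup.find_all(["h1", "h2", "h3"])]
    print(f"{url}: ~{words} words, headings: {headings or 'none'}")
    return words >= min_words and bool(headings)
```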

  • Meta Tag Check

    Check the pages' meta robots tags. If the tag exists, check carefully whether it restricts spider access to the main content; noindex and nofollow values can completely derail an SEO plan.

    Also ensure each page has a unique meta description tag. If that is really not achievable, consider removing the description tags altogether. Although the meta description is not a ranking factor, it may be used when calculating duplicate content, and search engines frequently use it as the page's snippet in search results, which affects click-through rate.
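
    A minimal sketch of both meta checks (restrictive robots values, duplicate descriptions), again assuming requests and BeautifulSoup; the URL list you pass in would come from your own crawl.

```python
import requests
from bs4 import BeautifulSoup
from collections import defaultdict

def audit_meta_tags(urls):
    """Flag restrictive meta robots values and duplicate meta descriptions."""
    pages_by_description = defaultdict(list)
    for url in urls:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        robots = soup.find("meta", attrs={"name": "robots"})
        content = (robots.get("content", "") if robots else "").lower()
        if "noindex" in content or "nofollow" in content:
            print(f"Restrictive meta robots on {url}: {content}")
        desc = soup.find("meta", attrs={"name": "description"})
        if desc and desc.get("content", "").strip():
            pages_by_description[desc["content"].strip()].append(url)
    for description, pages in pages_by_description.items():
        if len(pages) > 1:
            print(f"Duplicate description shared by {pages}: {description[:60]}...")
```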

  • Sitemap and robots.txt File Check

    Use webmaster tools to check the robots.txt file, and ensure the sitemap includes all pages you want indexed.
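
    A simple cross-check that can complement webmaster tools: parse the sitemap and make sure none of its URLs are blocked by robots.txt. The sitemap path below is an assumption; adjust it for your site.

```python
import requests
import xml.etree.ElementTree as ET
from urllib import robotparser

def check_sitemap_vs_robots(site, sitemap_path="/sitemap.xml"):
    """Report sitemap URLs that robots.txt would block from crawling."""
    rp = robotparser.RobotFileParser(site + "/robots.txt")
    rp.read()
    xml = requests.get(site + sitemap_path, timeout=10).text
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    urls = [loc.text for loc in ET.fromstring(xml).findall(".//sm:loc", ns)]
    blocked = [u for u in urls if not rp.can_fetch("*", u)]
    print(f"{len(urls)} URLs in sitemap, {len(blocked)} blocked by robots.txt")
    for u in blocked:
        print("  blocked:", u)

# check_sitemap_vs_robots("http://www.ayqy.net")  # sitemap location is an assumption
```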

  • Redirect Check

    Check response headers and ensure all redirects are 301s, paying particular attention to CMS defaults and hosting-provider defaults.
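
    A sketch for inspecting a redirect chain hop by hop; it only reports the status codes, it doesn't fix anything.

```python
import requests
from urllib.parse import urljoin

def trace_redirects(url, max_hops=10):
    """Follow redirects manually and flag any hop that is not a 301."""
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code in (301, 302, 303, 307, 308):
            target = urljoin(url, resp.headers.get("Location", ""))
            note = "OK" if resp.status_code == 301 else "WARNING: not a 301"
            print(f"{resp.status_code} {url} -> {target}  ({note})")
            url = target
        else:
            print(f"Final: {url} ({resp.status_code})")
            return

# trace_redirects("http://ayqy.net/")  # example host from above
```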

  • Internal Link Check

    Find pages with too many links. It's recommended that links on a page not exceed 100, although high-trust pages can carry more.

    Ensure the website fully utilizes internal link anchor text (the "gold mine" mentioned earlier), but don't abuse it because it will affect user experience.
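
    A sketch for counting same-host links on a page; the 100-link figure is the recommendation above.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def count_internal_links(url, limit=100):
    """Count links pointing to the same host and warn when they exceed the limit."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    host = urlparse(url).netloc
    internal = [a["href"] for a in soup.find_all("a", href=True)
                if urlparse(urljoin(url, a["href"])).netloc == host]
    flag = " (over the suggested limit)" if len(internal) > limit else ""
    print(f"{url}: {len(internal)} internal links{flag}")
    return internal
```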

  • Avoid Unnecessary Subdomains

    Search engines may not pass a domain's full trust and link weight to its subdomains, because subdomains may be under completely different people's control (e.g., blog platforms, forums), so search engines believe they should be evaluated separately. Moreover, from an implementation perspective, content on subdomains can in most cases easily be placed in subdirectories instead.

  • Geo-targeting

    If the domain targets a specific country, location information should be added to as many pages as possible, and local search results should be checked for problems.

  • External Link Check

    Use backlink tools to check external links and look for anchor-text traps (if a large number of external-link anchor texts contain the same important keyword, search engines may treat it as cheating). On the other hand, also ensure the website's main keywords do appear a certain number of times; keywords entirely missing from external anchor text is not good either, so a suitable balance has to be found.

    Also look at deep links (links to pages other than the homepage), which help improve the ranking of key parts of the website. You should also look at the pages hosting the external links: do they look like paid links, or like spam sites? If a large number of low-quality external links appear, ranking will suffer greatly.

    Finally, ensure there are enough external links and guarantee external link quality as much as possible. These situations can be analyzed by comparing with main competitors.
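
    There is no standard API for backlink data, so the sketch below assumes you have exported backlinks to a CSV with an anchor_text column (a made-up format); it simply summarizes how concentrated the anchor text is.

```python
import csv
from collections import Counter

def anchor_text_distribution(csv_path, top=10):
    """Summarize anchor-text concentration from a backlink export (assumed format)."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):            # assumes an 'anchor_text' column exists
            counts[row["anchor_text"].strip().lower()] += 1
    total = sum(counts.values()) or 1
    for text, n in counts.most_common(top):
        # one keyword dominating the distribution is the anchor-text trap described above
        print(f"{text!r}: {n} links ({n / total:.1%})")
```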

  • Page Load Time

    Is the page load time too long? If so, it will affect crawling and indexing. Of course, it only needs close attention once it reaches about 5 seconds or more.
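
    A rough timing sketch; it measures only the HTML download, not a full browser render, and uses the 5-second figure above as the threshold.

```python
import time
import requests

def measure_load_time(url, threshold=5.0):
    """Time the HTML download and flag anything at or above the threshold."""
    start = time.monotonic()
    requests.get(url, timeout=30)
    elapsed = time.monotonic() - start
    note = "needs attention" if elapsed >= threshold else "ok"
    print(f"{url}: {elapsed:.2f}s ({note})")
    return elapsed
```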

  • Image alt Tags

    Do all images have alt text and filenames containing keywords? Search engines still rely heavily on this information to identify image content, and it is a legitimate place to include keywords.
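
    A small sketch that lists images without alt text; checking whether filenames contain keywords is left out because it depends on your keyword map.

```python
import requests
from bs4 import BeautifulSoup

def find_images_missing_alt(url):
    """List <img> tags whose alt attribute is missing or empty."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    missing = [img.get("src", "(no src)") for img in soup.find_all("img")
               if not img.get("alt", "").strip()]
    for src in missing:
        print(f"Missing alt text: {src}")
    return missing
```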

  • Code Quality

    Although search engines don't require W3C validation, checking the code is still worthwhile; poor code may have negative effects.

3. Importance of Keyword Check

Keyword check includes the following steps:

1. Keyword Research

The sooner the better. Keywords drive page SEO; determine which words to use as keywords as early as possible.

2. Website Structure

Website structure design can be complex. It needs to be checked against both the keyword research and the existing website (modifying as little as possible), and can be approached from a sitemap perspective.

A hierarchical structure is needed that can guide users to every page where conversion (making money) can happen. The ideal structure makes the parent pages of the money-making pages rank as high as possible for the related keywords. The hierarchy becomes very complex when geographic locations are involved. The ideal situation is to design a single hierarchy that appears natural to users and maps to the closest keywords. Of course, if users can reach a product through multiple search paths, building such a hierarchy will be difficult.

3. Keyword Mapping Distribution

After the keyword list and overall structure are understood, the main related keywords need to be mapped to URLs. During this process it's easy to discover which pages are unrelated to any target keyword and which keywords have no corresponding page at all. All unrelated pages should be deleted. If needed, go back to the previous step and adjust the website structure (an appropriate structure should allow a natural keyword mapping).
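
In practice the mapping can be kept as a simple keyword-to-URL table; the sketch below (with made-up keywords and paths) shows how the two gaps mentioned above fall out of it: keywords without a page, and pages without a target keyword.

```python
# Hypothetical keyword-to-URL map; None marks a keyword with no page yet
keyword_map = {
    "seo audit": "/seo-audit",
    "canonical redirect": "/canonical-redirect",
    "deep links": None,
}
# Hypothetical list of pages actually on the site
site_pages = {"/seo-audit", "/canonical-redirect", "/old-news"}

keywords_without_page = [k for k, url in keyword_map.items() if url is None]
pages_without_keyword = site_pages - {url for url in keyword_map.values() if url}

print("Keywords without a page:", keywords_without_page)
print("Pages without a target keyword:", pages_without_keyword)
```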

4. Website Check

If the keyword mapping is done well, the website check becomes relatively easy: when checking title tags and h tags, the keyword mapping can be used as a reference.

4. Keyword Self-Competition

Keyword self-competition occurs when multiple pages target the same keywords, such as multiple pages having the same target keywords in title and h tags.

If spiders find a bunch of pages on a site with the same keywords, it won't make search engines think the site is more relevant to that keyword; instead, it forces them to make a choice among those pages. This is self-competition (internal competition).

Self-competition brings the following negative effects:

  • Reduces value of internal anchor text

  • Dilutes value passed from external links (external links point to multiple pages with the same theme, not concentrated)

  • Reduces content quality (after writing 3-4 articles on the same theme, content value will decrease)

  • Reduces conversion rate

Methods to eliminate self-competition: delete the extra pages and 301 redirect them to the target page (to avoid 404s).
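
To see where the competition is before deciding what to delete and redirect, the target keywords can simply be grouped by page; the sketch below uses made-up pages and keywords.

```python
from collections import defaultdict

# Hypothetical page -> target keyword assignments (e.g., taken from titles and h1 tags)
page_keywords = {
    "/seo-audit": "seo audit",
    "/audit-checklist": "seo audit",        # two pages targeting the same keyword
    "/canonical-redirect": "canonical redirect",
}

pages_by_keyword = defaultdict(list)
for page, keyword in page_keywords.items():
    pages_by_keyword[keyword].append(page)

for keyword, pages in pages_by_keyword.items():
    if len(pages) > 1:
        print(f"Self-competition on '{keyword}': {pages} "
              f"-- keep one page and 301 the others to it")
```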

5. Server and Host Problems

Such problems occur rarely, but they can greatly affect SEO, for example:

  • Server timeout

  • Response time too long

  • Shared IP address (are there spam websites on the shared IP address?)

  • Blocked IP address (the IP may even be surrounded by spam, in which case search engines simply block the whole IP range)

  • Spider detection and handling (restrictions aimed at users may accidentally block spiders)

  • Bandwidth and transfer limits (traffic caps that directly make the site inaccessible)

  • Server geographical location (local search optimization)

Reference Materials

  • "The Art of SEO"
