Tuning pages for search engines promotes click-through rates from search results. Use it in combination with browser security and website performance. Search bots scan sites to gather information that lives in the source code but is invisible to the user. Pages that lack search data delegate this critical marketing task to the search bot. The chart shows the percentage of websites using each of the search engine fields.
The next diagram displays a default search result containing the page title, URL, and description. These elements are the clickbait that attracts users. The URL ending in seo is a pretty link because it is human-readable, lacks a file extension, and mirrors the page description. Placing the code shown below into the HTML header creates the above result.
<title>SEO | Strategic Mind</title>
<meta name="description" content="Boost webpage ranking by making pages fast, easily indexed by search engines, and populated with Web 3.0 data.">
The title is part of the HTML standard and shows in the browser tab. WordPress and other Content Management Systems automatically insert it into the page. Its maximal display width in search results is 600 pixels, or roughly 50 to 60 characters. The description runs from 100 to 150 characters on mobile and up to about 350 on desktops. So the leading part of the description needs to make sense by itself because the rest may be truncated. Search engines can also rewrite the title or description to better match user requests.
The image tag has several SEO-relevant attributes. Search bots prefer descriptive file names with hyphens separating words because they provide more information. Unlike the page examples, the image URL keeps its file extension. The alt attribute should be present; its content displays when the image fails to load and in text-only browsers. Bots examine images, although mainly to filter inappropriate content. The title attribute is optional, and its text pops up when the mouse hovers over the image. All of these attributes are relevant to SEO.
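A sketch of those attributes together; the file name and text below are illustrative, not from an actual page:

```html
<img src="/images/seo-search-result.png"
     alt="Annotated search result showing the title, URL, and description"
     title="Anatomy of a search result"
     width="600" height="400">
```

The hyphenated file name, the alt text, and the optional title each give the search bot a separate description of the image.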
The logo appears in many locations, including the browser tab, bookmarks, and history. Modern logo links are more functional than the original favicon.ico. They use modern file types, can be much larger, and allow different images for varying display sizes. The logo on the left is for small display areas such as the browser tab. The bigger one on the right is for larger display areas. The code below has the standard as well as Apple logo links for a web page.
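A minimal sketch of those header links, assuming PNG logos at the paths shown:

```html
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="192x192" href="/favicon-192x192.png">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
```

The sizes attribute lets the browser pick the smaller image for the tab and the larger ones for bookmark and home-screen displays.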
Structured data is a modern framework to encode search data. It does not replace the older clickbait. Instead, it stores that information and allows for considerably more detail. The search result shown below is enhanced because it is more prominent and functional. In this case, there is a click-to-phone link. There are multiple options for adding structured data to a page. In most scenarios, JSON-LD (JSON for Linked Data) is the best choice because of its straightforward syntax. The code below presents the company name, address, phone number, and related fields. It fits just before the closing body tag. It can also hold pricing, product descriptions, and many other fields.
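A minimal JSON-LD sketch of such a block; the phone number and address below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Strategic Mind",
  "url": "https://strategicmind.com",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Street",
    "addressLocality": "Anytown",
    "addressRegion": "CA",
    "postalCode": "90210",
    "addressCountry": "US"
  }
}
</script>
```

The telephone field is what enables the click-to-phone link in the enhanced result.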
The image below is a prepopulated Facebook post. The data comes from the URL in the message. Examples are at the bottom of this page, or you can reference the URL in a post. The following chart presents the percentage of websites that include data for social posting. Facebook introduced the Open Graph standard used by most platforms, while Twitter has its own.
: The Open Graph image tag provides the URL for a picture. It should be in landscape mode with an aspect ratio of 1.91:1. The maximal width is 1,200 pixels, although rendering tends to be 300 to 500. The image used by the social platform does not need to be visible on the page.
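A sketch of the Open Graph tags involved; the image path and text are illustrative:

```html
<meta property="og:title" content="SEO | Strategic Mind">
<meta property="og:description" content="Boost webpage ranking with fast, indexable pages.">
<meta property="og:image" content="https://strategicmind.com/images/seo-banner.png">
<meta property="og:url" content="https://strategicmind.com/seo">
```

These go in the HTML header; the social platform fetches them when the URL appears in a post.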
Mobile devices consume over half the bandwidth on the internet, yet 38% of business websites do not support them. Google now indexes sites using mobile, not desktop, pages. A mobile-friendly display has no horizontal scrolling. Also, the page links are at least a finger width apart, which is about 50 pixels. Users can test their site with the Google Mobile-Friendly Tester. Here is the code to make the website aware of device-width. It is a prerequisite for responsive coding, which provides instructions on how the page adapts to changing display areas.
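The standard viewport declaration, placed in the HTML header:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Without it, mobile browsers render the page at a desktop width and scale it down, defeating responsive styles.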
A successful HTTP return code is between 200 and 299. Even when the page looks flawless, an unsuccessful code prevents indexing. The robots file has to allow page scanning to add, update, or delete the index entry. Pages are removed from the index if they carry the noindex tag. The search bot also checks for spamming, plagiarism, and other misleading content.
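For reference, the noindex directive is a meta tag in the HTML header:

```html
<meta name="robots" content="noindex">
```

A bot that is allowed to scan the page will read this tag and drop the page from its index.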
A backlink is an inbound referral from another website. Links from reputable sites have the most value, especially when they are on topic. For example, a Wiki site carries more authority than a local directory because it is on topic and has exceptional standing.
All else being equal, older domains are better because they are less likely to host malware and have had more time to hone their content. The following chart shows the distribution of creation dates across domains.
Website URL management provides the site structure to search bots and other applications seeking to understand the content. It improves page ranking by minimizing SEO errors and supports automated testing of URL links. It involves creating a sitemap, removing broken links, eliminating redirects, and validating meta links, as shown in the next diagram.
A sitemap identifies the URLs marketed to search engines and other applications. The following chart shows the distribution of websites by the number of indexable pages.
Search bots find new web pages using backlinks, other search engines, and other techniques. However, sitemaps accelerate indexing and give site owners more control over the schedule. They render the most benefit for new domains, large numbers of URLs, and pages that are not reachable from the root webpage through HTML links.
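A minimal sitemap sketch with a single illustrative entry; the date is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://strategicmind.com/seo</loc>
    <lastmod>2021-01-01</lastmod>
  </url>
</urlset>
```

Each url element lists one canonical page; the optional lastmod field hints to bots when to rescan.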
The default name for the sitemap is sitemap.xml in the root directory. Administrators can submit the sitemap directly to the larger search consoles. Adding a reference to the /robots.txt file helps all bots find the map. Here is how to add the entry.
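The robots.txt entry is a single line pointing at the map:

```text
Sitemap: https://strategicmind.com/sitemap.xml
```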
Search bots detect broken links even when they are not apparent to the user. They degrade domain authority and project a sloppy image. The following chart shows the distribution of broken internal links across websites. Internal links have the lowest error rate. Backlinks, meta links, search indices, and outbound links have far more errors. For example, 5.2% of the Facebook pointers are dead, which is 80 times worse than internal links. To thoroughly test connections, confirm the following scenarios return an HTTP success code between 200 and 299.
The canonical URL matches the sitemap and page name.
Redirection is in place for all common errors, including protocol mapping, regional domain names, and typos.
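The canonical URL is declared with a link tag in the HTML header:

```html
<link rel="canonical" href="https://strategicmind.com/seo">
```

It tells search engines which one of several duplicate URLs holds the ranking.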
The next commands check internal links using the open-source tools wget and grep. The first one lists the broken links while the second identifies the source files. Replace missing.jpg with the results from your scan. Run the second command for each broken link. If necessary, download the website content to flat files first.
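A sketch of both commands. The crawl must target your own domain, so it is shown as a comment; the grep step is demonstrated on a small local copy created here so the example is self-contained.

```shell
# First command (run against your own site): crawl every internal link and
# log the results; --spider checks URLs without saving the pages, and the
# log ends with a list of broken links.
#   wget --spider --recursive --no-verbose --output-file=crawl.log https://strategicmind.com/
#   grep -B1 'broken' crawl.log

# Second command: find which source files reference a broken link.
# Demonstrated on a local copy; replace missing.jpg with a URL from your scan.
mkdir -p site
cat > site/index.html <<'EOF'
<html><body><img src="missing.jpg" alt="demo"></body></html>
EOF
grep -rl "missing.jpg" site
```

The grep -rl flags search recursively and print only the matching file names, giving the list of pages to fix.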
Redirection maps one URL to another. It enhances website usability by rerouting users and search bots from similar URL names to the correct one. It maintains page ranking, backlinks, and search indexes when URL names change. Those changes can result from usability work, domain transfers, name standardization, and implementing pretty links. The following chart shows the percentage of websites that duplicate all content across protocols and domain prefixes. There are many ways that sites degrade domain ranking through duplication. Also, duplication complicates URL testing because it is unclear which copy holds the search engine ranking and backlinks. The following redirections map hundreds of URLs to a single one that matches the sitemap and canonical URL.
: Websites can have many domain names. It happens during name migrations, to support multiple languages, and to prevent confusion from similar names. Minimizing ambiguity improves usability and reduces fraud. Mapping them back to a single place promotes domain authority and page ranking. Here is an example of domain redirection that comes back to this spot on the page.
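A sketch of domain redirection in Apache .htaccess form, assuming the primary domain is strategicmind.com:

```apache
# Redirect any alternate domain to the primary one, preserving the path.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^strategicmind\.com$ [NC]
RewriteRule ^(.*)$ https://strategicmind.com/$1 [R=301,L]
```

The 301 status transfers the page ranking and backlinks to the primary domain.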
: When users enter a URL without a protocol, the request goes to HTTP. However, most sites use HTTPS. Redirection from one to the other increases usability by reducing the number of error messages users receive. The next link demonstrates how it works.
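A sketch of the protocol redirect in Apache .htaccess form:

```apache
# Send any plain-HTTP request to the HTTPS version of the same URL.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```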
: The www in front of a domain name is the DNS prefix. Websites can redirect any prefix to a single location, including a blank. The feature auto-corrects typos and helps clean up undesired search entries. Use wildcard DNS and SSL certificates to remap any prefix. For example, the DNS for this site includes strategicmind.com, *.strategicmind.com, and *.*.strategicmind.com. The following links were cleared out of the search index because they redirect back to this page.
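A sketch of prefix removal in Apache .htaccess form, which strips www or any other prefix down to the bare domain:

```apache
# Collapse any DNS prefix, including www, onto the bare domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} \.strategicmind\.com$ [NC]
RewriteRule ^(.*)$ https://strategicmind.com/$1 [R=301,L]
```

This relies on the wildcard DNS and SSL certificates mentioned above so that every prefix resolves and serves the redirect.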
: Case sensitivity varies based on the segment of the URL and the underlying technologies. For example, Linux file names are case sensitive while Windows names are not. It can also change based on database configurations and whether the page is dynamic or static. Similar issues happen with search engine indices. The best practice is using lower case. The next example redirects back to here.
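A sketch using Apache's built-in tolower map; note the RewriteMap line must go in the server configuration, not .htaccess:

```apache
# In the server configuration:
RewriteMap lc int:tolower
# In the virtual host, redirect any URL that contains upper case:
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^/?(.*)$ https://strategicmind.com/${lc:$1} [R=301,L]
```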
: A directory listing has a trailing slash on the name. However, web servers interpret it in multiple ways. The handling of the slash on a single website may vary between pages, server directories, and the root URL. Hiding directory listings keeps sites more secure. The best practice is to map directory listings back to the file name. Here is an example.
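A sketch that maps a directory-style URL such as /seo/ back to the file-style name /seo; the root URL keeps its slash:

```apache
# Remove a trailing slash from non-root URLs that are not real directories.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]
```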
: Spelling correction is possible for any part of the URL. Previous sections corrected the protocols, domain names, directories, and character case. Also, it's possible to correct the spelling for the final part of the URL, as shown here.
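A sketch for a single page name, using a hypothetical misspelling of the seo page:

```apache
# Map a commonly misspelled page name back to the canonical one.
Redirect 301 /ceo /seo
```

One Redirect line per known typo keeps the fix visible and easy to audit.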