Search Engine Optimization in Google

1.  Introduction

         It is every website owner's dream to have their site recognized and ranked first in Google Search. The process of working toward that goal is called Search Engine Optimization (SEO). SEO is the process of making small modifications to parts of a website that, taken together, can have a significant impact on the site's performance and on the user experience in search engines.

         There are some things that every SEO practitioner needs to know, for example how to help a search engine crawl a website easily, how to help it understand the site's content, and so on. For that, we first have to understand how a search engine works. There are some basic terms in Google Search:

  1. Index : Google stores all web pages that it knows about in its index. The index entry for each page describes the content and location (URL) of that page.
  2. Crawl : The process of looking for new or updated web pages. Google discovers URLs by following links, by reading sitemaps, and by many other means.
  3. Crawler : Automated software that crawls (fetches) pages from the web and indexes them.
  4. Googlebot : The generic name of Google’s crawler. Googlebot crawls the web constantly.

         In this article, the author explains in depth how to optimize a site for Google Search.

2.  Does your site already exist on Google Search?

            That is the first question you have to answer before doing SEO. The easiest way to check is simply to search for your site on Google. If your site doesn’t appear, there are several reasons why Googlebot may have skipped your site during crawling, for example:

  1. The site isn’t well connected from other sites on the web.
  2. You’ve just launched a new site and Google hasn’t had time to crawl it yet.
  3. The design of the site makes it difficult for Google to crawl its content effectively.
  4. Google received an error when trying to crawl your site.
  5. Your policy blocks Google from crawling the site.

         Point 5 sometimes applies when you don’t want people to find a specific URL through Google Search, whether for security reasons, because the page isn’t necessary, and so on. There are three ways to do that:

  1. Create a “robots.txt” file, usually placed at the root of your domain or subdomain, to tell Googlebot not to crawl the pages listed in it.
  2. Use the “noindex” meta tag.
  3. Use an authorization method (this is the best practice).
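As an illustration of the first two options, a minimal “robots.txt” file might look like the sketch below; the paths shown are hypothetical:

```text
# Placed at https://example.com/robots.txt
# Keep Googlebot out of a hypothetical private directory
User-agent: Googlebot
Disallow: /private/

# Keep all other crawlers out of a hypothetical temp directory
User-agent: *
Disallow: /tmp/
```

The “noindex” option is instead a meta tag placed in the <head> of the page itself:

```html
<meta name="robots" content="noindex">
```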
3.  Do the URL and site hierarchy affect the crawling process?

         Of course they do. A search engine needs a unique URL for each content page in order to crawl and index that content, and to direct users to it. A URL is generally split into multiple distinct sections.

         The pattern is protocol://hostname/path/filename?querystring#fragment. For example: https://www.example.com/RunningShoes/Women.htm?size=8#info

         The URL is usually displayed in a Google search result below the document title, so it is important to keep URL names simple. Here are some tips you can try:

  • Use words in URLs
  • Create a simple directory structure
  • Provide one version of a URL to reach a document

         The navigation of a website is important as well. Although Google’s search results are provided at page level, Google also likes to have a sense of what role a page plays in the bigger picture of the site.

         Navigation is also really useful for helping users reach more specific content, because often one root page can’t contain everything, for example: root page -> related topic listing -> specific topic. So you must plan your navigation carefully, starting from your homepage.

         To do that, you have to create simple navigational pages for users. Here are some tips for achieving that:

  • Create a naturally flowing hierarchy
  • Use text for navigation
  • Create a navigational page for users, a sitemap for search engines
  • Show useful 404 pages
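For the search-engine side of the third tip above, a sitemap is a simple XML file listing the URLs of a site. A minimal sketch, using hypothetical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/running-shoes/</loc>
  </url>
</urlset>
```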

4.  Then, what can I do to help Googlebot find my site for crawling and understand my content?

            For optimal rendering and indexing, always allow Googlebot to access the JavaScript, CSS, and image files used by your website; otherwise, it will result in suboptimal rankings. To make it easier for Googlebot to crawl your site, and to give users a nice experience when searching for the content they want, there are several things you can do, for example:

  • Create unique, accurate page titles.

A <title> tag tells both users and search engines what the topic of a particular page is. The <title> tag should be placed within the <head> element of the HTML document.
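A minimal sketch, with hypothetical page content:

```html
<html>
<head>
  <!-- A unique, brief, descriptive title for this page -->
  <title>Example Store - Women's Running Shoes</title>
</head>
<body>
  ...
</body>
</html>
```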

  • Create good titles and snippets in search results.
  1. Accurately describe the page’s content
  2. Create unique titles for each page
  3. Use brief, but descriptive titles
  • Use the “description” meta tag

A page’s description meta tag gives Google and other search engines a summary of what the page is about. It might contain a sentence or two, or even a short paragraph. The description meta tag is placed within the <head> element of your HTML document.
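A sketch of a description meta tag alongside the title, with hypothetical content:

```html
<head>
  <title>Example Store - Running Shoes</title>
  <!-- A one- or two-sentence summary shown as the snippet in results -->
  <meta name="description" content="Shop for women's and men's running shoes.
    Free shipping on hypothetical orders over $50.">
</head>
```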

  • Use heading tags to emphasize important text
  1. Imagine you’re writing an outline.
  2. Use headings sparingly across the page.
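Heading tags form a hierarchy from <h1> down to <h6>, much like an outline; a short hypothetical sketch:

```html
<h1>Running Shoes</h1>
<h2>Women's Running Shoes</h2>
<p>...</p>
<h2>Men's Running Shoes</h2>
<p>...</p>
```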
  • Add structured data markup

Structured data is code that you can add to your site’s pages to describe your content to search engines, so they can understand what’s on your pages and present it in eye-catching ways in search results. The best practices for doing that are:

  • Check your markup using the Rich Results test.
  • Use Data Highlighter
  • Keep track of how your marked up pages are doing

5.  How to manage our appearance in Google Search results?

         Here, we can help Google by providing specific information about our site, which can help our site display in richer features in search results.

         Google uses structured data that it finds on the web to understand the content of the page, to gather information about the web and the world in general, and to enable special search result features and enhancements. For example, if a recipe page’s structured data labels each individual element of the recipe, users can search for that recipe by ingredient, calorie count, cook time, and so on. Structured data is coded using in-page markup on the page that the information applies to, and it should describe the content of that page.

         Google Search supports structured data in the JSON-LD, Microdata, and RDFa formats. As an example, the following is a snippet in the JSON-LD format:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "url": "http://www.example.com",
  "name": "Unlimited Ball Bearings Corp.",
  "contactPoint": {
    "@type": "ContactPoint",
    "telephone": "+1-401-555-1212",
    "contactType": "Customer service"
  }
}
</script>

6.  It’s better to organize our site hierarchy

  • Understand how search engines use URLs

URLs are generally split into multiple distinct sections:

      protocol://hostname/path/filename?querystring#fragment

For example:

      https://www.example.com/RunningShoes/Womens.htm?size=8#info

Google recommends that all websites use https:// when possible. The hostname is where your website is hosted. Google differentiates between the “www” and “non-www” version (for example, “www.example.com” or just “example.com”).

Path, filename, and query string determine which content from your server is accessed. These three parts are case-sensitive, so “FILE” would result in a different URL than “file”. The hostname and protocol are case-insensitive; upper or lower case wouldn’t play a role there.

A fragment (in this case, “#info”) generally identifies which part of the page the browser scrolls to. Because the content itself is usually the same regardless of the fragment, search engines commonly ignore any fragment used.

When referring to the homepage, a trailing slash after the hostname is optional since it leads to the same content (“https://example.com/” is the same as “https://example.com”). For the path and filename, a trailing slash would be seen as a different URL (signaling either a file or a directory), for example, “https://example.com/fish” is not the same as “https://example.com/fish/”.

  • Navigation is important for search engines

The navigation of a website is important in helping visitors quickly find the content they want. It can also help search engines understand what content the webmaster thinks is important. Although Google’s search results are provided at a page level, Google also likes to have a sense of what role a page plays in the bigger picture of the site.

  • Plan your navigation based on your homepage

Unless our site has only a handful of pages, we should think about how visitors will go from a general page (your root page) to a page containing more specific content. Do we have enough pages around a specific topic area that it would make sense to create a page describing these related pages (for example, root page -> related topic listing -> specific topic)?

  • Using ‘breadcrumb lists’

A breadcrumb is a row of internal links at the top or bottom of the page that allows visitors to quickly navigate back to a previous section or the root page. Many breadcrumbs have the most general page (usually the root page) as the first, leftmost link and list the more specific sections out to the right.
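A breadcrumb can be sketched as a simple row of internal links; the pages here are hypothetical:

```html
<nav>
  <!-- Most general page on the left, more specific sections to the right -->
  <a href="/">Home</a> &gt;
  <a href="/running-shoes/">Running Shoes</a> &gt;
  Women's Running Shoes
</nav>
```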

  • Create a simple navigational page for users

Make it as easy as possible for users to go from general content to the more specific content they want on our site. Add navigation pages when it makes sense and effectively work these into our internal link structure. Make sure all of the pages on our site are reachable through links, and that they don’t require an internal “search” functionality to be found. Link to related pages, where appropriate, to allow users to discover similar content.

  • Simple URLs convey content information

Creating descriptive categories and filenames for the documents on our website not only helps you keep your site better organized, it can create easier, “friendlier” URLs for those that want to link to your content. Visitors may be intimidated by extremely long and cryptic URLs that contain few recognizable words.

  • URLs are displayed in search results

Lastly, remember that the URL to a document is usually displayed in a search result in Google below the document title. Google is good at crawling all types of URL structures, even if they’re quite complex, but spending the time to make our URLs as simple as possible is a good practice.

7.  Optimize your content

  • Make your site interesting and useful

Creating compelling and useful content will likely influence our website more than any of the other factors discussed here. Users know good content when they see it and will likely want to direct other users to it. Organic or word-of-mouth buzz is what helps build our site’s reputation with both users and Google, and it rarely comes without quality content.

  • Know what your readers want (and give it to them)

Think about the words that a user might search for to find a piece of your content. Users who know a lot about the topic might use different keywords in their search queries than someone who is new to the topic.

  • Act in a way that cultivates user trust

Users feel comfortable visiting our site if they feel that it’s trustworthy, and a site with a good reputation is trustworthy. Cultivate a reputation for expertise and trustworthiness in a specific area, and provide information about who publishes the site, who provides the content, and what its goals are.

  • Make expertise and authoritativeness clear

Expertise and authoritativeness of a site increases its quality. Be sure that content on our site is created or edited by people with expertise in the topic.

  • Provide an appropriate amount of content for our subject

Creating high quality content takes a significant amount of at least one of the following: time, effort, expertise, and talent/skill. Content should be factually accurate, clearly written, and comprehensive.

  • Avoid distracting advertisements

It’s better not to let advertisements distract users or prevent them from consuming the site content, for example advertisements, supplementary content, or interstitial pages (pages displayed before or after the content the user is expecting) that make it difficult to use the website.

  • Use links wisely
    1. Write good link text
    2. Be careful who you link to
    3. Combat comment spam with “nofollow”
    4. Automatically add “nofollow” to comment columns and message boards
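The “nofollow” tip above is implemented with the rel attribute of a link; a sketch with a hypothetical URL:

```html
<!-- Tells search engines not to pass reputation to the linked page -->
<a href="https://www.example.com/untrusted-page" rel="nofollow">link text</a>
```

To apply it to every link on a page at once, a robots meta tag can be used instead: <meta name="robots" content="nofollow">.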

8.  Optimize your images

  • Use the “alt” attribute

Provide a descriptive filename and an alt attribute description for images. The “alt” attribute allows us to specify alternative text for an image in case it cannot be displayed for some reason. In addition, when an image is used as a link, its alt text is treated similarly to the anchor text of a text link. Optimizing our image filenames and alt text also makes it easier for image search products like Google Image Search to better understand our images.
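A sketch of a descriptive filename and alt text, both hypothetical:

```html
<img src="/images/dalmatian-puppy.jpg" alt="Dalmatian puppy playing fetch">
```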

  • Help search engines find your images

An image sitemap can provide Googlebot with more information about the images found on our site. This increases the likelihood that our images will be found in Image Search results. The structure of this file is similar to the XML sitemap file for our web pages.
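A sketch of an image sitemap entry, using Google’s image sitemap extension namespace and hypothetical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/sample.html</loc>
    <image:image>
      <image:loc>https://example.com/image.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```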

  • Use standard image formats

Use commonly supported file types – most browsers support the JPEG, GIF, PNG, BMP, and WebP image formats. It’s also a good idea to have the extension of your filename match the file type.

9.  Let’s make our site mobile-friendly

  • Choose a mobile strategy

There are multiple ways of making our website mobile-ready, and Google supports different implementation methods:

  1. Responsive web design (Recommended)
  2. Dynamic serving
  3. Separate URLs
  • Configure mobile sites so that they can be indexed accurately

Regardless of which configuration we choose for our mobile site, there are key points that we should take note of:

  1. If we decide to use Responsive Web Design, use the meta name="viewport" tag to tell the browser how to adjust the content. If we use Dynamic Serving, use the Vary HTTP header to signal our changes depending on the user agent. If we are using separate URLs, signal the relationship between the two URLs by adding <link> tags with rel="canonical" and rel="alternate" elements.
  2. Keep resources crawlable. Blocking page resources can give Google an incomplete picture of your website. If Googlebot doesn’t have access to a page’s resources, such as CSS, JavaScript, or images, we may not detect that it’s built to display and work well on a mobile browser.
  3. Avoid common mistakes that frustrate mobile visitors, such as featuring unplayable videos (for example, Flash video as the page’s significant content).
  4. Mobile pages that provide a poor searcher experience can be demoted in rankings or displayed with a warning in mobile search results.
  5. Provide full functionality on all devices. Mobile users expect the same functionality – such as commenting and check-out – and content on mobile as well as on all other devices that our website supports.
  6. Make sure that the structured data, images, videos, and metadata we have on our desktop site are also included on the mobile site.
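Point 1 above can be sketched as follows; the URLs are hypothetical. For Responsive Web Design, the viewport meta tag goes in the <head>:

```html
<meta name="viewport" content="width=device-width, initial-scale=1.0">
```

For separate URLs, the desktop page declares its mobile alternate, and the mobile page points back to the desktop version as canonical:

```html
<!-- On https://www.example.com/page (desktop version) -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On https://m.example.com/page (mobile version) -->
<link rel="canonical" href="https://www.example.com/page">
```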
10.  It’s time to promote our website

Effectively promoting our new content will lead to faster discovery by those who are interested in the same subject. As with most points covered in this document, taking these recommendations to an extreme could actually harm the reputation of our site.

A blog post on our own site letting our visitor base know that we added something new is a great way to get the word out about new content or services. Other webmasters who follow our site or RSS feed could pick the story up as well.

11.  Lastly, let’s analyze our search performance and user behavior

A. Analyzing your search performance

Using Search Console can help us identify issues that, if addressed, can help our site perform better in search results. With the service, webmasters can:

  • See which parts of a site Googlebot had problems crawling
  • Test and submit sitemaps
  • Analyze or generate robots.txt files
  • Remove URLs already crawled by Googlebot
  • Specify your preferred domain
  • Identify issues with title and description meta tags
  • Understand the top searches used to reach a site
  • Get a glimpse at how Googlebot sees pages
  • Receive notifications of quality guidelines violations and request a site reconsideration

B. Analyzing user behavior on our site

If we have improved the crawling and indexing of our site using Google Search Console or other services, web analytics programs like Google Analytics are a valuable source of insight for the traffic coming to our site. We can use these to:

  • Get insight into how users reach and behave on our site
  • Discover the most popular content on our site
  • Measure the impact of optimizations we make to our site, for example, did changing those title and description meta tags improve traffic from search engines?
12.  Appendix: Link URLs used in this paper

The following URLs are referenced in this paper:

https://developers.google.com/search/docs/guides/search-gallery

https://support.google.com/webmasters/answer/7451184?hl=id

Presented by

Arif Rahman Hakim (2001847912)

I Nyoman Aditya Yudiswara (2001847881)

Sani M Isa