
Jamstack SEO Guide [Second Half]

This guide outlines what it takes to succeed at SEO, whether or not you run a Jamstack website.

 

Author: Nebojsa Radakovic, March 9, 2021

 

4. Is there anything else that can affect the performance of the website?

There are a few more points worth checking. Make sure each of them is handled correctly to improve your website's performance.

 

4-1 Performance Budget

When launching a new website or planning a redesign, pay attention to your performance budget. Setting the underlying goals and overall approach early in web development makes it much easier to balance performance against functionality and user experience without compromising either.

 

It was also very helpful during the rebuild of our own website. If you decide to take this approach, start planning with a performance budget calculator.
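
One common way to make a performance budget concrete is a budget file consumed by a tool such as Lighthouse. The sketch below is illustrative only; the paths and numbers are assumptions, not figures from this article (sizes are in kilobytes, timings in milliseconds).

  [
    {
      "path": "/*",
      "timings": [
        { "metric": "interactive", "budget": 3500 },
        { "metric": "first-contentful-paint", "budget": 1800 }
      ],
      "resourceSizes": [
        { "resourceType": "script", "budget": 150 },
        { "resourceType": "image", "budget": 300 },
        { "resourceType": "total", "budget": 800 }
      ],
      "resourceCounts": [
        { "resourceType": "third-party", "budget": 10 }
      ]
    }
  ]

Assuming the Lighthouse CLI, a file like this can be passed with the --budget-path flag, and the report will flag any budget the page exceeds.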

 

4-2 URL Structure, Site Structure, Navigation

Clear URLs, site structure, internal links, and navigation strongly shape how both users and crawlers perceive a website. The larger the site, the more important, impactful, and complex its structure becomes. A few general rules are worth following.

 

Google considers it beneficial when any given page is only a few clicks away from the homepage.

 

Planning the website's structure alongside keyword research lets you build the site's authority and distribute it evenly and efficiently across pages, which increases the likelihood of appearing in search results for all the keywords you care about.

 

Make sure the website menu has clear categories, keywords, and links to the main pages. Pay attention to internal links: link only to topically related pages, and only from passages that relate to the topic. Avoid so-called orphan pages that are not linked from any other page on the website.

 

Finally, use short URLs that lead with keywords and separate words with hyphens for readability. Make sure each URL expresses the content of the page as clearly as possible, as in the example below.
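
A quick illustration with made-up URLs:

  Readable, keyword-first:   https://www.example.com/blog/jamstack-seo-guide/
  Hard to read and to crawl: https://www.example.com/index.php?id=1432&cat=7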

 

4-3 JavaScript

Reduce the amount of JavaScript on the website. It really is that simple. While JS can improve a site's functionality, it can also hurt its performance depending on how it is used.

 

In the new world of Core Web Vitals, JS execution time has the greatest impact on First Input Delay (FID).

 

As a general rule, delay or remove third-party scripts. Improve JS performance and defer non-essential scripts wherever possible. Place JS code below the main content; doing so will not hurt the user experience. Google Tag Manager, for example, can simplify how you manage custom JS.
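
As a minimal sketch of those two techniques (the script URLs are placeholders, not scripts from this article):

  <!-- Non-essential third-party script: defer it so it does not block rendering -->
  <script src="https://example.com/analytics.js" defer></script>

  <!-- Or load it only once the page has finished loading -->
  <script>
    window.addEventListener("load", function () {
      var s = document.createElement("script");
      s.src = "https://example.com/chat-widget.js";
      document.body.appendChild(s);
    });
  </script>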

 

4-4 Images

The most effective way to reduce a page's overall size and improve its loading speed is to optimize images. First, take advantage of lazy loading.

 

Second, use the WebP or AVIF image formats, which are designed to compress better and produce smaller files than JPG (or PNG).

 

The result is a faster website. Optimizing and compressing images and delivering them from a CDN improves your Largest Contentful Paint (LCP) score. Keep in mind that image optimization also touches web design and UX; it is not just a matter of resizing images.
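
For plain HTML output, a sketch combining modern formats with native lazy loading might look like this (file paths, dimensions, and alt text are placeholders):

  <picture>
    <!-- Browsers pick the first format they support; others fall back to the JPG -->
    <source srcset="/img/team.avif" type="image/avif" />
    <source srcset="/img/team.webp" type="image/webp" />
    <!-- width/height reserve space so the layout does not shift;
         loading="lazy" defers images that are below the fold -->
    <img src="/img/team.jpg" alt="The team at work" width="800" height="533" loading="lazy" />
  </picture>

One caveat: avoid lazy-loading the image that is your LCP element above the fold, since deferring it delays LCP.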

 

Most static site generators are working to provide native image-processing solutions. If you are using Gatsby, the gatsby-image package works seamlessly with Gatsby's native image-processing capabilities via GraphQL and Sharp. It not only helps optimize images, but also automatically applies a blur-up effect and lazy loading to images that are not yet on screen. You can also use the newer gatsby-plugin-image (currently in beta) to improve LCP and add AVIF support.
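
A minimal sketch of gatsby-plugin-image usage, assuming a local image file (the path and alt text are placeholders):

  import * as React from "react"
  import { StaticImage } from "gatsby-plugin-image"

  // placeholder="blurred" gives the blur-up effect;
  // formats asks the plugin to also generate WebP and AVIF variants
  const Hero = () => (
    <StaticImage
      src="../images/hero.jpg"
      alt="Hero illustration"
      placeholder="blurred"
      formats={["auto", "webp", "avif"]}
    />
  )

  export default Hero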

 

Since version 10.0.0, Next.js has a built-in Image component and automatic image optimization. Images are lazy-loaded by default, rendered in a way that avoids Cumulative Layout Shift, served in modern formats such as WebP when the browser supports them, and optimized on demand as users request them.
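
A minimal sketch with the Next.js Image component (the file path, alt text, and dimensions are placeholders):

  import Image from "next/image"

  // width and height let Next.js reserve space for the image, avoiding
  // Cumulative Layout Shift; the component lazy-loads by default and
  // serves WebP automatically when the browser supports it
  export default function Screenshot() {
    return (
      <Image
        src="/images/dashboard.png"
        alt="Dashboard screenshot"
        width={1200}
        height={630}
      />
    )
  }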

 

Hugo users can use an image-processing shortcode to resize, lazy-load, and progressively load images, or run an open source tool such as ImageOptim over their images folder. Finally, Jekyll users can take a similar approach or set up Imgbot.

One caveat: don't obsess over performance metrics. For example, if the search results for your niche/topic/keyword are full of pages built around video or flashy animations, it is safe to assume performance scores are a struggle for most of them (admittedly a somewhat fuzzy notion). That does not mean a fast page of plain text and images will always rank higher; it may also fail to convert your target audience. Why? Ranking is a multi-factor game, and performance is just one piece of the puzzle.

5. Indexing and Crawlability

 

No matter how great your content is, it means nothing if search engines cannot properly crawl and index it. Allowing search engines to crawl your website is one thing; ensuring that bots can discover every page that matters, while excluding the pages you don't want them to see, is another.

 

5-1 Robots.txt and XML Sitemap

The robots.txt file tells search bots which files and folders you want them to crawl and which you do not. It is useful for keeping an entire section of a website private (for example, every WordPress website has a robots.txt file that keeps bots out of the admin directory), for keeping images and PDFs out of the index, or for preventing internal search result pages from being crawled and shown in search results.

 

Make sure the robots.txt file sits in the website's top-level directory, points to the location of your sitemap, and does not block any content or sections you want crawled.
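
A minimal example of such a file, with illustrative paths and domain:

  # robots.txt placed at the site root
  User-agent: *
  Disallow: /admin/
  Disallow: /search/

  Sitemap: https://www.example.com/sitemap.xml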

 

A sitemap, on the other hand, is an XML file that gives crawlers valuable information about the structure of a website and its pages: which pages matter, how important they are, when they were last updated, how often they change, and whether alternate language versions exist.

 

A sitemap helps search engine crawlers index pages faster. This is especially useful for websites with thousands of pages or a deep site structure. Once you have created a sitemap, be sure to submit it to the big G through Google Search Console.
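
A minimal sitemap entry following the sitemaps.org protocol looks like this (the URL and dates are placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/blog/jamstack-seo-guide/</loc>
      <lastmod>2021-03-09</lastmod>
      <changefreq>monthly</changefreq>
      <priority>0.8</priority>
    </url>
  </urlset>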

 

Gatsby users can use plugins to create robots.txt and sitemap.xml automatically. Jekyll users can either use the sitemap plugin or follow a tutorial to quickly generate a sitemap by hand; for robots.txt, simply add the file to the root of your project.
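
As a sketch, the commonly used gatsby-plugin-sitemap and gatsby-plugin-robots-txt plugins (assumed here, since the article does not name specific plugins) can be configured in gatsby-config.js; exact options and output paths vary by plugin version:

  // gatsby-config.js
  module.exports = {
    siteMetadata: {
      siteUrl: "https://www.example.com",
    },
    plugins: [
      "gatsby-plugin-sitemap",
      {
        resolve: "gatsby-plugin-robots-txt",
        options: {
          host: "https://www.example.com",
          sitemap: "https://www.example.com/sitemap.xml",
          policy: [{ userAgent: "*", allow: "/" }],
        },
      },
    ],
  }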

 

Hugo ships with a built-in sitemap template, and robots.txt can be generated from a custom template like any other. If you are using Next.js, the easiest and most common approach is to use an existing package to generate the sitemap and robots.txt during the build process.

 

5-2 Duplicate Content, Redirects, and Canonicalization

 

We all want Google to recognize our content as the original. Sometimes, though, that becomes a problem: when a single page is reachable through multiple URLs (HTTP and HTTPS), when original articles are republished on platforms like Medium, or when different pages carry similar content.

 

What problems can this cause, and what should you do about them?

 

Content that is the same, or only slightly different, across pages or websites counts as duplicate content. There is no universal rule for when similar content gets flagged as duplicate; the answer varies and depends on how Google and other search engines interpret it. On e-commerce sites, for example, the same copy often appears on multiple product pages, yet it is rarely treated as duplicate content.

 

However, deliberately reusing the same content across multiple pages or domains can hurt the original page or website.

 

Why? Because it makes it hard for search engines to decide which page is more relevant to a given query. If you do not explicitly tell Google which URL is the original/canonical one, Google may choose for you, and an unexpected page may end up being the one that gets boosted.

 

There are several ways to handle duplicate content, depending on the situation.

 

If duplicate content appears on one or more pages within your website, the best fix is to rewrite it. Where the pages cover the same topic/keyword/product, consider a 301 redirect from the duplicate page to the original. URL redirects are also a useful way to tell search engines about changes to the website's structure.

 

For example, if you change a page's URL structure but want to keep the value of its backlinks, a 301 redirect declares the new URL the successor to the old one.

 

If your website runs on Netlify, you can set up redirects and rewrite rules simply by adding a _redirects file to the root of the public folder. Similarly, if you deploy on Vercel, you can define redirects in a vercel.json file in the root directory. Amazon S3 users can configure redirects through S3's own redirection rules.
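
A minimal sketch of a Netlify _redirects file (the paths are placeholders):

  # _redirects file in the publish folder
  /old-blog-post    /new-blog-post    301

  # Redirect an entire retired section to the blog
  /legacy/*         /blog/:splat      301

The Vercel equivalent is a "redirects" array in vercel.json, where each entry has a source, a destination, and a permanent flag for 301 behavior.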

Another way to deal with duplicate content is to use the rel=canonical attribute in link tags.

 

  <link href="URL OF ORIGINAL PAGE" rel="canonical" />
  

 

There are two ways to use it. The code above points search engines to the original, canonical version of the page; it tells them that the page currently being crawled should be treated as a copy of the specified URL.

 

Alternatively, it can be used as a self-referencing rel=canonical link on the page itself.

 

  <link href="PAGE URL" rel="canonical" />
  

 

In either case, the canonical attribute ensures that the correct page or preferred version of the page is indexed.

 

For example, Gatsby has a simple plugin, gatsby-plugin-canonical-urls, that sets the base URL used for the website's canonical URLs. If you are using Next.js, you can use the next-absolute-url package or opt for Next SEO, a plugin that makes managing SEO easier in Next.js projects.
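
A minimal sketch of the Gatsby plugin's configuration (the siteUrl is a placeholder):

  // gatsby-config.js
  module.exports = {
    plugins: [
      {
        resolve: "gatsby-plugin-canonical-urls",
        options: {
          siteUrl: "https://www.example.com",
        },
      },
    ],
  }

With Next SEO, the equivalent is passing a canonical URL prop to its NextSeo component on each page.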

 

Hugo supports permalinks, aliases, and URL canonicalization, with several options for handling relative and absolute URLs. A canonical URL solution is also available for Jekyll.

 

5-3 Structured Data

 

Search engines such as Google use Schema.org structured data to better understand the content of a page and to display it in rich results.

 

Implementing structured data correctly may not directly affect rankings, but it increases your chances of appearing in the roughly 30 types of rich results that use schema markup.

 

Creating properly structured data is easy. Visit schema.org to find the schema that fits your content, and use Google's Structured Data Markup Helper to walk through the markup process.
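
A minimal JSON-LD sketch for an Article, embedded in the page head (the values are taken from this post for illustration):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Jamstack SEO Guide",
    "author": { "@type": "Person", "name": "Nebojsa Radakovic" },
    "datePublished": "2021-03-09"
  }
  </script>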

 

Structured data is a way to give Google (and other search engines) detailed information about a page, but the biggest challenge is deciding which type to use. Best practice is to stay focused and typically use one top-level type per page.

 

Structured data is most useful for queries whose results show more than just a title and description, such as e-commerce, recipes, and job listings. Take a look at the MOZ article on the subject to figure out which type of structured data is right for you.

 

There are two ways to handle structured data with Jamstack. Most headless CMSs let you manage structured data page by page through custom components, so you are covered there. Alternatively, you can add the schema as part of the template you are using.

 

5-4 Crawl Budget

The crawl budget refers to how much attention search engines pay to your website. If you run a large website with many pages, it matters how search engines prioritize what to crawl, when to crawl it, and how many resources to allocate to crawling; this is what the crawl budget governs. If it is not managed properly, important pages may go uncrawled and unindexed.

 

If you are operating a website with a considerable number of pages (think more than 10,000) or have recently added a new section with many pages that need crawling, crawl budget becomes worth watching; otherwise, you can generally leave it to the search engines' automatic handling.

 

Even so, it is good to know there are things you can do to make the most of your website's crawl budget. Most of them have already been covered: improve website performance, limit redirects and duplicate content, and set up a sound site structure with good internal links.

6. What is Technical SEO?

Today's SEO is a joint effort between developers, UX, product, and SEO specialists. It is about balancing the expectations of potential visitors, search engines, and the business. Done right, it is not just a strategic way to grow website traffic; it improves UX, conversion, and accessibility at the same time.

 

Speed and performance now matter to users and search engines alike, which makes a reliable architecture essential for supporting website performance.

 

Jamstack may be a new way of building websites, but beyond its performance and SEO benefits it offers impressive advantages over traditional stacks, security and scalability chief among them.

 

In Part 2, we will discuss content, on-page, and off-page optimization.

 
