
19 Technical SEO Facts For Amateurs

Technical SEO is an awesome field. There are so many little nuances to it that make it exciting, and its practitioners need excellent problem-solving and critical-thinking skills.

In this blog, we cover some enjoyable technical SEO facts. While they might not impress your date at a dinner party, they will beef up your technical SEO knowledge, and they might help you make your website rank higher in search results.

Let's dive into the list.

1. Web page speed matters

Most people think of slow load times as a nuisance for users, but the consequences go further than that. Page speed has long been a search ranking factor, and Google has even said that it may soon use mobile page speed as a factor in mobile search rankings. (Your visitors will appreciate faster page load times, too.)

Many have used Google's PageSpeed Insights to get an assessment of their site speed and recommendations for improvement. For those looking to improve mobile site performance in particular, Google has a newer, mobile-focused page speed tool. It will test your page load time, check your mobile site on a 3G connection, evaluate mobile usability and more.
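If you prefer to check speed programmatically, PageSpeed Insights also has a public API. A rough sketch (the v5 endpoint and parameters below are current as of this writing, so confirm them against Google's API documentation; example.com stands in for your own URL):

    curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&strategy=mobile"

The JSON response contains the same kind of metrics and optimisation suggestions the web tool shows, which makes it handy for monitoring many pages at once.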

2. Robots.txt files are case-sensitive and must be placed in a site's root directory

The file must be named in all lower case (robots.txt) in order to be recognised. Moreover, crawlers only look in one place when they search for a robots.txt file: the site's root directory. If they don't find it there, they will often simply proceed to crawl, assuming there is no such file.
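To illustrate (example.com is a placeholder, and exact case handling can vary by server):

    https://example.com/robots.txt         recognised
    https://example.com/Robots.txt         likely ignored (wrong case)
    https://example.com/pages/robots.txt   ignored (not in the root directory)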

3. Crawlers cannot always access infinite scroll

And if crawlers can't access it, the page might not rank.

When using infinite scroll on your website, make sure there is a paginated series of pages alongside the single long scroll, and be sure to implement replaceState/pushState on the infinite scroll page. It's a nice little optimisation that most web developers aren't aware of, so be sure to check your infinite scroll for rel="next" and rel="prev" in the code.
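Here is a minimal JavaScript sketch of the replaceState part. It assumes each chunk of results is wrapped in an element carrying a hypothetical data-page attribute, and that paginated URLs like /results?page=2 actually exist; adapt both to your own markup:

    // Update the URL as each "page" of results scrolls into view, so every
    // chunk of the infinite scroll has a crawlable, shareable address.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const page = entry.target.getAttribute('data-page');
          history.replaceState({ page: page }, '', '/results?page=' + page);
        }
      }
    });
    document.querySelectorAll('[data-page]').forEach((el) => observer.observe(el));

The matching paginated pages can then declare their place in the sequence in the <head>:

    <link rel="prev" href="https://example.com/results?page=1">
    <link rel="next" href="https://example.com/results?page=3">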

4. Google doesn't care how you structure your sitemap

As long as it's valid XML, you can build your sitemap however you like; category breakdown and overall structure are up to you and won't affect how Google crawls your site.
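For reference, a minimal valid sitemap looks something like this (URLs are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url><loc>https://example.com/</loc></url>
      <url><loc>https://example.com/blog/first-post</loc></url>
    </urlset>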

5. The noarchive tag won't damage your Google rankings

This tag prevents Google from displaying the cached version of a page in its search results, but it will not negatively affect that page's overall ranking.
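The tag goes in the page's <head>:

    <meta name="robots" content="noarchive">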

6. Google normally crawls your home page first

It's not a rule, but generally speaking, Google usually finds the home page first. An exception would be if there are a lot of links to a specific page within your website.

7. Google scores internal and external links differently

A link to your content or website from a third-party site is weighted differently than a link from your own website.

8. You can check your crawl budget in Google Search Console

Your crawl budget is the number of pages that search engines can and want to crawl on your site in a given amount of time. You can get an idea of yours from the crawl stats in Search Console, and from there you can work to improve it if necessary.

9. Disallowing pages with no SEO value will improve your crawl budget

Pages that aren't important to your SEO efforts usually include privacy policies, expired promotions, and terms and conditions.

My rule is that if the page isn't meant to rank and doesn't have 100% unique, quality content, block it.
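Blocking is done with Disallow rules in robots.txt. A small sketch, with placeholder paths you would swap for your own:

    User-agent: *
    Disallow: /privacy-policy/
    Disallow: /terms-and-conditions/
    Disallow: /promotions/expired/

Note that, as fact 14 below explains, Disallow stops crawling rather than guaranteeing a page stays out of the index.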

10. There is a lot to know about sitemaps

  • XML sitemaps must be UTF-8 encoded.
  • URLs in a sitemap cannot include session IDs.
  • Each sitemap must contain no more than 50,000 URLs and be no larger than 50 MB.
  • A sitemap index file is recommended instead of multiple individual sitemap submissions (see the example after this list).
  • You can use separate sitemaps for different media types: video, images and news.
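A sitemap index is itself a small XML file that points at your individual sitemaps. A minimal sketch with placeholder URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap><loc>https://example.com/sitemap-pages.xml</loc></sitemap>
      <sitemap><loc>https://example.com/sitemap-video.xml</loc></sitemap>
    </sitemapindex>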

11. You can check how Google's mobile crawler 'sees' pages of your website

With Google migrating to a mobile-first index, it's more important than ever to make sure your pages perform well on mobile devices.

Use Google Search Console's Mobile Usability report to find specific pages on your site that may have usability issues on mobile devices. You can also try Google's mobile-friendly test.

12. Half of page one Google results are now HTTPS

Website security is becoming more and more important. Along with the ranking boost given to secure websites, Chrome now warns users when they encounter pages that aren't secure. And it looks like webmasters have responded to these updates: according to Moz, over half of the websites on page one of search results are HTTPS.

13. Try to keep your page load time to two to three seconds

Google Webmaster Trends Analyst John Mueller recommends a load time of two to three seconds (though a longer one won't necessarily hurt your rankings).

14. Robots.txt directives don't (completely) stop your website from ranking in Google

There is quite a lot of confusion over the "Disallow" directive in your robots.txt file. Your robots.txt file simply tells Google not to crawl the disallowed pages/folders/parameters you specify, but that doesn't mean those pages won't be indexed. From Google's Search Console Help documentation:

You should not use robots.txt as a means to hide your web pages from Google Search results. This is because other pages might point to your page, and your page could get indexed that way, bypassing the robots.txt file. If you want to block your page from search results, use another method such as password protection or noindex tags or directives.
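In other words, to reliably keep a page out of search results, let Google crawl it and add a noindex directive to the page itself (and don't also disallow it in robots.txt, or the crawler will never see the tag):

    <meta name="robots" content="noindex">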

15. You can add canonical tags from new domains to your primary domain

This lets you keep the ranking value consolidated on the established domain while using a more modern domain name in marketing materials and other places.
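Each page on the new domain points at its counterpart on the primary domain with a canonical tag (the domains here are placeholders):

    <link rel="canonical" href="https://primary-domain.com/page">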

16. Google recommends keeping redirects in place for at least one year

Because it can take months for Google to recognise that a site has moved, Google's John Mueller has recommended keeping 301 redirects live and in place for at least a year.

Personally, for important pages (say, a page with rankings, links and good authority redirecting to another important page), I recommend you never get rid of the redirects.
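For reference, a 301 on an Apache server can be as simple as one line in .htaccess (the paths are placeholders; nginx and other servers have their own equivalents):

    Redirect 301 /old-page https://example.com/new-page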

17. You can control your search box in Google

Google may sometimes include a search box with your listing. This search box is powered by Google Search and works to show users relevant content within your website.

If desired, you can choose to power this search box with your own search engine, or you can include results from your mobile app. You can also disable the search box entirely using the nositelinkssearchbox meta tag.
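To opt out, add the tag to your home page's <head>:

    <meta name="google" content="nositelinkssearchbox">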

18. You can use the 'notranslate' tag to prevent translation in search

The "notranslate" meta tag tells Google not to offer a translation of the page in other language versions of Google search. It's a good option if you're skeptical about Google's ability to correctly translate your content.
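It's another one-line meta tag:

    <meta name="google" content="notranslate">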

19. You can get your app into Google Search with Firebase app indexing

If you've got an app that you haven't indexed, now's the time. By using Firebase App Indexing, you can allow results from your app to appear when someone who has installed your app searches for a related keyword.
