- 10 Feb 2016
- 7 Min read
Technical SEO: A beginner’s guide
So you’ve heard all the buzz about content marketing, you love doing the outreach and link building and you’ve thrown yourself into paid advertisements, but the world of technical SEO just seems a little bit… well, technical. Fear not, I’m about to disclose some of Glass Digital’s top secrets to get you on your way to being a techie!
Do you really need technical SEO?
Often seen as the driest part of digital marketing, technical search engine optimisation is frequently put off in favour of more exciting tasks. Before I dive into some tips and tricks, let me first explain why the technical aspect is so critical to your overall strategy. Think of your website like a car — it can have a sporty body and you can add as much fuel as you like, but without the mechanics (engine, steering, brakes and suspension), you’re going nowhere. In the same way, if your website isn’t technically sound, users aren’t going to find your amazing content. With that in mind, what can you do to give your digital marketing the best chance of success?
1. Conduct a technical audit
You need to assess the issues before you can begin to solve them. Here at Glass Digital, we run a thorough examination of on-site, off-site and back-end technical issues. The easiest way to do this is to create a template of all the possible technical problems on a site. This should cover everything from smaller issues, such as multiple H1s and internal linking errors, to fundamental problems, such as duplicate content and search engine robots being blocked from crawling your site.
Performing a technical audit manually would be a nightmare, which is why crawling software exists. Our weapon of choice here at Glass Digital is Screaming Frog’s SEO Spider. This fantastic tool will crawl a site in much the same way as a search engine and give you a full list of its URLs. What’s especially handy is that it does a lot of the work for you — there are tabs to look for H1 errors, missing ALT tags, 404 errors and 301 redirects. You can also ask the SEO Spider to ignore the robots.txt file and crawl a website for errors before you allow search engines to index it. Please make sure you get your client’s permission before doing this, as crawling a site can put strain on the server.
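To give a flavour of what crawling tools like the SEO Spider do under the hood, here’s a minimal sketch in Python using only the standard library. It extracts the links and H1 headings from one page’s HTML and flags the multiple-H1 issue mentioned above — the class and function names are my own, not part of any real tool:

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collects links and H1 headings from a single page of HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.h1s = []
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "h1":
            self._in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1 and data.strip():
            self.h1s.append(data.strip())

def audit_page(html):
    """Return the links and H1s on a page, flagging multiple H1s as an audit issue."""
    parser = AuditParser()
    parser.feed(html)
    return {"links": parser.links, "h1s": parser.h1s,
            "multiple_h1s": len(parser.h1s) > 1}

report = audit_page("<html><body><h1>Shop</h1><h1>Shop</h1><a href='/cart'>Cart</a></body></html>")
print(report["multiple_h1s"])  # → True
```

A real crawler would fetch each URL, run a check like this on every page, and follow the discovered links — but the per-page checks are essentially this simple.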
2. Don’t forget about mobile optimisation
Google’s Amit Singhal officially revealed last year that mobile searches have overtaken desktop, so why, I hear you cry, are so many sites still not optimised for mobile? It’s pretty unbelievable, because mobile-friendliness is now a ranking factor you must take seriously. The easiest way to find out whether a site works on a mobile… is to open the site on your mobile. Ah, isn’t it nice when things are that simple? The thing is, that isn’t quite foolproof: a site’s layout may be responsive to your screen but still have other issues, or it may work on your phone and not on others. It’s hard to be 100% sure just by looking. Not to worry, as Google has created a tool for exactly this purpose, simply named the Mobile-Friendly Test. Enter your website in the text field and Google tells you exactly what it finds, including any issues with the HTML and CSS that wouldn’t be obvious at first glance.
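As a concrete illustration, one frequent cause of a failed mobile test is a missing viewport declaration. A responsive page typically carries something like this in its head, paired with CSS media queries (the breakpoint and class name below are purely illustrative):

```html
<!-- Tells mobile browsers to render at the device's width
     rather than a zoomed-out desktop view -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Example media query: hide a desktop-only sidebar on narrow screens */
  @media (max-width: 600px) {
    .sidebar { display: none; }
  }
</style>
```

Without the viewport meta tag, even a site with responsive CSS will often render as a tiny desktop page on phones — exactly the kind of issue the Mobile-Friendly Test will flag.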
3. Check the robots.txt file
A robots.txt file, which implements the robots exclusion protocol, gives instructions to search engines on where they can and can’t crawl your site. The reason you need to check this file is that site owners sometimes block their site from being crawled altogether, often completely by accident. When your site is in development, you don’t want Google and the other search engines to index it before it’s ready, so you’ll want to block them from crawling it. The problem is, some people forget to change this back when they go live — it’s an easy mistake to make, but a very costly one. If you’d like more information on robots.txt files, take a look at this excellent guide by robotstxt.org.
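To make that live/development distinction concrete, here are the two states a robots.txt file typically takes — the costly mistake is shipping the first one to the live site. (These are two alternative versions of the file, not one file; robots.txt always lives at the root of the domain, e.g. example.com/robots.txt.)

```
# Development site: block all compliant crawlers from everything
User-agent: *
Disallow: /

# Live site: an empty Disallow rule permits crawling of the whole site
User-agent: *
Disallow:
```

The difference is a single character, which is exactly why this mistake slips through so easily at launch.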
The robots.txt file is also invaluable for blocking parts of your site you don’t need search engines to crawl. The reason you’ll want to do this is because search engines have a crawl budget; this means they limit how many pages of a site they crawl upon visiting. It’s believed that Google (and most likely the other search engines) decide your crawl budget depending on your site’s authority — the more authoritative your site is, the bigger your crawl budget. Therefore, when you’re building your site’s search engine ranking, you need to block unnecessary pages, such as the internal search results, from being crawled.
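If you want to sanity-check your rules programmatically, Python’s standard library ships a parser for the robots exclusion protocol. This sketch (the paths and rules are hypothetical) confirms that internal search results are blocked while product pages remain crawlable:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: block internal search results for all crawlers
rules = """\
User-agent: *
Disallow: /search/
"""

parser = RobotFileParser()
parser.modified()  # mark the rules as loaded so can_fetch() will evaluate them
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "/search/?q=boots"))   # → False (blocked)
print(parser.can_fetch("*", "/products/boots"))    # → True (crawlable)
```

This is also a handy way to test a rules change before deploying it, rather than discovering after the fact that you’ve blocked a section you wanted crawled.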
4. Do you have HTML & XML sitemaps?
An HTML sitemap is designed to help users understand your website’s navigation, and should always be included on large sites. An XML sitemap is aimed at search engines, and is the more important of the two from an SEO perspective. Search engines use both of these sitemaps to find and crawl all of your site’s links.
Be aware that an XML sitemap should contain no more than 50,000 URLs and be no larger than 50MB. You can split XML sitemaps into separate files for categories, products and images to stay within these limits.
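For reference, splitting your sitemaps is done with a sitemap index file that points to the individual sitemaps — all the URLs and filenames below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-categories.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-products.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-images.xml</loc>
  </sitemap>
</sitemapindex>
```

Each referenced file is then a standard `<urlset>` sitemap listing individual pages, and each must itself stay within the 50,000-URL and 50MB limits.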
5. Avoid duplicate content
This is huge! Google hates duplicate content, as it doesn’t know which version to rank highest, or whether to prioritise one or spread the link metrics between them. Duplicate content can happen both on- and off-site, and you need to tackle both; it’s such a big issue for search engines that a website can receive a penalty for excessive duplicate content.
One of the most common on-site forms of duplicate content comes from the use of facets. For example, a website may have ‘Football’ and ‘Rugby’ sections that both include an ‘Accessories’ page. This leaves you with two pages both targeting the word ‘Accessories’, making them too similar for Google to tell apart. To solve this problem, you need to spell out in the title, meta description, H1 and H2 tags, and content that the pages are for ‘Football Accessories’ and ‘Rugby Accessories’ respectively.
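A quick way to spot this kind of duplication in a crawl export is to group your URLs by their title tag. Here’s a small sketch — the function name and the page data are invented for illustration, but the grouping approach is the same one you’d apply to a real Screaming Frog export:

```python
from collections import defaultdict

def find_duplicate_titles(pages):
    """Group URLs by their <title> text and return any title used by more than one page."""
    by_title = defaultdict(list)
    for url, title in pages:
        by_title[title.strip().lower()].append(url)
    return {title: urls for title, urls in by_title.items() if len(urls) > 1}

# Invented crawl data: two facet pages sharing the same generic title
pages = [
    ("/football/accessories", "Accessories"),
    ("/rugby/accessories", "Accessories"),
    ("/football/boots", "Football Boots"),
]
print(find_duplicate_titles(pages))
# → {'accessories': ['/football/accessories', '/rugby/accessories']}
```

The same grouping trick works for meta descriptions and H1s, giving you a prioritised list of pages whose on-page signals need differentiating.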
Off-site, the content on an “About Us” page will often be reused by journalists and bloggers writing about a business, which means it’s important to regularly update the content on this page to make sure it stays completely unique. You also need to make sure that any product descriptions on your site are original content: if you copy them verbatim from a website such as Amazon, Amazon will simply outrank you.
6. Update your site structure
So your client has a new site: they’ve merged some categories together and moved some products around, and as a consequence their URLs have changed. That might not seem like the end of the world until you realise that Google has indexed and ranked pages that no longer exist (and now return a 404 error). So what should you do? Whenever you update your site’s structure, be sure to implement a 301 redirect from each old page to its new equivalent. You can also use canonical tags on similar pages where you would prefer Google to rank one over the other.
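As a sketch of what that looks like in practice — assuming an Apache server, with invented URLs — a 301 redirect can be added to the site’s .htaccess file:

```
# .htaccess: permanently redirect the old URL to its new equivalent
Redirect 301 /old-football-section/accessories /football/accessories
```

For the canonical-tag approach, the preferred page is declared in the head of each similar page, e.g. `<link rel="canonical" href="https://www.example.com/football/accessories">`, which asks Google to consolidate ranking signals on that one URL. Sites on other servers achieve the same 301 with their own configuration (for instance, a `return 301` directive in nginx).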
7. Prioritise the issues and tackle them accordingly
So you’ve conducted your technical audit and you’re ready to let your client know what needs to be done. Before doing so, create a technical strategy in which you prioritise the biggest issues. Often you’ll find that some issues have a knock-on effect on others, and you’ll need to tackle them in stages. This will also help you visualise the SEO process and communicate with your client more efficiently about what needs to be done and when. There would be nothing worse than your client putting in a month’s worth of work only to realise that another section should have been tackled first, so get your plan in order beforehand.
So there we have it, a few tips and tricks to help you tackle the SEO issues on your website. If you are a website owner and would like our help, please visit our technical SEO page for more info.