Is SEO still relevant to user acquisition?

With Google's ever-evolving algorithms, and the way it tries to interpret and classify websites on the web, recent years have seen a shift in the tactics once deployed by search engine optimization specialists.


Once-common tactics such as aggressively building up landing pages and keyword-stuffing pages to try to boost rankings are now seen as outdated. This became more evident with Google's recent Penguin release: such tactics have caused many once-established, high-ranking sites to lose rankings on various keywords and suffer the search engine penalties associated with them.


Other tactics that have fallen foul of the search engines include large link-building campaigns, where links (paid or otherwise) are aggressively sought in bulk, either by buying placements from other websites or by spamming link directories.
The aim now is to avoid such scattergun link-building campaigns and to shift towards fewer, higher-quality links that are relevant and carry more suitable and diverse anchor text.

 

The obsession with that number 1 spot
Under the old understanding of user behavior and how new users are acquired to a website, the obsession was always with obtaining the number 1 spot. It is still worth aiming as high as possible, and at the very least remaining on the first page for any given search term. However, our understanding of how people interact with search results has changed a lot. Most search engines, and Google especially, earn their revenue from the ads they sell, so advertisements have become more prominent at the top of search results. Users have adapted in turn: they are more aware of the prevalence of ads at the top of the page and accept that they may have to scroll further down the result set to find the entries that are most relevant to them.


The main tactic for increasing click-through rate is better use of the page titles and descriptions that are displayed in the search results. These have been shown to have a far greater impact on whether a link is clicked when they are tailored to each page with meaningful titles and descriptions.
Creating a range of landing pages and keywords that are well targeted at long-tail searches is a good move for capturing rankings on more narrowly defined niche search traffic. Higher-quality content pages that dive deep into a topic each bring in only a small percentage of keywords, but cumulatively they cover a far greater range of searches. Gone are the days of creating lots of landing pages with thin content; Google in particular now removes such pages and penalizes sites that rely on this kind of tactic.


Any change made to a website requires long-term planning and commitment, as improvements in search rankings generally take a long time to materialize. The usual timescale for such improvements is around 2 to 6 months, though this may be faster depending on your site's authority and the terms being targeted.

Using the Priority Queue Pattern with our Microsoft Azure Solution

In a recent programming conundrum, I was trying to find a way to promote certain customer queries to the top of the support list ahead of those of a general nature. One limitation of the Azure Service Bus is that you cannot add priority headers to messages being placed onto the queue, so that approach was a non-starter.

Queues work on a FIFO (first in, first out) basis and allow developers to decouple the components that add items to the queue from those that process them.

The queue in general was great for adding items to be processed asynchronously; however, with recent updates to our SLA requirements we were encountering issues where non-urgent requests were creating a bottleneck in our system.

The solution we decided to implement was the Priority Queue pattern on Microsoft Azure, using a few message queues and multiple consumer instances against those queues.

 

The plan was to classify the data on the producer side and, based on that, use a message router to send each message to the corresponding queue. In our example we used three priority types (High, Medium and Low).
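A minimal sketch of what that producer-side routing might look like, assuming the Python azure-servicebus SDK and three hypothetical queue names (the real classification logic would depend on the data being processed):

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Hypothetical queue names - one Service Bus queue per priority level.
QUEUES = {"high": "support-high", "medium": "support-medium", "low": "support-low"}

def route_message(client: ServiceBusClient, payload: str, priority: str) -> None:
    """Send the payload to the queue that matches its priority."""
    queue_name = QUEUES.get(priority, QUEUES["low"])  # unknown priorities fall back to low
    with client.get_queue_sender(queue_name=queue_name) as sender:
        sender.send_messages(ServiceBusMessage(payload))

# Usage (the connection string is assumed to come from configuration):
# client = ServiceBusClient.from_connection_string(CONN_STR)
# route_message(client, '{"ticket": 42, "type": "outage"}', "high")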

In essence each queue functions as normal, with consumers peeking at and completing messages as the producer adds them. The difference, and where the Priority Queue pattern comes into play, is the number of consumers allocated to each queue. For the high priority queue we had 5 instances competing to consume the messages, for the medium queue we had 3, and for the low queue we had one. As a result the high priority queue can handle many more requests, and handle them faster, than the other queues, and therefore provides a far better SLA time and meets expectations.
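The consumer side is simply the competing consumers pattern run against each queue. A sketch, again assuming the Python azure-servicebus SDK, where the 5/3/1 split described above just means running this loop in 5, 3 and 1 instances respectively:

from azure.servicebus import ServiceBusClient

def handle(message) -> None:
    # Placeholder for the real support-ticket processing logic.
    print("processing:", str(message))

def consume(client: ServiceBusClient, queue_name: str) -> None:
    """Competing consumer: receive, process and complete messages from one queue."""
    with client.get_queue_receiver(queue_name=queue_name) as receiver:
        for message in receiver:                 # blocks, yielding messages as they arrive
            handle(message)
            receiver.complete_message(message)   # remove the message from the queue

# Deployment (assumption): 5 instances run consume(client, "support-high"),
# 3 run consume(client, "support-medium") and 1 runs consume(client, "support-low").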

 

[Diagram: priority-queue-separate - one queue per priority level, each with its own pool of consumers]

 

For more information you can read the Microsoft documentation on the pattern: https://docs.microsoft.com/en-us/azure/architecture/patterns/priority-queue

You can also find the implementation example on GitHub.

Headless CMS architecture pattern

Recently I came across a new term while investigating alternatives to our current CMS architecture. CMS systems have been around for many years and, whilst they have served their purpose of adding, updating and delivering content to customers, they were designed primarily for desktop devices. With the rise of mobile and other devices, it has become increasingly important that a CMS solution copes with this shift and also accounts for the scalability issues that come with it.

 

What does a headless CMS provide?

A headless CMS is a back-end-only CMS: it manages content and exposes a RESTful API through which that content can be delivered to any device. This makes content delivery presentation-agnostic and does not tie any particular front end to the back end.
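From the front end's point of view, consuming the CMS then looks like any other API call. A small sketch, assuming a hypothetical endpoint of the form https://cms.example.com/api/content/{slug} that returns JSON:

import json
from urllib.request import urlopen

# Hypothetical headless CMS endpoint - the CMS returns structured content only
# and has no opinion on how the client renders it.
CONTENT_API = "https://cms.example.com/api/content/"

def fetch_article(slug: str) -> dict:
    with urlopen(CONTENT_API + slug) as response:
        return json.load(response)

# A website, mobile app or any other device can call the same API and apply
# its own presentation to the returned fields, e.g.:
# article = fetch_article("welcome-post")
# print(article["title"], article["body"])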

 

What is in a traditional CMS?

  • Provides a method to store and maintain data.
  • Provides a CRUD UI.
  • Provides a presentation layer to display the data.

 

What is in a headless CMS?

  • Provides a method to store and maintain data.
  • Provides a CRUD UI.
  • Provides a RESTful API to the data.

 

Why decouple the CMS?

In traditional monolithic CMS systems, the content management application and the content delivery application exist together in a single application. They provide a solution for simple blogs and basic websites where everything can be managed in one place.

With a decoupled CMS we can separate the content management application from the content delivery application, which frees developers to choose how they want to deliver content to users. It is important to understand that creating content is not the same as delivering it, and to keep that separation clear.

A decoupled CMS promotes a microservices architecture, and you can leverage event-driven message queues to carry content state and updates to the website. With the event-driven approach, when you delete a content element a contentdelete event is raised.
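As a rough sketch of that event-driven flow (the event envelope and publish helper below are assumptions, standing in for whatever message queue client is actually used; the contentdelete name comes from the example above):

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ContentEvent:
    """A hypothetical event envelope published to the message queue."""
    event_type: str
    content_id: str
    occurred_at: str

def publish(event: ContentEvent) -> None:
    # Placeholder for a real queue client (e.g. a Service Bus sender).
    print(json.dumps(asdict(event)))

# Deleting a content element raises a contentdelete event; consumers such as the
# delivery API or a search index can then react to it independently.
publish(ContentEvent("contentdelete", "article-123",
                     datetime.now(timezone.utc).isoformat()))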

Event consumers are then responsible for consuming the changes produced by the event source, and this handling can be optimised independently through the API.

 

What would an architecture look like?

[Diagram: headless CMS architecture]

For a headless CMS system, we could deploy something based upon CQRS:-

CQRS stands for Command Query Responsibility Segregation. It's a pattern that I first heard described by Greg Young. At its heart is the notion that you can use a different model to update information than the model you use to read information.

From: Martin Fowler
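As a minimal sketch of that separation (the class names below are illustrative assumptions, not taken from any particular CMS), commands go through a write model while queries are served from a separate read model:

class ContentReadModel:
    """Handles queries - a denormalised view optimised for reads."""
    def __init__(self) -> None:
        self._view: dict[str, str] = {}

    def apply(self, slug: str, body: str) -> None:
        self._view[slug] = body

    def get_article(self, slug: str) -> str | None:
        return self._view.get(slug)

class ContentWriteModel:
    """Handles commands - the only place where content is created or changed."""
    def __init__(self, read_model: ContentReadModel) -> None:
        self._store: dict[str, str] = {}
        self._read_model = read_model

    def create_article(self, slug: str, body: str) -> None:
        self._store[slug] = body
        # Project the change into the read model (often done asynchronously via events).
        self._read_model.apply(slug, body)

read_model = ContentReadModel()
write_model = ContentWriteModel(read_model)
write_model.create_article("welcome", "Hello from the headless CMS")
print(read_model.get_article("welcome"))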

 

Utilise event sourcing:-

The approach records operations on data as a sequence of events, where each event is appended to an append-only (temporary or permanent) store. When an action takes place, the application records the sequence of command operations that performed it; once stored, these can later be replayed to execute the same series of operations against the data.
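A minimal event sourcing sketch, assuming a simple in-memory append-only store (a real implementation would persist the events and publish them to consumers):

from dataclasses import dataclass

@dataclass(frozen=True)        # events are immutable once recorded
class Event:
    event_type: str
    slug: str
    body: str = ""

class EventStore:
    """Append-only log of everything that has happened to the content."""
    def __init__(self) -> None:
        self._events: list[Event] = []

    def append(self, event: Event) -> None:
        self._events.append(event)

    def replay(self) -> dict[str, str]:
        """Rebuild the current content state by replaying every event in order."""
        state: dict[str, str] = {}
        for event in self._events:
            if event.event_type == "contentcreated":
                state[event.slug] = event.body
            elif event.event_type == "contentdelete":
                state.pop(event.slug, None)
        return state

store = EventStore()
store.append(Event("contentcreated", "welcome", "Hello"))
store.append(Event("contentdelete", "welcome"))
print(store.replay())   # {} - replaying the log shows the article was deleted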

Here's a brief fact sheet of what event sourcing provides:-

  • Events are immutable.
  • Tasks produced can run in the background.
  • Improves performance and stability, as there is no contention over processing transactions.
  • Events represent what has occurred.
  • The append-only nature of the data provides an audit trail, and events can be replayed at any time.
  • Decouples tasks from events, providing flexibility and extensibility.

There are, however, a few issues that also need to be considered:-

  • There is some delay between the handler adding events to the event store, publishing them, and consumers handling them.
  • If you need to undo a change to the data, the only option is to add a compensating event to the event store.

 

You can read more about it here.

https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing