Verified Data

Built a SaaS product for large-scale company websites, to treat the auditing of analytics data with the same rigour as financial data.

Project team within the company: designer, architect, 2 senior developers and 2 developers.

As a company we grew the right project team to hit the right price point and to execute the build of a system from scratch for the client.

Objective brief

Analytics audits are slow and are often done only yearly or when the business changes. This product raises the game, allowing daily or weekly visibility and vastly cutting the time and risk of being exposed to data errors in the running of large company websites, where the controller may not directly manage the teams or agency needed to make changes. The aim: treat analytics data with the same care as the business finances.

Verified Data brings confidence to your Google Analytics reporting, as the data under the hood is constantly verified and checked for you.

Approach

We workshopped the needs and roadmap of the client and their customers to create a new product and business opportunity for them. This was followed by two-week sprints and weekly calls with the whole team.

We took the research and 10 sections of best practice outlined in Brian Clifton's book Successful Analytics and turned a consultancy service into an automated system.

We broke the project into an MVP and then three phases, growing a Software as a Service (SaaS) product as the best fit for this need: a custom-built application with a scaling backend to process the workload.

Work was also done to model the system's scaling costs, testing three different customer scales against different price points.

Screenshots: the Verified Data Google Analytics auditing SaaS tool, and the Verified Data Laravel website.

Process

To tackle a project of this scale, which involved a high amount of R&D and was not copying an existing system model, we started with a Minimum Viable Product (MVP). This was undertaken over a period of 6-8 weeks as a project in itself.

An MVP focuses on the user being able to complete a set of actions, possibly in a presentation or demo setting. It does not need to look more polished than it works, and it does not always end with the exact list of features you started with. It is a process to demonstrate that the technical approach is viable and that the user can understand it.

The designer, the consultant and myself as architect got together after meeting with the client to agree the base features: login, dashboard, and a single set of API tests that would run and show results in a sample report format. This would prove the viability of the whole process, from acquisition, to setting up and authorising, to retrieving data, then scoring and displaying it: the basis for a single section, out of 15, to be audited and taken away as a report.

After planning, we quickly settled on building the project in Laravel with a scaling backend worker system, an approach that would be flexible, light and powerful. In this rapid build stage it also had most of the elements needed, or packages available, for the integration with third-party systems.
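
As a rough illustration of that worker approach, a queued job in Laravel lets each audit check run in the background while the web request returns immediately. This is only a minimal sketch assuming Laravel's standard queue API; the class and check names are hypothetical, not the project's actual code.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job: one audit check pushed onto the queue so background
// workers absorb the load instead of the web request.
class RunAuditCheck implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $auditId;
    public $checkName;

    public function __construct($auditId, $checkName)
    {
        $this->auditId = $auditId;
        $this->checkName = $checkName;
    }

    public function handle()
    {
        // Fetch data from the Google Analytics API, score it and store
        // the result for the report (omitted in this sketch).
    }
}

// Dispatched from a controller or service:
// RunAuditCheck::dispatch($auditId, 'tracking-code');
```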

We also agreed early on not to build out architecture and systems before we needed to customise them. So we partnered with a cloud provider of the open source project Scrapy to do the spidering and data gathering, and a PhantomJS headless browser cloud platform to do the JavaScript front-end browser testing.

Both of these systems could be brought in-house later if we wanted the extra server infrastructure. They represented the same kind of queue and worker systems we would need, but with extra requirements and the failure modes you expect from such systems, which would have taken time to learn and scale; time we did not need to spend compared to the low third-party cost.

With this successfully completed, we moved on to the main project in milestone stages.

The architecture was kept in a single codebase but designed in a microservice-separated way, allowing the developers to follow similar patterns without overlapping and slowing each other down on a daily basis.
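
One way that separation can look inside a single Laravel codebase (an assumption on my part, not the project's actual interface) is each audit section implementing a shared contract in its own namespace, so sections can be built in parallel:

```php
<?php

namespace App\Audits;

// Hypothetical contract: each audit section lives in its own namespace
// and implements the same small interface, so developers can build
// sections in parallel without touching each other's code.
interface AuditSection
{
    // Machine name of the section, e.g. 'tracking-code'.
    public function name();

    // Run this section's checks for one audit and return scored results.
    public function run($auditId);
}
```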

The design developed, and we assessed when to add or change technologies as we grew. We added Redis for faster state handling about halfway into the project, when we wanted to be able to cancel running audits. As we moved to Docker containers in production, a centralised logging and monitoring system was also needed; otherwise logs would be one set on each machine, hard to follow and hard to manage the storage of.
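
As an example of the kind of fast state Redis gives you here, a cancel flag can be a single key that every job checks before working. This is a minimal sketch assuming Laravel's Redis facade; the key names and helpers are hypothetical.

```php
<?php

use Illuminate\Support\Facades\Redis;

// Hypothetical cancel flag: the web app sets a short-lived key in Redis
// when a user cancels an audit...
function cancelAudit($auditId)
{
    Redis::setex("audit:{$auditId}:cancelled", 3600, 1);
}

// ...and each background job checks the flag before doing any work, so
// a long-running audit stops within one job of the cancel request.
function auditCancelled($auditId)
{
    return (bool) Redis::exists("audit:{$auditId}:cancelled");
}
```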

The architecture grew according to plan, while we took an agile Scrum approach to the smaller deliverables. This was needed as each section of the audit often required going back to research the new data gathering and its display.

With a system of background jobs, logging becomes a vital way to debug and monitor the system's progress.
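
As a small illustration (not the project's actual code), tagging every log entry with the audit it belongs to lets one audit's progress be traced across many workers in a centralised log; this sketch assumes Laravel's Log facade, and the helper name is hypothetical.

```php
<?php

use Illuminate\Support\Facades\Log;

// Hypothetical helper: attach the audit and check identifiers as context
// to every log entry, so a single audit can be followed across machines.
function logAuditStep($auditId, $check, $message)
{
    Log::info($message, [
        'audit_id' => $auditId,
        'check'    => $check,
    ]);
}

// e.g. logAuditStep(42, 'tracking-code', 'Crawl finished, 120 pages fetched');
```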

As the project neared completion of the beta phase, we could train and transition to an internal developer team, visiting Sweden on several occasions to work with and train them in a close environment.

Learnings

A great deal was learnt over the project about how a system like this is the inverse of a traditional marketing website: it serves a few users on the frontend and then simulates thousands on the backend to test their websites.

Through research and practice I solved a scaling issue we had with worker job queues getting very long when many audits ran at once. As there are limits on APIs and on how fast you can hit a client's website under an SLA, we could not simply add more workers. With a single job queue, jobs come off one by one, so more workers were likely to all take jobs for the same website.

Instead we used a token ring pattern with RabbitMQ. This allows any number of queues, each with its own workers, to be attached to a ring. The audit identifier of each job was hashed and so always went to the same queue on the ring, while a new audit would land on a different one. This was a great abstraction, as it took the need to control this division out of the application code and kept it scalable and decoupled: more demand, add more queues to the ring and keep a set number of workers per queue. The result was scale achieved by changing only the queue driver, not the architecture.
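
To show just the routing idea (a simplified sketch, not the production RabbitMQ setup; the queue names are hypothetical), hashing the audit identifier onto a fixed set of queues keeps each audit's jobs together while spreading different audits around the ring:

```php
<?php

// Hypothetical ring of queues; in production these were RabbitMQ queues,
// each served by a fixed pool of workers.
const RING_QUEUES = ['audit-ring-0', 'audit-ring-1', 'audit-ring-2', 'audit-ring-3'];

// Hash the audit identifier to pick a queue: jobs for the same audit
// always land on the same queue, different audits spread around the ring.
function queueForAudit($auditId)
{
    $slot = abs(crc32((string) $auditId)) % count(RING_QUEUES);

    return RING_QUEUES[$slot];
}

// Dispatching a check for an audit to its queue on the ring
// (RunAuditCheck is the hypothetical job class sketched earlier):
// RunAuditCheck::dispatch($auditId, $check)->onQueue(queueForAudit($auditId));
```

The simple modulo above is only the smallest form of the idea; a consistent-hash strategy matters once queues are added to the ring, so that existing audits keep their queue.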

Some more on launching a Laravel SaaS application

Screenshot: an audited website's report in Verified Data.

Client: Search Integration

Link to current Verified Data website