Barnebys Analytics (BA) is a micro tracker for events such as clicks, leads and impressions. It follows a microservice design and is built on Micro from Zeit. Google BigQuery is used for storage and is currently the only option.
- `s` = signature *
- `p` = programId *
- `k` = kind *
- `a` = affiliate
- `url` = url
- `d1`-`d5` = dimension1-5
- `sp` = sponsored

\* = mandatory
BA is a flexible tracker: within limits, you define your own rules for what to track as an event.
To track clicks (or any other event you might want to track) you only need the
mandatory parameters. If no URL is specified, the tracker responds with an empty pixel. To
track leads, use the affiliate parameter `a` to drop a cookie; the next time an event occurs
from the same visitor, a lead event is tracked. The default session length is 1h, but it is customizable through
an environment variable. The kind parameter `k` is then used to separate your different events.
To segment your events, specify values for the predefined dimensions (`d1` to `d5`), which can be used
to filter and/or segment your tracked events.
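The cookie and lead behavior described above can be sketched roughly as follows. This is an illustration only, not the actual BA source; the function and cookie names are assumptions, and the session settings map to the environment variables documented in this README.

```javascript
// Illustrative sketch of the lead logic: the affiliate parameter `a`
// drops a session cookie, and a later event from the same visitor
// (while the cookie lives) is recorded as a lead.
const SESSION_NAME = process.env.SESSION_NAME || 'ba_session';
const SESSION_MAX_AGE = Number(process.env.SESSION_MAX_AGE || 3600); // default 1h

// `query` holds the parsed query parameters, `cookies` the parsed cookies.
function classify(query, cookies) {
  if (cookies[SESSION_NAME]) {
    // Same visitor within the session window: record a lead.
    return { kind: 'lead', setCookie: null };
  }
  // First visit: record the event as-is; drop the cookie if `a` is set.
  const setCookie = query.a
    ? `${SESSION_NAME}=1; Max-Age=${SESSION_MAX_AGE}`
    : null;
  return { kind: query.k, setCookie };
}

console.log(classify({ k: 'click', a: '1' }, {}));
// ...and on a later request from the same visitor:
console.log(classify({ k: 'click' }, { [SESSION_NAME]: '1' }));
```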
Click tracking example
analytics.mydomain.com/r/collect?_h=track&p=<programId>&d1=<dimension1>&k=click&url=https://github.com&s=<hash>
Lead tracking example
analytics.mydomain.com/r/collect?_h=track&p=<programId>&k=click&url=https://yoursite.com&a=1&s=<hash>
Pixel tracking example
analytics.mydomain.com/r/collect?_h=track&p=<programId>&k=impression&s=<hash>
BA requires you to sign each URL. This prevents anyone from spoofing or manipulating your tracking events.
BA keeps your tracking links secure by taking the md5 of `${secret} + ${uri}` and appending that hash to the URI as the `s` parameter.
You can deploy Barnebys Analytics to Azure Kubernetes Service (AKS) for scalable, managed container orchestration. Below is a high-level guide to get started:
az aks create --resource-group <ResourceGroup> --name <AKSClusterName> --node-count 2 --enable-addons monitoring --generate-ssh-keys
docker build -t <acr-name>.azurecr.io/ba-analytics:latest -f docker/Dockerfile_prod .
az acr login --name <acr-name>
docker push <acr-name>.azurecr.io/ba-analytics:latest
Apply the Kubernetes manifests in the deploy folder (see deploy/deploy-scripts/ for production and staging examples). Update the image reference in these YAML files to your pushed image as needed.
az aks get-credentials --resource-group <ResourceGroup> --name <AKSClusterName>
kubectl apply -f deploy/deploy-scripts/analytics-prod.yml
For more details, see the Azure AKS documentation.
You can deploy to any Node-compatible machine, but for ease and scalability we suggest using Now.
now
`SECRET` - your secret key for creating hashes
`SESSION_NAME` - your cookie name for sessions
`SESSION_MAX_AGE` - max age for your sessions
`SITE_URL` - optional, used for redirecting invalid requests to your dashboard or site
BA uses streaming inserts so that BigQuery handles the buffering for you. The known limitation is 100,000 writes/second; if you need more than that, you can request an increase from Google in increments of 50,000 per request.
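A streaming insert could look roughly like the sketch below. The dataset, table and column names here are assumptions for illustration, not BA's actual schema.

```javascript
// Sketch of mapping a tracked event's query parameters to a row for a
// BigQuery streaming insert. Names are assumptions, not BA's real schema.
function buildRow(query) {
  return {
    program_id: query.p,
    kind: query.k,
    url: query.url || null,
    dimension1: query.d1 || null,
    timestamp: new Date().toISOString(),
  };
}

// With a configured @google-cloud/bigquery client, the row would be
// streamed (and buffered by BigQuery itself) along these lines:
//   const { BigQuery } = require('@google-cloud/bigquery');
//   await new BigQuery()
//     .dataset('analytics')
//     .table('events')
//     .insert([buildRow(query)]);

console.log(buildRow({ p: '42', k: 'click', url: 'https://github.com' }));
```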