Saying goog-bye to Google Maps Timeline

Oh, I’ve been waiting for this day. The number of people shocked that I was streaming my every movement to Google: at least one (shout out to you!). Those days have officially come to an end.

One of the last Google services I used, it was a journal of sorts. What was I doing on this day in 2014? When was the last time I went to that restaurant? How much time did I really spend at Six Flags? All very, very (very?) important questions.

So when the news came that Timeline as we know it is going away, I finally took the steps to self-host an alternative. Most of the supporting files are at the end.

The stack:

  • Docker:
    • traccar
    • mariadb
  • Python scripts for migrating exported Google Timeline data
  • OwnTracks app for future location data

The server:

  • Spin up the compose YAML after updating the passwords.
  • Traccar doesn’t support authentication or PSK encryption from the OwnTracks app when submitting via HTTP, so use Traefik or SWAG to handle basic auth + SSL for port 5001. There’s no need to expose 8082 on the internet, but wrap that bad boy in some SSL too while you’re at it.
  • I’m using MariaDB to future-proof and skip the headache of corruption and eventual migration from the built-in H2 database.
  • Log in, create a new device with a two-character identifier (TID), and you’re good to go.
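
If you go the Traefik route, something like these v2 labels on the traccar service would do it. To be clear, this is a sketch, not my exact config: the hostname, entrypoint, certresolver name, and the htpasswd hash are all placeholders (generate a real hash with htpasswd and double the $ signs for compose):

```
    labels:
      - traefik.enable=true
      - traefik.http.routers.owntracks.rule=Host(`tracks.example.com`)
      - traefik.http.routers.owntracks.entrypoints=websecure
      - traefik.http.routers.owntracks.tls.certresolver=letsencrypt
      - traefik.http.routers.owntracks.middlewares=owntracks-auth
      - traefik.http.middlewares.owntracks-auth.basicauth.users=me:$$apr1$$xxxxxxxx$$replacewithrealhash
      - traefik.http.services.owntracks.loadbalancer.server.port=5001
```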

Historical Data:

Time to order some takeout: some Google Takeout, that is. Export just your Timeline data to keep it quick, as JSON, not KML, and sit tight.

Importing 15 years of historical data can be… fun. This was my first experiment in teamwork with our dear friend Mr. GPT, and honestly: nice. We now have two scripts: one to import “activity segments” from the export, and another to import “place visits”. They definitely could’ve been consolidated, but I realized the place visits were missing a day later, so….

The data did have to be massaged a little to fit into Traccar. Activity segments from Google show the start time, end time, and the path. I needed this converted into periodic locations with timestamps, so the script divides the total time evenly over the path to guesstimate that conversion. Place visits get two entries each: one at arrival and one at departure.
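
To make that spacing math concrete, here’s a minimal sketch (the segment times and waypoint count are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Invented example: a 20-minute activity segment recorded as 5 waypoints.
start = datetime(2014, 7, 4, 12, 0, tzinfo=timezone.utc)
end = datetime(2014, 7, 4, 12, 20, tzinfo=timezone.utc)
waypoints = ["wp0", "wp1", "wp2", "wp3", "wp4"]

# n waypoints give n-1 intervals, so divide the duration across those.
spacing = (end - start).total_seconds() / max(len(waypoints) - 1, 1)

# Each waypoint gets a synthetic timestamp spread evenly along the segment.
timestamps = [start + timedelta(seconds=i * spacing) for i in range(len(waypoints))]

print(spacing)                # 300.0 → one synthetic fix every 5 minutes
print(timestamps[-1] == end)  # True → the last fix lands on the segment's end time
```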

Honestly, there are duplicate entries. You can clean up the scripts; I might put on my SQL gloves one day and clean up the database. For now, whatever: I got my data out.

Import the activity segments first, then the place visits, and do it before involving the mobile app so you can play around with things and wipe and rebuild the whole server without a care in the world. I imported in chronological order to accurately show the total distance traveled (I scripted a rename of the JSON files from 2010_October to 2010_10, so Python pulls them in order).
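
The rename I mentioned looks roughly like this. Treat the filename pattern as an assumption and adjust to your export’s layout:

```python
import os
import re

# Map month names to zero-padded numbers so sorted() yields chronological order.
MONTHS = {name: f"{num:02d}" for num, name in enumerate(
    ["JANUARY", "FEBRUARY", "MARCH", "APRIL", "MAY", "JUNE",
     "JULY", "AUGUST", "SEPTEMBER", "OCTOBER", "NOVEMBER", "DECEMBER"], start=1)}

def chronological_name(filename):
    """Turn '2010_October.json' into '2010_10.json'; leave other names alone."""
    match = re.match(r"(\d{4})_([A-Za-z]+)\.json$", filename)
    if not match:
        return filename
    year, month = match.group(1), match.group(2).upper()
    return f"{year}_{MONTHS[month]}.json" if month in MONTHS else filename

def rename_exports(folder):
    """Rename every monthly Takeout JSON in `folder` in place."""
    for f in os.listdir(folder):
        new = chronological_name(f)
        if new != f:
            os.rename(os.path.join(folder, f), os.path.join(folder, new))

print(chronological_name("2010_October.json"))  # 2010_10.json
```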

Update the TRACCAR_URL, TID & INPUT_FOLDER in the scripts, pip install requests if you don’t have it, and let it rip.

Future Data:

  1. Point OwnTracks to your public SSL hostname or VPN-protected internal hostname.
  2. Set the TID to match the identifier in Traccar.
  3. Set your UserID & Password to the basic auth creds you created.
  4. Set it to “Significant” mode to save power.
  5. Profit.

The magic files:

docker-compose.yml:

services:
  traccar:
    image: traccar/traccar:latest
    ports:
      - 8082:8082 # UI
      - 5001:5001 # owntracks submission port
    volumes:
      - ./traccar/logs:/opt/traccar/logs:rw
      - ./traccar/traccar.xml:/opt/traccar/conf/traccar.xml:ro
      - ./traccar/data:/opt/traccar/data:rw
    restart: unless-stopped

  mysql:
    image: mariadb:latest
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: <rootdbpw>
      MYSQL_DATABASE: traccar
      MYSQL_USER: traccar
      MYSQL_PASSWORD: <dbpw>
    volumes:
      - ./mysql-data:/var/lib/mysql

traccar.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM 'http://java.sun.com/dtd/properties.dtd'>
<properties>
    <!-- Documentation: https://www.traccar.org/configuration-file/ -->
    <entry key='owntracks.port'>5001</entry>
    <entry key='database.driver'>org.mariadb.jdbc.Driver</entry>
    <entry key='database.url'>jdbc:mariadb://mysql:3306/traccar?allowMultiQueries=true</entry>
    <entry key='database.user'>traccar</entry>
    <entry key='database.password'><dbpw></entry>
</properties>

ImportActivitySegments.py:

import os
import json
import requests
from datetime import datetime, timedelta

# === CONFIGURATION ===
TRACCAR_URL = "http://127.0.0.1:5001"  # Replace with your Traccar server IP or hostname
TID = "test"  # Tracker ID - must match the Traccar device ID
INPUT_FOLDER = "Semantic Location History"  # Folder containing monthly JSON files

def iso_to_datetime(ts_str):
    return datetime.fromisoformat(ts_str.replace("Z", "+00:00"))

def post_to_traccar(lat, lon, timestamp, acc=30):
    payload = {
        "_type": "location",
        "tid": TID,
        "lat": lat,
        "lon": lon,
        "tst": int(timestamp),
        "acc": acc
    }
    headers = {"Content-Type": "application/json"}
    r = requests.post(TRACCAR_URL, headers=headers, json=payload)
    print(f"{datetime.utcfromtimestamp(timestamp)} → {r.status_code}")

def process_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        data = json.load(f)

    count = 0
    skipped = 0

    for entry in data.get("timelineObjects", []):
        seg = entry.get("activitySegment")
        if not seg:
            continue

        waypoints = seg.get("waypointPath", {}).get("waypoints", [])
        if len(waypoints) < 1:
            skipped += 1
            continue

        try:
            start_ts = iso_to_datetime(seg["duration"]["startTimestamp"])
            end_ts = iso_to_datetime(seg["duration"]["endTimestamp"])
        except Exception as e:
            print(f"Invalid timestamps in segment: {e}")
            skipped += 1
            continue

        duration = (end_ts - start_ts).total_seconds()
        if duration < 30:
            skipped += 1
            continue

        spacing = duration / max(len(waypoints) - 1, 1)

        for i, point in enumerate(waypoints):  # include all points, including first
            lat_raw = point.get("latE7")
            lon_raw = point.get("lngE7")
            if lat_raw is None or lon_raw is None:
                continue
            lat = lat_raw / 1e7
            lon = lon_raw / 1e7
            if lat == 0.0 and lon == 0.0:
                continue
            ts = start_ts + timedelta(seconds=i * spacing)
            post_to_traccar(lat, lon, ts.timestamp())
            count += 1

        # Add endLocation
        end = seg.get("endLocation", {})
        lat_raw = end.get("latitudeE7")
        lon_raw = end.get("longitudeE7")
        if lat_raw is not None and lon_raw is not None:
            lat = lat_raw / 1e7
            lon = lon_raw / 1e7
            if lat != 0.0 or lon != 0.0:
                post_to_traccar(lat, lon, end_ts.timestamp())
                count += 1

    print(f"{os.path.basename(filepath)} → Sent {count}, Skipped {skipped}")

def main():
    for filename in sorted(os.listdir(INPUT_FOLDER)):
        if filename.endswith(".json"):
            full_path = os.path.join(INPUT_FOLDER, filename)
            print(f"Processing {filename}...")
            process_file(full_path)

if __name__ == "__main__":
    main()

ImportPlaceVisits.py:

import os
import json
import requests
from datetime import datetime

# === CONFIGURATION ===
TRACCAR_URL = "http://127.0.0.1:5001"  # Replace with your Traccar server IP or hostname
TID = "test"  # Tracker ID - must match the Traccar device ID
INPUT_FOLDER = "Semantic Location History"  # Folder containing monthly JSON files

def iso_to_unix(ts):
    return int(datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp())

def post_to_traccar(lat, lon, timestamp, name=None, acc=30):
    payload = {
        "_type": "location",
        "tid": TID,
        "lat": lat,
        "lon": lon,
        "tst": timestamp,
        "acc": acc,
        "attributes": {
            "type": "placeVisit"
        }
    }
    if name:
        payload["attributes"]["name"] = name
    headers = {"Content-Type": "application/json"}
    r = requests.post(TRACCAR_URL, headers=headers, json=payload)
    print(f"{datetime.utcfromtimestamp(timestamp)} → {r.status_code} {name or ''}")

def process_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        data = json.load(f)

    count = 0
    skipped = 0

    for entry in data.get("timelineObjects", []):
        pv = entry.get("placeVisit")
        if not pv:
            continue

        loc = pv.get("location", {})
        dur = pv.get("duration", {})
        lat_raw = loc.get("latitudeE7")
        lon_raw = loc.get("longitudeE7")

        if lat_raw is None or lon_raw is None:
            skipped += 1
            continue

        name = loc.get("name", "")
        lat = lat_raw / 1e7
        lon = lon_raw / 1e7
        if lat == 0.0 and lon == 0.0:
            skipped += 1
            continue

        try:
            start_ts = iso_to_unix(dur["startTimestamp"])
            end_ts = iso_to_unix(dur["endTimestamp"])
        except Exception as e:
            print(f"Invalid duration format: {e}")
            skipped += 1
            continue

        post_to_traccar(lat, lon, start_ts, name)
        post_to_traccar(lat, lon, end_ts, name)
        count += 2

    print(f"{os.path.basename(filepath)} → Sent {count}, Skipped {skipped}")

def main():
    for filename in sorted(os.listdir(INPUT_FOLDER)):
        if filename.endswith(".json"):
            full_path = os.path.join(INPUT_FOLDER, filename)
            print(f"Processing {filename}...")
            process_file(full_path)

if __name__ == "__main__":
    main()
