
InvisaGig Telemetry System – Complete Installation Guide

Created by: Richard
Email: [email protected]
Last update: November 12th, 2025

Quick Start Overview

What This Guide Will Do

This guide will walk you through setting up a complete telemetry monitoring system for your InvisaGig device in approximately 30-45 minutes. By the end, you’ll have:

  • Automated data collection every 60 seconds from your InvisaGig device
  • PostgreSQL database storing all network metrics and performance data
  • Grafana visualization dashboard accessible from any browser
  • Historical data tracking for signal strength, temperature, data usage, and more
  • Support for both local network and Tailscale connections

The 7-Step Process

  1. Install Dependencies (5 min) – Docker, Python, and system packages
  2. Configure Project (2 min) – Set your InvisaGig IP and admin password
  3. Create Docker Setup (1 min) – Configure containerized services
  4. Build Collector (1 min) – Set up data collection service
  5. Configure Database (1 min) – Create PostgreSQL schema
  6. Set up Grafana (1 min) – Configure visualization datasource
  7. Launch System (5 min) – Start all services and verify operation

What You’ll Need Before Starting

  • InvisaGig device IP address (find this in your device settings)
  • Admin password (you’ll create this during setup)
  • Terminal access to your Debian server (SSH or console)
  • Port 3000 available for Grafana web interface

How to Use This Guide (a YouTube walkthrough is also available)

Simply copy and paste! Each step contains complete code blocks marked with 📋 that you can copy directly into your terminal. No manual typing required – just follow the steps in order and paste the commands.

⚡ Pro Tip: Use a terminal client that supports copy/paste (like PuTTY, Terminal, or MobaXterm) for the smoothest experience. Have your InvisaGig device IP ready before starting Step 2.

Requirements

  • Fresh Debian Linux installation
  • Root or sudo access
  • InvisaGig device IP address (local or Tailscale)
  • Network connectivity
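Optionally, the requirements above can be sanity-checked before Step 1. A minimal sketch using only standard Debian tools (these exact checks are a suggestion, not part of the official setup):

```shell
# Report the OS family (this guide assumes Debian)
os_id=$(. /etc/os-release 2>/dev/null; echo "${ID:-unknown}")
echo "Detected OS: $os_id"

# Check whether anything is already listening on Grafana's port 3000
port_in_use=no
if command -v ss >/dev/null 2>&1 && ss -ltn 2>/dev/null | grep -q ':3000\b'; then
    port_in_use=yes
fi
echo "Port 3000 in use: $port_in_use"
```

If port 3000 is already taken, free it or plan to change the `3000:3000` mapping in Step 3.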

Step 1: Install System Dependencies

📋 Copy and paste this entire block into your terminal:

				
					sudo bash << 'INSTALL_EOF'
set -e

echo "Updating system..."
apt update && apt upgrade -y

echo "Installing dependencies..."
apt install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release \
    git \
    python3 \
    python3-pip

echo "Adding Docker repository..."
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

echo "Installing Docker..."
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

echo "Installing Docker Compose..."
curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

echo "Installing Tailscale..."
curl -fsSL https://tailscale.com/install.sh | sh

systemctl enable docker
systemctl start docker

echo "Installation complete!"
echo "IMPORTANT: Run 'tailscale up' next to connect to your Tailscale network"
INSTALL_EOF
				
			

After Step 1: Connect to Tailscale

IMPORTANT: If your InvisaGig device is on Tailscale, run this command now:

📋 Copy and paste:

				
					tailscale up
				
			

Follow the prompts to authenticate and connect to your Tailscale network.
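Tailscale assigns addresses from the 100.64.0.0/10 CGNAT range, which is why Step 2 describes Tailscale IPs as 100.x.x.x. If you are not sure whether the address you have is a Tailscale IP, a quick standard-library check (this helper is illustrative, not part of the installed system):

```python
import ipaddress

# Tailscale hands out addresses from the CGNAT block 100.64.0.0/10
TAILSCALE_RANGE = ipaddress.ip_network("100.64.0.0/10")

def is_tailscale_ip(ip: str) -> bool:
    try:
        return ipaddress.ip_address(ip) in TAILSCALE_RANGE
    except ValueError:
        return False  # not a valid IP address at all

print(is_tailscale_ip("100.101.102.103"))  # True
print(is_tailscale_ip("192.168.1.50"))     # False
```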


Step 2: Create Project Structure

📋 Copy and paste this entire block (you’ll be prompted for your InvisaGig’s IP address and the password you want to use for Grafana):


				
					#!/bin/bash
set -e

echo "=== InvisaGig Telemetry Setup ==="
echo ""
echo "Enter the IP address of your InvisaGig device"
echo "This can be a local IP (192.168.x.x) or Tailscale IP (100.x.x.x)"
read -p "InvisaGig IP address: " DEVICE_IP

echo ""
echo "Choose a secure password for the database and Grafana"
echo "You will use this password to login to Grafana"
read -s -p "Admin password: " ADMIN_PASSWORD
echo ""

PROJECT_DIR="/opt/invisagig-telemetry"
mkdir -p $PROJECT_DIR/{collector,grafana/{dashboards,datasources}}
cd $PROJECT_DIR

cat > .env << EOF
POSTGRES_DB=invisagig_telemetry
POSTGRES_USER=invisagig_user
POSTGRES_PASSWORD=$ADMIN_PASSWORD
INVISAGIG_URL=http://$DEVICE_IP/telemetry/info.json
COLLECTION_INTERVAL=60
GRAFANA_ADMIN_PASSWORD=$ADMIN_PASSWORD
TZ=America/Chicago
EOF

echo "Project structure created at $PROJECT_DIR"
echo "Password saved for Grafana login"
				
			

Step 3: Create Docker Configuration

📋 Copy and paste this entire block:


				
					cd /opt/invisagig-telemetry

cat > docker-compose.yml << 'EOF'
services:
  postgres:
    image: postgres:15-alpine
    container_name: invisagig_postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      TZ: ${TZ}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - invisagig

  collector:
    build:
      context: ./collector
      dockerfile: Dockerfile
    container_name: invisagig_collector
    restart: unless-stopped
    environment:
      - DB_HOST=postgres
      - DB_NAME=${POSTGRES_DB}
      - DB_USER=${POSTGRES_USER}
      - DB_PASSWORD=${POSTGRES_PASSWORD}
      - INVISAGIG_URL=${INVISAGIG_URL}
      - COLLECTION_INTERVAL=${COLLECTION_INTERVAL}
      - TZ=${TZ}
    depends_on:
      postgres:
        condition: service_healthy
    volumes:
      - collector_logs:/app/logs
    networks:
      - invisagig

  grafana:
    image: grafana/grafana-oss:latest
    container_name: invisagig_grafana
    restart: unless-stopped
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
      - GF_LOG_LEVEL=info
      - TZ=${TZ}
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/datasources:/etc/grafana/provisioning/datasources
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - invisagig

volumes:
  postgres_data:
  grafana_data:
  collector_logs:

networks:
  invisagig:
    driver: bridge
EOF

echo "Docker configuration created"
				
			

Step 4: Create Collector Files

📋 Copy and paste this entire block:


				
					cd /opt/invisagig-telemetry

cat > collector/Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && apt-get install -y gcc libpq-dev curl && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY collector.py .
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh && mkdir -p /app/logs
ENV PYTHONUNBUFFERED=1
ENTRYPOINT ["./entrypoint.sh"]
EOF

cat > collector/requirements.txt << 'EOF'
requests>=2.31.0
psycopg[binary]>=3.1.0
EOF

cat > collector/entrypoint.sh << 'EOF'
#!/bin/bash
echo "InvisaGig Collector Starting..."
echo "Device: $INVISAGIG_URL"
echo "Interval: ${COLLECTION_INTERVAL}s"
sleep 10
exec python collector.py
EOF
chmod +x collector/entrypoint.sh

echo "Collector files created"
				
			

Step 5: Create Collector Script

📋 Copy and paste this entire block:


				
					cd /opt/invisagig-telemetry

cat > collector/collector.py << 'EOF'
#!/usr/bin/env python3
"""
InvisaGig Telemetry Collector
Created by: didneyworl with Claude Opus 4.1
Email: didney@netsolution.shop
"""
import requests, json, logging, time, sys, os, re
from datetime import datetime
try:
    import psycopg as psycopg2
except ImportError:
    import psycopg2

DB_CONFIG = {
    'host': os.getenv('DB_HOST', 'postgres'),
    'port': 5432,
    'dbname': os.getenv('DB_NAME'),
    'user': os.getenv('DB_USER'),
    'password': os.getenv('DB_PASSWORD'),
}

INVISAGIG_URL = os.getenv('INVISAGIG_URL')
COLLECTION_INTERVAL = int(os.getenv('COLLECTION_INTERVAL', '60'))

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('/app/logs/collector.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

def safe_get(data, *keys, default=None):
    try:
        result = data
        for key in keys:
            if result is None:
                return default
            result = result.get(key) if isinstance(result, dict) else default
        return default if result in ["null", "", " "] else result
    except Exception:
        return default

def parse_temperature(temp_str):
    if not temp_str or temp_str == "null":
        return None
    try:
        return int(float(str(temp_str).lower().replace('c', '').replace('°', '').strip()))
    except (ValueError, TypeError):
        return None

def fix_json(raw_text):
    raw_text = re.sub(r':\s*,', ': null,', raw_text)
    raw_text = re.sub(r':\s*\n\s*}', ': null\n}', raw_text)
    raw_text = re.sub(r':\s*}', ': null}', raw_text)
    raw_text = re.sub(r':\s*]', ': null]', raw_text)
    return raw_text

def wait_for_database():
    for i in range(30):
        try:
            conn = psycopg2.connect(**DB_CONFIG)
            conn.close()
            logger.info("Database connected")
            return True
        except Exception as e:
            logger.info(f"Waiting for database... ({i+1}/30)")
            time.sleep(2)
    return False

def create_tables():
    try:
        conn = psycopg2.connect(**DB_CONFIG)
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS telemetry_data (
                id SERIAL PRIMARY KEY,
                timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                company VARCHAR(255), model VARCHAR(50), modem VARCHAR(50),
                ig_version VARCHAR(50), fw_version VARCHAR(255),
                temp_c INTEGER, carrier VARCHAR(100), network_mode VARCHAR(50),
                lte_str INTEGER, nsa_str INTEGER, sa_str INTEGER,
                lte_snr INTEGER, nsa_snr INTEGER, sa_snr INTEGER,
                lte_band INTEGER, nsa_band INTEGER, sa_band INTEGER,
                sim1_total_mbytes DECIMAL(10,2), sim2_total_mbytes DECIMAL(10,2),
                sim1_tx_mbytes DECIMAL(10,2), sim1_rx_mbytes DECIMAL(10,2),
                sim2_tx_mbytes DECIMAL(10,2), sim2_rx_mbytes DECIMAL(10,2),
                up_time BIGINT,
                car_agg_lte JSONB, car_agg_nr5g JSONB,
                raw_json JSONB
            )
        """)
        cur.execute("CREATE INDEX IF NOT EXISTS idx_timestamp ON telemetry_data(timestamp DESC)")
        conn.commit()
        cur.close()
        conn.close()
        logger.info("Database tables ready")
        return True
    except Exception as e:
        logger.error(f"Table creation error: {e}")
        return False

def collect_data():
    try:
        response = requests.get(INVISAGIG_URL, timeout=30)
        response.raise_for_status()
        
        raw_text = response.text
        fixed_text = fix_json(raw_text)
        data = json.loads(fixed_text)
        
        conn = psycopg2.connect(**DB_CONFIG)
        cur = conn.cursor()
        
        values = [
            safe_get(data, 'device', 'company'),
            safe_get(data, 'device', 'model'),
            safe_get(data, 'device', 'modem'),
            safe_get(data, 'device', 'igVersion'),
            safe_get(data, 'device', 'fwVersion'),
            parse_temperature(safe_get(data, 'timeTemp', 'temp')),
            safe_get(data, 'activeSim', 'carrier'),
            safe_get(data, 'activeSim', 'networkMode'),
            safe_get(data, 'lteCell', 'lteStr'),
            safe_get(data, 'nsaCell', 'nsaStr'),
            safe_get(data, 'saCell', 'saStr'),
            safe_get(data, 'lteCell', 'lteSnr'),
            safe_get(data, 'nsaCell', 'nsaSnr'),
            safe_get(data, 'saCell', 'saSnr'),
            safe_get(data, 'lteCell', 'lteBand'),
            safe_get(data, 'nsaCell', 'nsaBand'),
            safe_get(data, 'saCell', 'saBand'),
            safe_get(data, 'dataUsed', 'SIM1', 'totalMBytes'),
            safe_get(data, 'dataUsed', 'SIM2', 'totalMBytes'),
            safe_get(data, 'dataUsed', 'SIM1', 'txMBytes'),
            safe_get(data, 'dataUsed', 'SIM1', 'rxMBytes'),
            safe_get(data, 'dataUsed', 'SIM2', 'txMBytes'),
            safe_get(data, 'dataUsed', 'SIM2', 'rxMBytes'),
            safe_get(data, 'timeTemp', 'upTime'),
            json.dumps(safe_get(data, 'carAgg', 'lte', default=[])),
            json.dumps(safe_get(data, 'carAgg', 'nr5g', default=[])),
            json.dumps(data)
        ]
        
        cur.execute("""
            INSERT INTO telemetry_data 
            (company, model, modem, ig_version, fw_version, temp_c, carrier, network_mode,
             lte_str, nsa_str, sa_str, lte_snr, nsa_snr, sa_snr,
             lte_band, nsa_band, sa_band, sim1_total_mbytes, sim2_total_mbytes,
             sim1_tx_mbytes, sim1_rx_mbytes, sim2_tx_mbytes, sim2_rx_mbytes,
             up_time, car_agg_lte, car_agg_nr5g, raw_json)
            VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
        """, values)
        
        conn.commit()
        cur.close()
        conn.close()
        
        logger.info(f"Data collected - Carrier: {values[6]}, Temp: {values[5]}°C")
        return True
        
    except Exception as e:
        logger.error(f"Collection error: {e}")
        return False

def main():
    logger.info("InvisaGig Collector Started")
    if not wait_for_database() or not create_tables():
        sys.exit(1)
    
    while True:
        try:
            collect_data()
            time.sleep(COLLECTION_INTERVAL)
        except KeyboardInterrupt:
            break
        except Exception as e:
            logger.error(f"Error: {e}")
            time.sleep(60)

if __name__ == "__main__":
    main()
EOF

echo "Collector script created"
				
			
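For the curious: the `fix_json` helper exists because the device’s info.json can contain empty values (for example `"temp": ,`) that strict JSON parsers reject. A small standalone demonstration of the same repair logic used in collector.py above (the sample payload is hypothetical):

```python
import json
import re

def fix_json(raw_text):
    # Same repairs as collector.py: turn empty values into explicit nulls
    raw_text = re.sub(r':\s*,', ': null,', raw_text)
    raw_text = re.sub(r':\s*\n\s*}', ': null\n}', raw_text)
    raw_text = re.sub(r':\s*}', ': null}', raw_text)
    return raw_text

# A malformed payload of the kind the device can emit (hypothetical values)
raw = '{"carrier": "AT&T", "temp": , "snr": }'
data = json.loads(fix_json(raw))
print(data)  # {'carrier': 'AT&T', 'temp': None, 'snr': None}
```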

Step 6: Configure Grafana Datasource

📋 Copy and paste this entire block:

				
					cd /opt/invisagig-telemetry

ADMIN_PASSWORD=$(grep POSTGRES_PASSWORD .env | cut -d'=' -f2)

cat > grafana/datasources/postgres.yaml << EOF
apiVersion: 1

datasources:
  - name: InvisaGig PostgreSQL
    type: postgres
    access: proxy
    url: postgres:5432
    database: invisagig_telemetry
    user: invisagig_user
    secureJsonData:
      password: $ADMIN_PASSWORD
    jsonData:
      sslmode: disable
      postgresVersion: 1500
    isDefault: true
    editable: true
EOF

echo "Grafana datasource configured"
				
			

Step 7: Build and Start Services

📋 Copy and paste this entire block:


				
					cd /opt/invisagig-telemetry

echo "Building Docker containers..."
docker-compose build --no-cache

echo "Starting services..."
docker-compose up -d

echo "Waiting for services to initialize..."
sleep 30

ADMIN_PASSWORD=$(grep GRAFANA_ADMIN_PASSWORD .env | cut -d'=' -f2)
docker-compose exec grafana grafana-cli admin reset-admin-password "$ADMIN_PASSWORD"

SERVER_IP=$(hostname -I | awk '{print $1}')
echo "
=========================================
InvisaGig Telemetry System Ready!
=========================================
Grafana URL: http://$SERVER_IP:3000
Username: admin
Password: [The password you set in Step 2]

Your telemetry system is now running!
========================================="
				
			

Accessing Grafana

  1. Open your browser and navigate to: http://YOUR_SERVER_IP:3000
  2. Login with:
    • Username: admin
    • Password: The password you set in Step 2

The InvisaGig PostgreSQL datasource is already configured and ready to use.


Importing Dashboards

Richard’s ready-made custom dashboard is available for download, just for YOU!

To import a dashboard JSON file:

  1. Click the Grafana (open menu) icon/logo in the upper left corner of the page
  2. Navigate to Dashboards → click the button near the upper right (it likely says “New”), then select Import
  3. Click Upload JSON file and select your dashboard file
  4. In the import screen, ensure InvisaGig PostgreSQL is selected as the datasource
  5. Click Import

Your dashboard will be created and start displaying real-time data from your InvisaGig device.

NOTICE! You may not see updated info on your dashboard for a few minutes.

If you still don’t see data on your dashboard, go to Connections → Data sources → InvisaGig PostgreSQL, scroll to the bottom, and click “Save & Test”. It should respond with a message like “Database Connection OK”.


Verify Data Collection

📋 Check collector logs:

				
					cd /opt/invisagig-telemetry
docker-compose logs collector --tail=10
				
			

You should see messages like:

				
					Data collected - Carrier: AT&T, Temp: 38°C
				
			

📋 Check database records:

				
					docker-compose exec postgres psql -U invisagig_user -d invisagig_telemetry -c "SELECT COUNT(*) as total_records FROM telemetry_data;"
				
			
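Once records are accumulating, Grafana panels are just SQL against the `telemetry_data` table. A hedged example of a time-series panel query (column names come from the schema in Step 5; `$__timeFilter` is Grafana’s standard time-range macro for SQL datasources):

```sql
SELECT
  timestamp AS "time",
  lte_str   AS "LTE signal",
  nsa_str   AS "5G NSA signal"
FROM telemetry_data
WHERE $__timeFilter(timestamp)
ORDER BY timestamp;
```

Paste a query like this into a panel’s query editor with the InvisaGig PostgreSQL datasource selected.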

System Management Commands

View Real-time Logs

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose logs -f collector
				
			

Check Service Status

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose ps
				
			

Stop the System

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose down
				
			

Restart Services

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose restart
				
			

View Database Records

📋 Copy and paste:


				
					cd /opt/invisagig-telemetry
docker-compose exec postgres psql -U invisagig_user -d invisagig_telemetry -c "SELECT timestamp, carrier, temp_c FROM telemetry_data ORDER BY timestamp DESC LIMIT 10;"
				
			

Troubleshooting

Grafana Login Issues

If you cannot login to Grafana, reset the password:

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose exec grafana grafana-cli admin reset-admin-password YourNewPassword
				
			

Collector Connection Issues

Check if your InvisaGig device is accessible:

📋 Copy and paste (replace YOUR_INVISAGIG_IP):

				
					curl http://YOUR_INVISAGIG_IP/telemetry/info.json
				
			

📋 View collector error logs:

				
					cd /opt/invisagig-telemetry
docker-compose logs collector --tail=50
				
			

Database Connection Issues

Test database connectivity:

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose exec postgres psql -U invisagig_user -d invisagig_telemetry -c "SELECT 1;"
				
			

Restart Everything

If you need to restart the entire system:

📋 Copy and paste:

				
					cd /opt/invisagig-telemetry
docker-compose down
docker-compose up -d
				
			

System Information

  • Database: PostgreSQL 15 Alpine
  • Collection Interval: 60 seconds
  • Data Retention: Unlimited (consider implementing cleanup for long-term use)
  • Default Timezone: America/Chicago (configurable in .env)
  • Ports Used: 3000 (Grafana)
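The retention note above can be acted on with a simple periodic delete; for example, to keep roughly 90 days of history (the 90-day cutoff is an arbitrary choice, and this cleanup is not part of the installed system):

```sql
-- Example retention cleanup: remove telemetry older than 90 days
DELETE FROM telemetry_data
WHERE timestamp < NOW() - INTERVAL '90 days';
```

You could run this by hand through `docker-compose exec postgres psql` or schedule it with cron.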

Data Collected

The system collects and stores:

  • Device information (model, firmware, carrier)
  • Temperature readings
  • Signal strength (LTE, 5G NSA, 5G SA)
  • Signal quality (SNR values)
  • Network bands
  • Data usage per SIM (Total, TX, RX)
  • Device uptime
  • Carrier aggregation data
  • Complete raw JSON for reference
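Because every row also stores the complete payload in the `raw_json` JSONB column, fields that were not broken out into their own columns can still be queried later. A sketch, assuming the payload layout used by the collector in Step 5:

```sql
-- Pull the device model straight out of the stored raw payload
SELECT timestamp, raw_json->'device'->>'model' AS model
FROM telemetry_data
ORDER BY timestamp DESC
LIMIT 5;
```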

Installation Complete!

Your InvisaGig telemetry system is now operational and collecting data every 60 seconds. Access Grafana to visualize your device metrics and import custom dashboards.

For support: [email protected]
