Surveillance System

4-Virus Surveillance

Real-time monitoring system for Hantavirus, Polyomavirus, Spumavirus, and Eastern equine encephalitis virus (EEEV) in Japanese pig populations.

Target Viruses

Hantavirus (ハンタウイルス)

Severity: HIGH
Source: Rodent-borne zoonotic
High threshold: >100 copies/mL
Status: Monitored

Polyomavirus (ポリオーマウイルス)

Severity: HIGH
Source: Sus scrofa polyomavirus
High threshold: >100 copies/mL
Status: Monitored

Spumavirus (スピューマウイルス)

Severity: CRITICAL
Source: MHLW Special Management #5
Critical threshold: >500 copies/mL
High threshold: >100 copies/mL
Status: Monitored

EEEV (Eastern equine encephalitis virus; 東部ウマ脳炎ウイルス)

Severity: CRITICAL
Source: Not endemic to Japan
Critical threshold: any detection
Status: Monitored
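
The detection thresholds above can be sketched as a single classification function. This is a minimal sketch: the function name and the MEDIUM/LOW fallback for low-level or informational results are assumptions drawn from the Severity Classification section below, not part of the documented system.

```python
# Minimal sketch of the per-virus thresholds above (copies/mL).
# Function name and the MEDIUM/LOW fallback are illustrative assumptions.

def classify_detection(virus: str, copies_per_ml: float) -> str:
    """Map a qPCR result (copies/mL) to a severity level."""
    v = virus.lower()
    if v == "eeev" and copies_per_ml > 0:
        return "CRITICAL"      # any detection: EEEV is not endemic to Japan
    if v == "spumavirus" and copies_per_ml > 500:
        return "CRITICAL"      # MHLW special-management threshold
    if v == "spumavirus" and copies_per_ml > 100:
        return "HIGH"
    if v in ("hantavirus", "polyomavirus") and copies_per_ml > 100:
        return "HIGH"
    # Low-level detections fall through to MEDIUM; zero-copy results are LOW.
    return "MEDIUM" if copies_per_ml > 0 else "LOW"
```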

External Information Sources

Daily Monitoring Sources

Automated collection at 11:00 JST (02:00 UTC)

MAFF (Daily)

Ministry of Agriculture, Forestry and Fisheries

URL: www.maff.go.jp
Method: Web Scraping
Data: Surveillance Reports

E-Stat (Daily)

Government Statistics Portal

URL: www.e-stat.go.jp
Method: REST API
Data: Livestock Statistics

PubMed (Daily)

NCBI PubMed Database

URL: pubmed.ncbi.nlm.nih.gov
Method: E-utilities API
Data: Research Publications

J-STAGE (Daily)

Japan Science and Technology Agency (JST) publication platform

URL: www.jstage.jst.go.jp
Method: Web Scraping
Data: Japanese Publications (ToS Compliant)
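
For the PubMed source, a daily query can go through the NCBI E-utilities `esearch` endpoint. The helper name and the example search term below are illustrative, but `db`, `term`, `reldate`, `datetype`, `retmode`, and `retmax` are standard `esearch` parameters.

```python
# Sketch of a daily PubMed query via NCBI E-utilities esearch.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(term: str, days: int = 1) -> str:
    """Return an esearch URL for publications from the last `days` days."""
    params = {
        "db": "pubmed",
        "term": term,
        "reldate": days,     # restrict to the most recent N days
        "datetype": "pdat",  # filter on publication date
        "retmode": "json",
        "retmax": 100,
    }
    return f"{EUTILS}?{urlencode(params)}"

# Example: build_pubmed_query("hantavirus AND swine")
```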

Severity Classification

CRITICAL

Response: < 5 min

Criteria:

Spumavirus >500 copies/mL; any EEEV detection

Actions:

SNS Immediate Alert
SMS to Key Personnel
Slack #critical-alerts
Dashboard Flashing
Pipeline Pause

HIGH

Response: < 30 min

Criteria:

Hantavirus >100 copies/mL, Polyomavirus >100 copies/mL

Actions:

SNS Notification
Email Alert
Slack #pathogen-alerts
Dashboard Warning

MEDIUM

Response: < 2 hours

Criteria:

External keyword match; low-level detections

Actions:

Email Notification
Slack #pathogen-monitoring
Dashboard Display

LOW

Response: < 24 hours

Criteria:

Academic publications; informational items

Actions:

Dashboard Record
Daily Summary
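
The routing implied by the four severity levels can be expressed as a lookup table. The channel names come from the action lists above; the dispatch keys, table name, and function name are illustrative.

```python
# Sketch of severity-based alert routing per the action lists above.
ROUTES = {
    "CRITICAL": ["sns", "sms", "slack:#critical-alerts", "dashboard", "pipeline-pause"],
    "HIGH":     ["sns", "email", "slack:#pathogen-alerts", "dashboard"],
    "MEDIUM":   ["email", "slack:#pathogen-monitoring", "dashboard"],
    "LOW":      ["dashboard", "daily-summary"],
}

def route_alert(severity: str) -> list:
    """Return the notification channels for a severity level."""
    return ROUTES.get(severity.upper(), ROUTES["LOW"])
```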

System Architecture

Data Flow

Dual-source monitoring with severity-based alerting

External Sources (Daily)        Internal Pipeline (Real-time)
      ↓                               ↓
  Lambda Collector              S3 Event Trigger
      ↓                               ↓
  DynamoDB ←──────────────────── Lambda Listener
      ↓
Severity Engine
      ↓
Notification Router
   ├─ SNS/SES (Email/SMS)
   ├─ Slack (Bot API + Webhooks)
   ├─ Streamlit Dashboard
   └─ REST API
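
On the internal path, the Lambda listener in the diagram is invoked with S3 put events. A minimal sketch of the event parsing, assuming the standard S3 notification payload shape; the function name is illustrative and the DynamoDB write is omitted.

```python
# Sketch of the Lambda listener's S3 event handling (DynamoDB write stubbed out).

def parse_s3_event(event: dict) -> list:
    """Return (bucket, key) pairs from an S3 put-event payload."""
    found = []
    for rec in event.get("Records", []):
        s3 = rec.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            found.append((bucket, key))
    return found
```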

System Components

Alerting

• AWS SNS Topics

• SES Email Templates

• SMS for Critical

• Deduplication (1h)
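
The 1-hour deduplication can be sketched as a suppression window keyed on (virus, severity). In the deployed system this state would live in DynamoDB rather than process memory, and all names here are illustrative.

```python
# Sketch of 1h alert deduplication: suppress a repeat of the same
# (virus, severity) pair within the window. In-memory state for illustration.
import time

_last_sent = {}
WINDOW = 3600  # seconds

def should_send(virus: str, severity: str, now=None) -> bool:
    """True if no identical alert fired within the last hour."""
    now = time.time() if now is None else now
    key = (virus, severity)
    if now - _last_sent.get(key, -WINDOW - 1) < WINDOW:
        return False
    _last_sent[key] = now
    return True
```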

Slack Integration

• Bot API + Webhooks

• Rich Block Kit Format

• Channel Routing

• Daily Summaries

Dashboard

• Streamlit UI

• Real-time Updates (30s)

• Plotly Charts

• Trend Analysis

Storage

• DynamoDB Tables (3)

• S3 Data Lake

• TTL: 24h (J-STAGE ToS)

• Point-in-Time Recovery
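
The 24-hour TTL works by storing a future epoch timestamp that DynamoDB compares against when expiring items. A sketch, assuming an `expires_at` TTL attribute; the attribute, table item shape, and function name are illustrative.

```python
# Sketch of a J-STAGE record carrying the 24h TTL; "expires_at" must be
# declared as the table's TTL attribute for DynamoDB to expire it.
import time

def jstage_item(record_id, payload, now=None):
    """Build an item that DynamoDB will expire 24 hours after ingest."""
    now = time.time() if now is None else now
    return {
        "record_id": record_id,
        "payload": payload,
        "expires_at": int(now) + 24 * 3600,  # epoch seconds, per DynamoDB TTL
    }
```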

Quick Start

Launch Dashboard

streamlit run surveillance/dashboard/app.py
# Access: http://localhost:8501

Start REST API

uvicorn surveillance.api.main:app --reload --port 8000
# API Docs: http://localhost:8000/docs

Manual Collection Test

# Test PubMed + J-STAGE search
python surveillance/external/academic_monitor.py

# Test MAFF scraping
python -m surveillance.external.maff_scraper

# Test E-Stat API
python -m surveillance.external.estat_client

Slack Notification Setup

# Configure Slack credentials
cp surveillance/.env.template surveillance/.env
# Edit .env with your Slack Bot Token

# Test Slack connection
python surveillance/tests/test_slack_integration.py --test-conn

# Send test alerts
python surveillance/tests/test_slack_integration.py --test-alert

API Endpoints

REST API (FastAPI)

Programmatic access to surveillance data

GET
/api/v1/detections
Get virus detections (filterable)
GET
/api/v1/alerts/active
Get active alerts summary
GET
/api/v1/external/daily-updates
Get external source updates
GET
/api/v1/statistics/trends
Get detection trends
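
A client can build filtered requests against the detections endpoint. The `virus` and `since` query parameters below are assumptions about the filter interface, not documented parameters.

```python
# Sketch of a client-side URL builder for the detections endpoint.
from urllib.parse import urlencode

BASE = "http://localhost:8000"

def detections_url(virus=None, since=None) -> str:
    """Build /api/v1/detections with optional (assumed) filter params."""
    params = {k: v for k, v in {"virus": virus, "since": since}.items() if v}
    qs = f"?{urlencode(params)}" if params else ""
    return f"{BASE}/api/v1/detections{qs}"
```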

Important Notes