Predictive Driving Intelligence Showcase

Static presentation for a reinforcement learning policy, a learned world model, and V2X-informed lane-risk estimation.

System Summary

Project Overview

This deployment presents a compact autonomous driving decision stack built around three elements: a discrete actor-critic policy for lane selection, a learned world model for short-horizon prediction, and V2X-style lane danger signals that summarize local traffic risk.
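As a rough illustration of how two of these elements might interact, the sketch below combines the actor's lane logits with V2X-style per-lane danger summaries: lanes whose summarized risk exceeds a threshold are masked out before the argmax. Every name, dimension, and the 0.5 threshold here is an assumption for illustration, not the project's actual implementation; the world model's role is covered separately under Predictive Modeling.

```python
import numpy as np

# Minimal sketch of a masked lane choice (all values here are assumptions).
STATE_DIM, N_LANES = 8, 3
rng = np.random.default_rng(0)

def policy_logits(state, W):
    # Stand-in for the discrete actor head: one logit per target lane.
    return state @ W

def lane_danger(signals):
    # V2X-style per-lane risk summaries, assumed normalized to [0, 1].
    return np.asarray(signals, dtype=float)

def select_lane(state, W, signals, threshold=0.5):
    # Mask lanes whose summarized danger exceeds the threshold, then
    # choose the highest-logit lane among the survivors.
    logits = policy_logits(state, W)
    masked = np.where(lane_danger(signals) > threshold, -np.inf, logits)
    return int(np.argmax(masked))

W = rng.normal(size=(STATE_DIM, N_LANES))
state = rng.normal(size=STATE_DIM)
print(select_lane(state, W, [0.1, 0.9, 0.2]))
```

The masking step is the point of the sketch: the policy remains free to rank safe lanes however it likes, but a sufficiently dangerous lane is never selectable.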

The public site is intentionally static. Training runs offline, while the deployed experience focuses on model structure, training data design, and representative rollout evidence from saved artifacts.

Reinforcement Learning

Policy and Environment Design

The reinforcement learning component is framed as lane-level tactical decision-making in a three-lane traffic simulator. The agent receives a compact local state, chooses from discrete lane-plus-motion actions, and is optimized to balance progress, safety, and merge discipline.
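As a rough sketch of this framing, a minimal three-lane environment might look like the following. The observation layout, the five lane-plus-motion actions, the reward weights, and the episode length are all assumptions for illustration, not the simulator's actual definitions.

```python
import numpy as np

LANES = 3
# Hypothetical lane-plus-motion action set.
ACTIONS = ["keep", "left", "right", "accelerate", "brake"]

class ThreeLaneEnv:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.lane = 1      # start in the middle lane
        self.speed = 1.0
        self.t = 0
        return self._obs()

    def _obs(self):
        # Compact local state: own lane (one-hot), speed, and a
        # per-lane gap estimate to the nearest leading vehicle.
        onehot = np.eye(LANES)[self.lane]
        gaps = self.rng.uniform(0.2, 1.0, size=LANES)
        return np.concatenate([onehot, [self.speed], gaps])

    def step(self, action):
        name = ACTIONS[action]
        if name == "left":
            self.lane = max(0, self.lane - 1)
        elif name == "right":
            self.lane = min(LANES - 1, self.lane + 1)
        elif name == "accelerate":
            self.speed = min(2.0, self.speed + 0.1)
        elif name == "brake":
            self.speed = max(0.0, self.speed - 0.2)
        self.t += 1
        # Reward trades progress (speed) against a small lane-change
        # penalty, echoing the progress/safety/merge balance above.
        reward = self.speed - (0.1 if name in ("left", "right") else 0.0)
        done = self.t >= 100
        return self._obs(), reward, done
```

The shape follows the usual reset/step pattern so the same environment can serve both policy training and transition collection for the predictive model.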

Observation Space

Action Space

Reward Design

Learning Objective

Environment Generation

Predictive Modeling

World Model and Training Data

The predictive model is trained on transition tuples collected from policy-environment interaction. Its role is to approximate one-step dynamics, reward, and episode completion so that candidate action sequences can be evaluated before execution.
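A minimal sketch of this evaluation loop, assuming a one-step model with state and reward heads (the termination head is omitted, and the "model" here is a fixed linear map so the rollout logic stays concrete and runnable rather than trained):

```python
import numpy as np

# Hypothetical dimensions; not the project's actual values.
STATE_DIM, N_ACTIONS = 8, 5

class WorldModel:
    """Toy one-step model: predicts next state and reward for (s, a)."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))
        self.B = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))
        self.w = rng.normal(scale=0.1, size=STATE_DIM)

    def predict(self, state, action):
        next_state = state @ self.A + self.B[action]
        reward = float(next_state @ self.w)
        done = False  # termination head omitted in this sketch
        return next_state, reward, done

def imagined_return(model, state, actions):
    # Roll a candidate action sequence through the model and sum the
    # predicted rewards, so sequences can be ranked before execution.
    total = 0.0
    for a in actions:
        state, r, done = model.predict(state, a)
        total += r
        if done:
            break
    return total
```

Ranking candidate futures then reduces to computing `imagined_return` for each action sequence and sorting, which is the shape of the "Candidate Futures Ranked by Predicted Return" artifact shown below.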

Training Data Schema

Sample Transition

Optimization Targets

Saved Model Evidence

Showcase

The sections below are generated from saved artifacts rather than live execution. They show one fixed-sample policy rollout and one imagination example in which the world model's short-horizon predictions are compared against the real simulator.
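The predicted-vs-real comparison can be summarized with a simple per-step error metric. The helper below is a sketch that assumes both rollouts are stored as state sequences of matching dimension; the project's actual artifact schema is not shown here.

```python
import numpy as np

def state_error(predicted_states, real_states):
    """Mean per-step L2 error between predicted and real state sequences.

    Assumes each argument is a sequence of equal-length state vectors;
    only the overlapping horizon is compared.
    """
    p = np.asarray(predicted_states, dtype=float)
    r = np.asarray(real_states, dtype=float)
    n = min(len(p), len(r))
    return float(np.mean(np.linalg.norm(p[:n] - r[:n], axis=-1)))
```

A growing `state_error` over the horizon is the expected signature of compounding one-step prediction error, which is what the imagination rollouts below make visible.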

Fixed-Sample Policy Rollout

Full Rollout Log

Imagination Rollouts

Candidate Futures Ranked by Predicted Return

Predicted vs Real Environment