Project Overview
AWS DeepRacer is a 1/18th-scale autonomous racing car platform that provides a practical way to learn reinforcement learning. The challenge was to develop a model that could navigate complex racing tracks at high speed and achieve competitive lap times.
Technical Implementation
Reward Function Engineering
- Designed a multi-term reward function incorporating several parameters (illustrated in the sketch after this list):
  - Track position and centerline deviation
  - Steering angle and stability metrics
  - Speed optimization based on track segments
  - Progress tracking and completion incentives
- Implemented dynamic reward scaling based on racing conditions
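A minimal sketch of what such a multi-term reward function can look like, using the standard DeepRacer `params` input dictionary. The weights, thresholds, and the 4 m/s speed cap are illustrative placeholders, not the values used in the competition model:

```python
def reward_function(params):
    """Illustrative multi-term DeepRacer reward function (sketch only)."""
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering_angle = abs(params['steering_angle'])
    speed = params['speed']
    progress = params['progress']
    steps = params['steps']
    all_wheels_on_track = params['all_wheels_on_track']

    # Penalize leaving the track outright.
    if not all_wheels_on_track:
        return 1e-3

    # Centerline term: full credit near the center, decaying toward the edges.
    center_reward = max(1.0 - (distance_from_center / (track_width / 2.0)), 0.0)

    # Stability term: discourage aggressive steering that causes zig-zagging.
    steering_penalty = 0.8 if steering_angle > 15.0 else 1.0

    # Speed term: reward carrying speed, scaled to an assumed 0-4 m/s range.
    speed_reward = speed / 4.0

    # Progress incentive: small bonus for covering track distance efficiently.
    progress_bonus = (progress / steps) if steps > 0 else 0.0

    reward = (center_reward + speed_reward + progress_bonus) * steering_penalty
    return float(reward)
```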
Hyperparameter Optimization
- Fine-tuned critical parameters (see the configuration sketch after this list), including:
  - Learning rate: optimized for convergence speed
  - Entropy and discount factor
  - Neural network architecture
  - Action space granularity
- Developed a new method for assessing model performance
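For illustration, a hyperparameter and action-space configuration in the spirit of the DeepRacer console settings. The key names are shorthand and the values are placeholders, not the tuned settings of the final model:

```python
# Illustrative PPO hyperparameter configuration for a DeepRacer training job.
hyperparameters = {
    "batch_size": 64,            # gradient descent batch size per policy update
    "num_epochs": 10,            # passes over the experience buffer per update
    "learning_rate": 3e-4,       # tuned primarily for convergence speed
    "entropy": 0.01,             # exploration bonus; higher -> more exploration
    "discount_factor": 0.99,     # how far ahead future rewards are credited
    "loss_type": "huber",        # more robust to outlier advantages than MSE
    "episodes_per_iteration": 20,  # experience collected per policy update
}

# Action space granularity: coarser or finer steering/speed steps change how
# precise the policy's choices can be (values here are examples only).
action_space = [
    {"steering_angle": angle, "speed": speed}
    for angle in (-30, -15, 0, 15, 30)
    for speed in (1.0, 2.5, 4.0)
]
```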
Training Infrastructure
- Used AWS SageMaker for distributed training
- Implemented parallel training pipelines for faster iteration
- Created automated evaluation scripts for performance tracking (a minimal example follows this list)
- Optimized compute resource utilization to reduce costs
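A minimal sketch of such an evaluation script, assuming episode traces have been exported to a CSV (e.g. a SimTrace-style log) with `episode`, `progress`, and `tstamp` columns; the file name and column names are assumptions about the log format rather than guaranteed fields:

```python
import pandas as pd

def summarize_run(log_path: str) -> pd.DataFrame:
    """Aggregate per-episode completion and lap-time statistics from a trace CSV."""
    df = pd.read_csv(log_path)

    per_episode = df.groupby("episode").agg(
        final_progress=("progress", "max"),
        start_time=("tstamp", "min"),
        end_time=("tstamp", "max"),
    )
    per_episode["lap_time_s"] = per_episode["end_time"] - per_episode["start_time"]
    per_episode["completed"] = per_episode["final_progress"] >= 100.0
    return per_episode

if __name__ == "__main__":
    # Hypothetical file name for an exported evaluation trace.
    summary = summarize_run("evaluation_simtrace.csv")
    completion_rate = summary["completed"].mean()
    best_lap = summary.loc[summary["completed"], "lap_time_s"].min()
    print(f"Completion rate: {completion_rate:.1%}")
    print(f"Best completed lap: {best_lap:.2f} s")
```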
Results & Impact
- Achieved ranking in top 0.25% globally (13th out of 5,200+ submissions), receiving recognition from AWS
- Reduced training costs by 40% through efficient resource management (comparing the first model's training run with the final model's)
Technical Stack
- AWS Services:
  - DeepRacer Console
  - SageMaker
  - RoboMaker
- Languages & Frameworks:
  - Python
  - TensorFlow
  - Pandas for data analysis
Key Learnings
- Deep understanding of reinforcement learning principles
- Practical experience with hyperparameter tuning
- Efficient cloud resource management
- Algorithm optimization for smart decision-making