# Hi there, I'm Edwin 👋
I'm a recent Computer Science graduate from UC Berkeley. I specialize in AI and am passionate about building tools that solve complex real-world problems at scale.
## Contact Info

```python
class ContactInfo:
    email = "edwinkim0509@berkeley.edu"
    phone = "(310) 617 6693"
    linkedin = "linkedin.com/in/kedwin"
    location = "Berkeley, CA"
    education = "UC Berkeley, CS (2025)"
```

## Current Focus
- Building projects that I would actually use in my day-to-day life.
- Contributing to real-time open source projects.
- Taking care of my kitten.
## Pinned

### Hermes
Grid Outage Prediction & Restoration Time Estimator: 94% precision in forecasting grid failures 48 hours in advance, with a real-time geospatial dashboard processing 10M+ data points daily.

`Random Forest` · `Geospatial` · `Python`
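As a toy illustration of the kind of pipeline behind Hermes, a Random Forest classifier can be trained on tabular grid features and scored on precision. This is a minimal sketch on synthetic data; the feature names and labeling rule are hypothetical stand-ins, not Hermes's actual inputs.

```python
# Hedged sketch: Random Forest outage prediction scored on precision.
# Features (wind speed, line load, equipment age) are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.random((n, 3))  # synthetic feature matrix
# Toy label: outages more likely under combined high wind and load
y = ((X[:, 0] + X[:, 1]) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"precision: {precision_score(y_te, clf.predict(X_te)):.2f}")
```

In a real deployment, the feature matrix would come from grid telemetry and weather feeds rather than random draws, and the threshold would be tuned for the precision/recall trade-off the operator needs.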
### CALI
Real-time Stress-Tracking Platform: a custom BiLSTM neural network processing 5 heterogeneous data streams, converging 40% faster than baseline.

`BiLSTM` · `Signal Processing` · `Real-time` · `Python`
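Fusing heterogeneous sensor streams within a tight time tolerance, as CALI does, can be sketched with an as-of join. This is an assumption-laden toy: the stream names (`heart_rate`, `eda`) and timestamps are hypothetical, and pandas' `merge_asof` stands in for whatever alignment CALI actually uses.

```python
# Hedged sketch: align two sensor streams within a 10 ms tolerance.
import pandas as pd

hr = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 00:00:00.000",
                          "2025-01-01 00:00:00.050"]),
    "heart_rate": [72, 74],
})
eda = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 00:00:00.004",
                          "2025-01-01 00:00:00.120"]),
    "eda": [0.31, 0.35],
})

# Keep the nearest EDA sample within 10 ms of each heart-rate sample;
# samples with no match inside the tolerance come back as NaN.
fused = pd.merge_asof(hr, eda, on="ts",
                      direction="nearest",
                      tolerance=pd.Timedelta("10ms"))
print(fused)
```

Here the first heart-rate sample pairs with the EDA reading 4 ms away, while the second has no EDA sample within 10 ms and stays unmatched, which is the behavior you want before feeding aligned windows to a sequence model.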
## Technical Skills
**ML / AI:** Python · TensorFlow · PyTorch · BiLSTM · Random Forest · Signal Processing · Time Series

**Backend:** API Design · Database Optimization · Distributed Systems · Caching · FastAPI · Django

**Frontend:** React · Next.js · Tailwind CSS · Data Viz · Real-time UI

**Data Eng:** ETL Pipelines · Data Fusion · Geospatial · Pandas/NumPy

**Infrastructure:** Git · Docker · AWS · Linux · CI/CD
## Experience
**ML Engineer** · CALI · 2025
- Architected a real-time stress-tracking platform using custom BiLSTM networks.
- Fused 5 heterogeneous time-series streams with <10 ms tolerance.
- Generated 100,000+ synthetic data points for model training.
- Achieved a 40% improvement in model convergence speed.
**SWE Intern** · MyFitnessPal · 2024
- Refactored the core food-log query engine serving 50M+ DAU.
- Reduced request latency by 25% and API payload size by 40%.
- Optimized queries for millions of concurrent sessions.
**Computational Researcher** · UC Berkeley · 2023
- Designed a scalable materials informatics pipeline for 1,000+ samples.
- Automated extraction of 50+ high-dimensional features.
- Created a dataset of 50,000+ data points, improving accuracy by 23%.