Yuriy Markiv
Data Engineer
Versatile data engineer with 16+ years of experience building scalable, governed data platforms across customer, commerce, and financial domains. Expert in designing ETL/ELT frameworks, API integrations, and high-volume batch and streaming pipelines (Kafka), with strong foundations in data modeling, data quality, and performance optimization. Proven track record of unifying fragmented sources through reusable connectors and cloud-native pipelines on AWS, Azure, and GCP. Systems-driven and detail-oriented, focused on delivering reliable, analytics-ready data that supports reporting and business decision-making.
Key Projects & Skills
- Built and restructured scalable data ecosystems across customer, commerce, financial, and transactional domains, unifying fragmented sources into governed, analytics-ready platforms.
- Designed and implemented end-to-end ETL/ELT frameworks from scratch, including batch and near-real-time pipelines with Kafka and Airflow orchestration.
- Developed robust API-driven integrations (REST, SOAP, GraphQL) and reusable connectors, enabling reliable data movement across diverse sources and targets.
- Modeled high-quality analytical data structures using star schema and normalization principles to support operational and reporting needs.
- Implemented automated data validation, reconciliation, monitoring, and error-handling frameworks to ensure data quality and pipeline reliability.
- Optimized query workloads, warehouse performance, and high-volume ingestion pipelines across PostgreSQL, SQL Server, MySQL, Snowflake, and MongoDB environments.
- Delivered cloud-native data solutions across AWS, Azure, and GCP using Python-based engineering, CI/CD practices, and version-controlled workflows.
- Partnered cross-functionally with analytics, product, and DevOps teams, providing architectural guidance, documentation, and governance best practices for scalable data operations.