I have been working with data since 2019 and have extensive experience using Python, Kafka, PySpark, and Airflow to design and build data pipelines. This expertise lets me process large volumes of data quickly and reliably while maintaining data quality and integrity.
Find me!