About Solidus Labs
At Solidus, we believe digital assets offer an opportunity to transform finance and make it more inclusive, transparent, and efficient. Our team uses over 20 years of experience developing Wall Street-grade FinTech to build crypto-native market surveillance and compliance infrastructure. Digital asset firms globally use our products to detect, investigate, and report compliance threats, trade manipulation, and fraud in order to protect investors, provide transparency, apply for licensing, and comply with regulation. Our work is key to promoting institutional adoption of digital assets and growing the ecosystem.
About the position
We’re looking for a talented Data Engineer with experience building robust, scalable, maintainable, and thoroughly monitored data pipelines in cloud environments.
The quality of our data-related products depends heavily on the quality of incoming data, and we’re looking for someone who can help us tackle issues such as data duplication, velocity, schema adherence (and schema versioning), high availability, data governance, and more.
As a young and ambitious company in an extremely dynamic space, we need candidates who are independent, accountable, and organized, with a self-starter attitude and a willingness to get their hands dirty with day-to-day work that may fall outside their official scope, all while keeping an eye on their goals and the big picture.
Requirements
- BSc in Computer Science from a top university, or equivalent
- 5+ years in data engineering and data pipeline development in high-volume production environments
- 5+ years of experience with Java- and Python-based systems
- 2+ years of experience with monitoring systems (Prometheus, Grafana, Zabbix, Datadog)
- Extensive SQL experience
- Ability to design, develop, and maintain end-to-end ETL workflows, including data ingestion and transformation logic across different data sources
- Experience with big data programming, preferably using Apache Spark, or with messaging queues such as Apache Kafka, RabbitMQ, etc.
- Experience designing infrastructure to maintain high-availability SLAs
- Experience monitoring and managing production environments
Preferred
- Familiarity with Airflow, ETL tools & Snowflake
- Strong collaborative spirit: able to work closely with engineers, data scientists, and financial experts to design solutions while acting as the data engineering authority
- Experience with Docker & Kubernetes