About Us: The HBAR Foundation acts as an integrated force multiplier to help builders and creators overcome the challenges of bringing their ideas to market. We do this through a comprehensive grant program that provides financial backing for projects; expert support across technology, marketing, and business development; and access to a thriving ecosystem that helps to raise awareness of projects, accelerate innovation, and scale adoption.
We understand that bringing an idea to market can be challenging from a number of perspectives. Lack of funding and operational support can be massive roadblocks for entrepreneurs and builders who want to rapidly deploy projects. We eliminate these roadblocks through a streamlined grant program that can quickly provide funding for a project, as well as expert support across technical, marketing, business development, and other operational functions needed to scale.
Position Overview: As a Staff Data Architect, you will be responsible for designing, developing, implementing, and automating robust data pipelines that ingest, transform, aggregate, and load data to and from diverse data sources across the Internet in order to power business-critical decision-making workflows and provide KPIs to both internal and external customers. You will work closely with stakeholders to understand business requirements, translate them into scalable technical solutions, and oversee the seamless integration of systems to automate and visualize the flow of data. Your expertise will be crucial in enhancing the efficiency and reliability of data pipelines, ensuring data availability and accuracy for analysis, and improving the organization’s ability to make data-driven decisions.
Key Responsibilities:
- Architecture: Design and implement scalable and efficient solutions for retrieval and long-term retention of data from a variety of internal and external sources (APIs, databases, cloud storage, third-party systems, etc.).
- Automation & Integration: Automate data collection and transformation to minimize human intervention by scripting, implementing ETL/ELT processes, and integrating APIs.
- Collaboration & Stakeholder Engagement: Work closely with cross-functional teams (technical and non-technical customers, including Business Development, Operations, Financial Analysts, and ecosystem partners) to understand data needs and provide simple and elegant solutions.
- Performance Optimization: Continuously monitor and optimize workflows to improve performance, scalability, and reliability, addressing any bottlenecks.
- Integrity and Consistency-Checking: Continuously monitor data integrity to ensure consistency, accuracy, and accessibility for analysis and decision-making, urgently addressing any data quality issues.
- Documentation & Best Practices: Develop and maintain documentation on data workflows, architectures, and automation. Ensure adherence to best practices in data management and security.
- Data Governance & Security: Implement and enforce data governance principles and security protocols to ensure the safe handling of sensitive data and compliance with applicable regulations.
Required Skills & Qualifications:
- Proven experience as a Data Architect, Data Engineer, or similar role, with a focus on automating data ingestion, transformation, and aggregation.
- Strong expertise in data modeling, data integration, and designing scalable data architectures.
- In-depth knowledge of various data storage systems (e.g., SQL, NoSQL, data lakes, cloud storage) and data processing frameworks.
- In-depth knowledge of modern data warehousing solutions (e.g., Snowflake, BigQuery).
- Experience with ETL tools, data orchestration platforms (e.g., DBT, Dagster, Apache Airflow, Fivetran), and scripting languages (e.g., Python, SQL).
- Proficiency with cloud platforms (AWS, Google Cloud) and related services for data storage, processing, and automation.
- Experience with APIs and web scraping for retrieving data from external sources.
- Strong problem-solving skills, with the ability to troubleshoot and optimize complex data workflows.
- Knowledge of data governance, data quality, and security best practices.
- Ability to communicate technical concepts effectively to both technical and non-technical stakeholders.
Preferred Qualifications:
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Experience working with Business Intelligence (BI) tooling (e.g., Lightdash, Looker).
- Experience working with Distributed Ledger Technologies.
- Experience working in the web3/blockchain/crypto ecosystem.
- Knowledge of Agile and Lean methodologies.
How we’ll support you:
- Competitive compensation package including restricted coin units
- 100% company paid medical, dental, and vision insurance
- 100% company paid short-term disability, long-term disability, and life insurance
- 401(k) matching of 50% on 8% of salary
- EAP resources such as mental health counseling, financial, and legal services
- Flexible paid time off to enable a balance between deliverables and personal priorities
- Mentorship, training and development from industry leaders to help progress your career
*As HBAR Inc. is a remote company hiring candidates around the world, our perks and benefit packages may vary based on your location.
HBAR Inc. is an Equal Employment Opportunity (EEO) employer and welcomes all qualified applicants. Applicants will receive fair and impartial consideration without regard to race, sex, color, religion, national origin, age, disability, veteran status, genetic information, or other legally protected status.