Senior Data Engineer, Analytics
The Senior Data Engineer, Analytics role sits at the intersection of Analytics, Data Engineering, and Business Intelligence, serving as a technical expert who translates business needs into highly efficient data products. The Data Team is responsible for architecting and transforming raw data into self-service dashboards that business stakeholders can use to derive People and Engineering insights. In this role, you'll have the opportunity to drive impact at scale by delivering trusted, transformed data that Senior Leadership will use to power People and Engineering business decisions at GitLab.
Responsibilities
- Collaborate with other functions across the company by building reports and dashboards with useful analysis and data insights
- Explain trends across data sources, potential opportunities for growth or improvement, and data caveats for descriptive, diagnostic, predictive (including forecasting), and prescriptive data analysis
- Develop a deep understanding of how data is created and transformed through GitLab products and third-party services [1] to help drive product design and service usage, and to flag impacts to data reporting capabilities
- Understand and document the full lifecycle of data and our common data framework so that all data can be integrated, modeled for easy analysis, and analyzed for data insights
- Document every action in either issue/MR templates, the handbook [2], or READMEs so your learnings turn into repeatable actions and then into automation following the GitLab tradition of handbook first! [3]
- Expand our database with clean data (ready for analysis) by implementing data quality tests while continuously reviewing, optimizing, and refactoring existing data models
- Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review
- Contribute to and implement data warehouse and data modeling best practices, keeping reliability, performance, scalability, security, automation, and version control in mind
- Follow and improve our processes and workflows for maintaining high quality data and reporting while implementing the DataOps [4] philosophy in everything you do
Requirements
- A minimum of 3-5 years of experience in a similar role
- As a Subject Matter Expert (SME), advocate for improvements to data quality, security, and query performance that have particular impact across your team
- Solve technical problems of high scope and complexity
- Exert influence on the long-range goals of your team
- Understand the code base extremely well in order to conduct new data innovation and to spot inconsistencies and edge cases
- Experience with performance and optimization problems, particularly at large scale, and a demonstrated ability to both diagnose and prevent these problems
- Help define and improve our internal standards for style, maintainability, and best practices for a high-scale data environment; maintain and advocate for these standards through code review
- Represent GitLab and its values in public communication around broader initiatives, specific projects, and community contributions
- Provide mentorship for Junior and Intermediate Engineers on your team to help them grow in their technical responsibilities
- Deliver and explain data analytics methodologies and improvements with minimal guidance and support from other team members. Collaborate with the team on larger projects
- Build close relationships with other functional teams to truly democratize data understanding and access
- Influence and implement our service level framework SLOs [5] and SLAs for our data sources and data services
- Identify changes to the product architecture and third-party services from a reliability, performance, and availability perspective, using a data-driven approach focused on relational databases; knowledge of other data stores is a plus
- Proactively work on efficiency and capacity planning to set clear requirements and reduce system resource usage, making compute queries cheaper
- Participate in the Data Quality Process [6] and other data auditing activities
Also, we know it’s tough, but please try to avoid the confidence gap. You don’t have to match all the listed requirements exactly to be considered for this role.
Hiring Process
To view the full job description and hiring process, please view our handbook [7]. Additional details about our process can also be found on our hiring page [8].
Country Hiring Guidelines
Please visit our Country Hiring Guidelines [9] page to see where we can hire.
Your Privacy
For information about our privacy practices in the recruitment process, please visit our Recruitment Privacy Policy [10] page.
[1] https://about.gitlab.com/handbook/business-ops/data-team/#-extract-and-load
[2] https://about.gitlab.com/handbook/
[3] https://about.gitlab.com/handbook/handbook-usage/#why-handbook-first
[4] https://en.wikipedia.org/wiki/DataOps
[5] https://about.gitlab.com/handbook/business-ops/data-team/platform/#slos-service-level-objectives-by-data-source
[6] https://about.gitlab.com/handbook/business-ops/data-team/data-quality-process/
[7] https://about.gitlab.com/job-families/finance/data-analyst/
[8] https://about.gitlab.com/handbook/hiring/
[9] https://about.gitlab.com/handbook/people-group/employment-solutions/#country-hiring-guidelines
[10] https://about.gitlab.com/handbook/hiring/recruitment-privacy-policy/