SIRVA is a leading partner for corporations outsourcing their
mobility needs, relocating their executives and staff
globally. SIRVA offers an extensive portfolio of mobility services
across approximately 170 countries, providing an end-to-end solution
that delivers an enhanced mobility experience along with program
control and security for customers. SIRVA's portfolio of well-known and
recognizable brands includes Allied Van Lines, northAmerican Van Lines,
SMARTBOX, and Allied Pickfords. For more information, please visit www.sirva.com.
SIRVA brings together strong, collaborative people in a dynamic culture of mutual respect, support, and passion for the brand and product. We believe innovation drives winning performance, and we constantly challenge ourselves to be the very best we can in every aspect of our business. You will be surrounded by some of the brightest and most driven people in the industry. At SIRVA, you will be in great company!
The Data Engineer, an emerging role on the SIRVA data and analytics team, will play a pivotal role in operationalizing the most urgent data and analytics initiatives for SIRVA's digital business. The bulk of the Data Engineer's work will be building, managing, and optimizing data pipelines and then moving them effectively into production for key data and analytics consumers (such as business/data analysts, data scientists, or any personnel who need curated data for data and analytics use cases). The Data Engineer will also be responsible for ensuring compliance with data governance and data security requirements while creating, improving, and operationalizing these integrated and reusable data pipelines.
FUNCTIONS AND RESPONSIBILITIES
10% - Create functional and technical documentation, e.g., data pipelines, source-to-target mappings, ETL specification documents, and run books
80% - Build data pipelines, apply automation to data integration and management, track data consumption patterns, and perform intelligent sampling and caching
10% - Test, debug, and document pipelines and data integration processes, including SQL and/or NoSQL queries
QUALIFICATIONS AND PREFERRED SKILLS
• Strong experience with advanced analytics tools for object-oriented/functional scripting, using languages such as R, Python, Java, or Scala.
• Strong ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and the business to integrate analytics and data science output into business processes and workflows.
• Strong experience with popular database programming languages such as SQL and PL/SQL for relational databases, plus certifications in emerging NoSQL/Hadoop-oriented nonrelational databases such as HBase or Cassandra.
• Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies, including ETL/ELT, data replication/CDC, message-oriented data movement, and API design and access, as well as emerging data ingestion and integration technologies such as stream data integration, complex event processing (CEP), and data virtualization.
• Strong experience working with SQL-on-Hadoop tools and technologies, including open-source options such as Hive, Impala, and Presto, and commercial offerings such as Hortonworks DataFlow (HDF) and Talend.
• Strong experience working with and optimizing existing ETL processes, data integration flows, and data preparation flows, and helping move them into production.
• Basic experience working with popular data discovery, analytics, and BI software tools such as Sisense, Tableau, Qlik, and Power BI for semantic-layer-based data discovery.
• Strong experience working with data science teams to refine and optimize data science and machine learning models and algorithms.
• Basic understanding of popular open-source and commercial data science platforms such as Python, R, KNIME, or Alteryx is a strong plus but not required.
• Demonstrated ability to work across multiple deployment environments (cloud, on-premises, and hybrid) and multiple operating systems.
• Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across the organization.
• Has good judgment, a sense of urgency and has demonstrated commitment to high standards of ethics, regulatory compliance, customer service and business integrity.
• Strong experience supporting and working with cross-functional teams in a dynamic business environment.
EDUCATION AND CERTIFICATION REQUIREMENTS
• A bachelor's or master's degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field (or equivalent work experience) is required.
• Advanced degree in computer science, statistics, applied mathematics is preferred.
- Pay Type: Salary
- Location: Oakbrook Terrace, IL, USA