Christopher Becker’s Post


We are currently requesting resumes for the following position: Data Architect

Resume Due Date: Friday, June 14, 2024 (5:00 PM EST)
Number of Vacancies: 2
Job ID: 24-079
Level: MP6
Duration: 4 months
Hours of Work: 40
Location: 700 University Ave (100% Remote)
Recruiter: Valerie Dziawa

*For the complete job description, please click the link below*

Job Overview

Data Architecture and Design: Lead the design and implementation of modular, scalable data pipelines (ELT/ETL) and data infrastructure, including curated common data models that provide a single source of truth for analytics and downstream systems.

Data Security and Standards: Collaborate with security teams to ensure data security during transfer and storage. Define data modeling and database field standards for optimal join performance and data extensibility.

Data Pipeline Development: Create code templates for data pipelines and transformations across data types (structured, semi-structured, unstructured).

Data Modeling Expertise: Develop modeling guidelines for building facts, dimensions, and other data model components following industry best practices (a brief illustrative sketch follows this posting).

Data Transformation: Transform data into a more consumable semantic layer for business users, translating technical language into business-centric terms.

Technology Adoption: Research and introduce new technologies into the environment through proofs of concept (POCs). Prepare POC code designs for development and production.

Toolset: Proficiency with Microsoft data tools such as Azure Data Factory, Data Lake, SQL Databases, Data Warehouse, Synapse Analytics, Databricks, Purview, and Power BI.

Data Pipeline Orchestration: Design and advise on data pipeline execution to meet customer latency expectations, manage dependencies, and ensure data freshness with minimal disruption.

Data Governance: Ensure data security, access management, and data cataloging requirements are met.

Mentorship: Guide data modelers, analysts, and scientists in building models for KPI delivery, operational system interaction, and improved machine learning predictability.

Qualifications:

- University degree in computer science, software engineering, or a related data field (data engineering, analysis, AI, machine learning)
- 6-8 years of experience in data modeling, data warehouse design, and data solution architecture in a Big Data environment
- Experience with cloud-based data lake ingestion and data modeling projects
- Experience with relational and in-memory data models (star/snowflake schemas)
- Experience designing and implementing event-driven, near-real-time, or streaming data solutions for varied data types across platforms
- Strong knowledge of data model design for problem solving, data pipeline design patterns, data structure optimization, low-latency processing, and common data model creation

To apply, please send your resume to careers@cpus.ca or apply through the following link: https://lnkd.in/efDvK69G
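For readers less familiar with the dimensional-modeling terms above (facts, dimensions, star schemas), here is a minimal sketch in Python (pandas) of the fact/dimension split behind a star schema. This is not part of the official posting; all table, column, and sample values below are invented for illustration.

# Illustrative only: splitting a flat extract into a star schema
# (one fact table, one dimension). All names are hypothetical.
import pandas as pd

# A flat sales extract as it might land in a data lake.
sales = pd.DataFrame({
    "order_id": [1001, 1002, 1003],
    "order_date": ["2024-06-01", "2024-06-01", "2024-06-02"],
    "customer_name": ["Acme", "Beta Co", "Acme"],
    "region": ["East", "West", "East"],
    "amount": [250.00, 410.50, 99.90],
})

# Dimension table: one row per distinct customer, keyed by a surrogate key.
dim_customer = (
    sales[["customer_name", "region"]]
    .drop_duplicates()
    .reset_index(drop=True)
    .rename_axis("customer_key")
    .reset_index()
)

# Fact table: measures plus a foreign key into the customer dimension.
fact_sales = sales.merge(dim_customer, on=["customer_name", "region"])[
    ["order_id", "order_date", "customer_key", "amount"]
]

print(dim_customer)
print(fact_sales)

The same split is what keeps joins fast and models extensible: descriptive attributes live once in the dimension, while the fact table stays narrow and measure-focused.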

