
IIT Jobs Data Bank-Job Detail

Submit Resume For This Job (via email)
Follow us on LinkedIn or Twitter or Facebook
ID: 14172
Location: Downtown Manhattan, NY 10004
Skills: Data Warehouse (DW) (2 positions): data modeling, architecture, ETL, Azure Data Factory, Oracle, SQL
Rate: DOE
Job Type: Contract
Status: OPEN
If you are unable to click on the links above to submit your resume, you may email it to jobs@iit-inc.com with the subject line:

Subject=IIT Career Site/Resume for JobID=14172 (Data Warehouse (DW): ( 2 POS ) data modeling, architecture, ETL, Azure Data Factory, Oracle, SQL) in Downtown Manhattan NY 10004 (NON)

Estimated Length: 12 Months | Work Hours: 40.00 | Est. OT Hrs/Wk:

PLEASE NOTE THIS POSITION WILL ALLOW CONSULTANT TO WORK REMOTELY. HOWEVER, CONSULTANT WILL BE REQUIRED TO COME ONSITE AS NEEDED BY THEIR MANAGER/TEAM (AT THEIR OWN EXPENSE).

Requirements

SUMMARY OF ROLE & RESPONSIBILITIES:

Client is looking to onboard a Data Analyst.

The Open Data Coordinator coordinates the publishing of Client data to the New York State Open Data portal, as required under recently enacted state law (the Client Open Data Act, legislative bills S4625A/A1442B, and Section 1279-i of the Public Authorities Law). The role will develop a catalog of publishable Client data (covering both currently published and unpublished datasets), create a plan and schedule for publishing this data on the State's open data portal, and coordinate the initial and ongoing publishing of that data per the plan. In carrying out this work, the Analyst recommends changes to data pipelines and data storage to make publishing the data more efficient.

The role publishes data to the NYS Open Data Portal and also helps develop an internal Client data portal where the data will be housed.
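For context, the NYS Open Data portal runs on the Socrata platform, whose API accepts dataset rows as JSON lists of column-to-value records. The sketch below is a hypothetical illustration only (the dataset, columns, and endpoint comment are placeholders, not from this posting) of shaping a CSV extract into such a payload:

```python
import csv
import io
import json

def rows_to_upsert_payload(csv_text):
    """Convert a CSV extract into the list of column->value dicts
    that a Socrata-style JSON upsert endpoint expects."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [dict(row) for row in reader]

# Hypothetical extract; real column names would come from the dataset catalog.
sample = "ridership_date,station,entries\n2024-01-02,Fulton St,18250\n"
payload = json.dumps(rows_to_upsert_payload(sample))
# The payload would then be POSTed to the portal's resource endpoint
# (e.g. https://data.ny.gov/resource/<dataset-id>.json) with an app token.
```

The actual HTTP push is omitted here; in practice it requires portal credentials and a registered dataset ID.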

REQUIRED EXPERIENCE:

• Coordinate and collaborate with data owners to identify systems and datasets containing information that is of interest and value, and therefore useful to publish
• Perform data analysis, data profiling, and data mapping activities for new data sources
• Translate data requirements into technical documents
• Interface with Data Engineers to extract, transform, and load (ETL) data from a wide variety of data sources to ensure the timely and seamless delivery of data
• Develop scripts and queries to import, manipulate, clean, and transform data from different sources
• Work with Data Engineering to design and build tools that make our data pipelines more reliable, manageable, and resilient
• Ensure data consistency between production and analytical databases
• Work with Data Engineering to architect our Open Data Catalog
• Support and resolve data inquiries accurately and in a timely manner; triage, troubleshoot, and help remediate issues
• Contribute to documentation of processes and projects
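As an illustration of the script-and-query work described above (a made-up example, not part of the posting; the table and data are invented), a minimal Python cleaning pass over a staging table might look like:

```python
import sqlite3

# Hypothetical cleaning pass: trim whitespace, drop duplicate rows, and
# normalize empty strings to NULL before data is staged for publishing.
raw = [(" Fulton St ", "18250"), ("Fulton St", "18250"), ("Wall St", "")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staging (station TEXT, entries TEXT)")
con.executemany("INSERT INTO staging VALUES (?, ?)", raw)

cleaned = con.execute("""
    SELECT DISTINCT TRIM(station) AS station,
           NULLIF(TRIM(entries), '') AS entries
    FROM staging
    ORDER BY station
""").fetchall()
# cleaned -> [('Fulton St', '18250'), ('Wall St', None)]
```

The same trim/dedupe/null-normalize pattern carries over to Oracle or SQL Server, which this role would more likely target.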

What we're looking for:

• Bachelor's degree in Computer Science or equivalent experience
• Experience working with relational databases such as Oracle, Microsoft SQL Server, and MySQL, as well as data warehouses and data lakes
• Proficiency in SQL and Python
• Experience using Power BI, Tableau, or similar tools
• Experience with Microsoft Azure or Amazon Web Services
• Experience with large-scale data pipelines and ETL tools is preferred
• Commitment to strict confidentiality of sensitive customer data
• Experience working with Git (rebasing, merging, cherry-picking, conflict resolution, etc.)
• A team player with a solution-oriented attitude and both the technical and soft skills to get things done
• Excellent communication and interpersonal skills
• Excellent organizational skills and attention to detail
• Experience with Jira and Confluence for documentation

SKILLS REQUIRED:

• Ability to write technical documents and end-user documentation
• Documentation skills are a plus


 

 

 
About IIT:

Founded in 1995, IIT is a leading provider of Workforce Solutions to Government and Fortune-1000 organizations, and a winner of the Inc-500 award. IIT's core services include:

  • Consulting for projects / IT Outsourcing
  • IT staffing (Contract / Temporary / Contingent / Consulting)
  • Custom Workforce Solutions
  • Recruitment Process Outsourcing (RPO)
Headquartered in New York, IIT has over 400 consultants deployed at Client Sites. Other IIT highlights include:

  • Winner of Inc-500 award 2 consecutive years
  • Winner of Ernst & Young / USPAACC Fast-50 award 2 consecutive years
  • Winner of USPAACC Top-10 Award in the Northeast US
  • IBM Business Partner
  • Oracle Business Partner
  • Adobe Business Partner
  • NYSA Member - New York Staffing Association - Regional Affiliate of ASA / American Staffing Association
  • NYS MBE certified
Our Consultants love working for IIT:

  • Competitive compensation
  • W2 or C2C
  • Biweekly Direct Deposit for W2 Consultants
  • Visa and Green Card sponsorship opportunities for qualified individuals
  • Local contact for you to meet and talk to anytime (not someone sitting overseas in a different time zone)

    IIT is an Equal Opportunity Employer